We will analyse the top emoticons found in the messages of tweets, from the 'msgraw_sample.txt' data used in the tutorial of Week 7. Note this should be done on a Linux machine, or similar, where bash is supported.
The first sub-task is to extract the top 20 emoticons and their counts from the tweets. This must not be done entirely manually, and it can only be done using a single shell script. So you need to write a single shell script 'tweet2emo.sh' that will read 'msgraw_sample.txt' from stdin and produce a CSV file 'potential_emoticon.csv' giving a list of candidate emoticons with their occurrence counts. The important word here is "candidate": perhaps only 1 in 5 of your candidates will be real emoticons. Then you need to edit this file by hand, deleting non-emoticons and the less frequent entries, to get your final list, 'emoticon.csv'.
So for this task, you must submit:
(1) a single bash script, 'tweet2emo.sh': this must output, one per line, a candidate emoticon and its occurrence count, and cannot have any Python or R programmes embedded in it. More details on how to do this below.
(2) the candidate list of emoticons generated by the script, 'potential_emoticon.csv': a TAB-delimited CSV file with (count, text-emoticon) entries.
(3) the final list of emoticons selected, 'emoticon.csv': a TAB-delimited CSV file with (count, text-emoticon) entries; these should be the 20 most frequent emoticons from 'potential_emoticon.csv', but you will have to select yourself, manually by editing, which are actually emoticons. To do this, you may use an externally provided list of recognised emoticons, but it should not be used in step (2).
(4) a description of this task included in your final PDF report, describing the method used for the bash script, and then the method used to edit the file to get the file for step (3).
Your bash script might be anywhere from 2 to 10 lines and might require storing intermediate files.
The following single-line commands, which process a file from stdin and generate stdout, should be useful for this task:
perl -p -e 's/\s+/\n/g;'
— tokenise each line of text by converting runs of whitespace characters to newlines;
NOTE: this reportedly also works on Windows, where the newline character is different
perl -p -e 's/&gt;/>/g; s/&lt;/</g;'
— convert embedded HTML escapes back into the '>' and '<' characters
— you need to do this if you want to capture emoticons using the '<' or the '>' characters, like '<3'
sort | uniq -c | perl -p -e 's/^\s+//; s/ /\t/;'
— assumes the input file has one item per line
— sorts and counts the items and generates a TAB-delimited file with (count, item) entries
Specifically, in order to recognise potential emoticons, you will need to write suitable grep commands. Here are some examples:
grep -e '\^_\^'
— match lines containing the string "^_^"; the backslashes make the carets literal characters
grep -e '^\^_\^'
— match lines starting with the string "^_^"; the initial unescaped "^", called an anchor, says match the start of the line
grep -e '\^_\^$'
— match lines ending with the string "^_^"; the final "$", called an anchor, says match the end of the line
grep -e '^\^_\^$'
— match lines made exactly of the string "^_^", using beginning and ending anchors
grep -e '^0_0$'
— match lines made exactly of the string "0_0"
grep -e '^\^_\^$' -e '^0_0$'
— match lines made exactly of the string "^_^" or "0_0"; the two match strings are ORed
grep -e '^[.:^]$'
— match lines made of exactly one character from the set ".:^"
— the construction "[ ... ]" means "any one character in the set", but be warned some characters used inside have strange effects, like "-"; see next
grep -e '^[0-9ABC]$'
— match lines made of exactly one digit ("0-9" means the range "0" to "9") or one of the characters "A", "B", "C"
grep -e '^[-0-9ABC]$'
— match lines made of exactly one dash "-", one digit, or one of the characters "A", "B", "C"
— we place "-" at the front of the set to stop it meaning "range"
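A quick check of the bracket-expression behaviour described above, on a few one-token lines; note that the pattern only matches lines consisting of exactly one character from the set:

```shell
# '^[-0-9ABC]$' matches lines that are exactly one character from the
# set: a dash, a digit, or A/B/C. "AB" has two characters, so it fails.
matched="$(printf '7\nA\n-\nAB\n' | grep -e '^[-0-9ABC]$')"
printf '%s\n' "$matched"
```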
For more detail on grep see:
https://opensourceforu.com/2012/06/beginners-guide-gnu-grep-basics-regular-expressions/
But my advice is "keep it simple" and stick with the above constructs. Remember, you get to edit the final results by hand anyway. But if your grep match strings say "7" is an emoticon, they are probably not a strong enough filter.
We would like to compute word co-occurrence with emoticons. So suppose we have the tweet:
loved the results of the game ;-)
then this means that the emoticon ';-)' co-occurs once with each of the words in the list 'loved the results of the game'.
You can use the supplied Python program 'emoword.py', which takes a single emoticon as argument, reads 'msgraw_sample.txt' from stdin, and outputs a raw list of co-occurring tokens:
./emoword.py ':))'
Note the emoticon is enclosed in single quotes because the punctuation can cause bash to do weird things otherwise.
You can also put this in a bash loop to run over your emoticon list like so:
for E in ';)' ':)' '<3' ; do
echo running this emoticon $E
done
or counting them too using
CNT=1
for E in ';)' ':)' '<3' ; do
echo running this emoticon $E > $CNT.out
CNT=$(( $CNT + 1 ))   # this is arithmetic in bash
done
But be warned, bash does strange things with punctuation: it treats it specially, because punctuation plays a role in the language. So while you can have a loop starting like this:
for E in ';)' ':)' '<3' ; do
where you have edited in your emoticons and used the single quotes to tell bash each quoted text is a single token, if instead you try to be clever and read them from a file:
for E in `cat emoticons.txt` ; do
then bash will see the individual punctuation characters and probably fail to work the way you want.
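If you do want to keep the emoticons in a file, the usual safe idiom is a while-read loop, since each whole line becomes one token and bash never re-interprets the punctuation. A sketch, where 'emoticons.txt' is a stand-in file built on the spot:

```shell
# Safe alternative to `for E in $(cat emoticons.txt)`: read one
# emoticon per line, so punctuation survives intact.
printf ';)\n:)\n^_^\n' > emoticons.txt   # stand-in for your real file
CNT=0
while IFS= read -r E ; do
  echo "running this emoticon $E"
  CNT=$(( CNT + 1 ))
done < emoticons.txt
rm -f emoticons.txt   # tidy up the stand-in file
```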
For each emoticon in your list 'emoticon.csv', find a list of the 10-20 most commonly occurring interesting words. Report on these words in your final PDF report. Note that words like "the" and "in" are called stop words (see https://en.wikipedia.org/wiki/Stop_words) and are uninteresting, so try to exclude these from your report.
So for this task, you must submit:
(1) a single bash script, 'emowords.sh', as used to support your answers, perhaps calling 'emoword.py'; this should output, for each of your 20 emoticons, the most frequent words co-occurring with it (in tweets); use whatever format suits, as the results will be transferred and written up in your report.
(2) a description of this task included in your final PDF report, describing the method used for the bash script, the final list of selected interesting words per emoticon, and how you got them.
See if there is other interesting information you can get about these emoticons. For instance, is there anything about countries/cities and emoticons? Which emoticons have long or short messages? What sorts of messages are attached to different emoticons?
You can use the Python program 'emodata.py', which reads your 'emoticon.csv' file, takes 'msgraw_sample.txt' as stdin, and outputs selected data from the tweet file:
./emodata.py
Report on this in your final PDF report. Use any technique or coding you like to get this information. Your report should describe what you did and your results.
Consider the two files 'train.csv' and 'test.csv'.
Plot histograms of X1, X2, X3 and X4 in train.csv and answer: which variable(s) is (are) most likely to be samples drawn from normal distributions?
Fit two linear regression models using train.csv.
Model 1: Y~X1+X2+X3+X4
Model 2: Y~X2+X3+X4
Which model has higher Multiple R-squared value?
Now use the coefficients of Models 1 and 2 respectively to predict the Y values of test.csv, then calculate the Mean Squared Error (MSE) between the predictions and the true values. Which model has the smaller MSE? Which model is better? More complex models always have a higher R-squared, but are they always better?
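For reference, the MSE over the n rows of test.csv is the average squared residual, where y_i is the true value and ŷ_i the model's prediction for row i:

```latex
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2
```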
The work required to prepare data, explore data and explain your findings should be all your own. If you use resources from elsewhere, make sure that you acknowledge all of them in your PDF report. You may need to review the FIT citation style tutorial to make yourself familiar with appropriate citing and referencing for this assessment. Also, review the demystifying citing and referencing resource for help.
The following outlines the criteria which you will be assessed against:
The marks are allocated as follows:
Once you have completed your work, take the following steps to submit your work.