2021

Predicting wine rating based on reviews and general information about the wine

The purpose of this study was to analyse the text classification performance of two different classifiers, Naive Bayes and Random Forest, by predicting wine scores with the classifiers trained on different parts of the data set. The data set was a list of wine reviews from Kaggle containing various information about the wines and the reviewers. To achieve our goal, we used scikit-learn in Python and trained the models first on only the descriptions of the wines, then on the other features such as title, price and country of the wine, and finally on both. In this way we could analyse whether the text description is an important factor in the classification. We used 85% of the data as a training set and 15% as a test set, and computed the accuracy, recall, precision and F1-score of both classifiers. However, since the scores ranged from 80 to 100 (21 possible values), the accuracy and other metrics were low. We therefore rescaled the scores to the range 1 to 4 to see whether the metrics improve when the classifiers have fewer possibilities to choose from, which happened as expected.
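
A minimal sketch of the description-only setup, assuming a CSV with hypothetical column names 'description' and 'points' (the scikit-learn calls are standard; the file name and columns are assumptions):

```python
# Sketch: train Naive Bayes and Random Forest on wine descriptions only.
# File name and column names ('description', 'points') are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

wines = pd.read_csv("winemag-reviews.csv").dropna(subset=["description", "points"])

# 85% training / 15% test split, as in the project
X_train, X_test, y_train, y_test = train_test_split(
    wines["description"], wines["points"], test_size=0.15, random_state=0)

vectorizer = CountVectorizer()                 # bag-of-words features
X_train_bow = vectorizer.fit_transform(X_train)
X_test_bow = vectorizer.transform(X_test)

for model in (MultinomialNB(), RandomForestClassifier(n_estimators=100)):
    model.fit(X_train_bow, y_train)
    print(type(model).__name__)
    print(classification_report(y_test, model.predict(X_test_bow), zero_division=0))
```

Rescaling the 80-100 point range into four buckets can then be done with, for example, pd.cut(wines["points"], bins=4, labels=[1, 2, 3, 4]) before splitting.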

Real or fake? - Predicting job postings with text classification

Over recent years the number of fake job postings has increased. Previous studies have shown that text classification can be used to predict whether job postings are fraudulent. The aim of this project was to further investigate potential differences in the predictions depending on whether the job posting was written in a country with English as its native language or not. The dataset consisted of approximately 17 000 job postings, all written in English but from different countries. The classifiers used were Logistic Regression (LR) and Naive Bayes (NB), where NB was our baseline, and they were evaluated with accuracy and F1-score. Due to the imbalanced data we also conducted an experiment with under- and oversampled training data, and the two experiments were compared using the same metrics. Our main finding was that the Naive Bayes classifier performed better than the Logistic Regression classifier in some of the experiments and vice versa in others, so working with more than one classifier is favorable. Additionally, a possible explanation for our results is the imbalance of the dataset; however, the dataset reflects reality, where real job postings constitute the majority of the job postings that exist online.
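
The resampling experiment can be sketched with plain scikit-learn utilities; the column names ('text', 'fraudulent') and file name below are assumptions about the dataset layout, not the project's actual code:

```python
# Sketch: naive random over- and undersampling before training.
# Column names ('fraudulent') and file name are assumptions.
import pandas as pd
from sklearn.utils import resample

train = pd.read_csv("fake_job_postings_train.csv")

real = train[train["fraudulent"] == 0]
fake = train[train["fraudulent"] == 1]

# Oversampling: draw fake postings with replacement until the classes are balanced
fake_upsampled = resample(fake, replace=True, n_samples=len(real), random_state=0)
balanced_over = pd.concat([real, fake_upsampled]).sample(frac=1, random_state=0)

# Undersampling is the mirror image: sample the majority class down instead
real_downsampled = resample(real, replace=False, n_samples=len(fake), random_state=0)
balanced_under = pd.concat([real_downsampled, fake]).sample(frac=1, random_state=0)
```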

Cooking recipe generation with LSTM

The purpose of our study was to generate original cooking recipes and evaluate the results according to correctness, usefulness and whether they were good enough to pass for real recipes. For this, we used an existing implementation of a character-level LSTM model trained on over 100 000 recipes from various websites using the Recipe Box dataset. To evaluate the generated recipes, an e-survey was designed and conducted with 20 participants. The results showed that the participants could easily distinguish between human-written and AI-written recipes, with an overall accuracy of 94%, meaning that the program could not compete with the human recipes. 59% of the answers rated the machine-generated recipes as grammatically correct, but fewer than 10% thought that any of them made sense, showing that the main issue with the generative model is coherence rather than grammar. We found that evaluating text generation systems in an objective and replicable way is very difficult; automated methods of evaluation were considered inadequate for this task due to the open-ended nature of the generation.
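
For orientation, a character-level LSTM generator of this kind typically looks roughly like the sketch below (a generic Keras formulation, not the existing implementation the project used; the training file is an assumption):

```python
# Sketch of a character-level LSTM text model: predict the next character
# from a fixed-length window of previous characters. Not the project's code.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

text = open("recipes.txt", encoding="utf-8").read()   # assumed training file
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

window = 40
X, y = [], []
for i in range(0, len(text) - window, 3):
    X.append([char_to_ix[c] for c in text[i:i + window]])
    y.append(char_to_ix[text[i + window]])

# One-hot encode inputs and targets
X_onehot = np.eye(len(chars))[np.array(X)]            # (samples, window, chars)
y_onehot = np.eye(len(chars))[np.array(y)]

model = Sequential([
    LSTM(128, input_shape=(window, len(chars))),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X_onehot, y_onehot, batch_size=128, epochs=10)
```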

Male or Female rapper? - A text classification study

The aim of this project is to study how female and male rap and hip hop artists differ in their use of language by implementing and evaluating two text classification methods: Naive Bayes and Support Vector Machine (SVM). Data from a classification survey carried out by humans and a most-important-feature function provide the discussion with more nuance. The dataset is retrieved from kaggle.com and consists of song lyrics. The male songs are randomly selected, while the female songs are systematically selected because the website was limited. 20% of the dataset is used for training and 80% for testing. The dataset consists of a total of 30 000 words distributed equally over the two genders (female, male). Naive Bayes uses prior probabilities and log-probabilities and achieves an accuracy of 63%. SVM uses binary and multi-class classification and achieves a higher accuracy score of 88%. The two classifiers are also evaluated through F1-score, precision and recall. The words produced by the most-important-feature function show differences in word use between female and male artists. The classification survey results in an accuracy of 80%. The comparison between the classifiers suggests that, in this project, male and female rappers use similar words but differ more in sentence construction.
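
One way to implement a most-important-feature function of the kind mentioned above is to rank the vocabulary by the difference in per-class log-probabilities of a trained multinomial Naive Bayes model; a sketch under that assumption (the lyric strings are placeholders, and this is not necessarily the project's exact function):

```python
# Sketch: list the words that most strongly indicate each class for a trained
# MultinomialNB. The training data below is a placeholder.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

lyrics = ["placeholder female labelled lyrics", "placeholder male labelled lyrics"]
labels = ["female", "male"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(lyrics)
nb = MultinomialNB().fit(X, labels)

def most_important_features(nb, vectorizer, n=10):
    vocab = np.array(vectorizer.get_feature_names_out())
    # Difference between the two classes' log P(word | class)
    diff = nb.feature_log_prob_[0] - nb.feature_log_prob_[1]
    return {
        nb.classes_[0]: vocab[np.argsort(diff)[-n:]],   # strongest for class 0
        nb.classes_[1]: vocab[np.argsort(diff)[:n]],    # strongest for class 1
    }

print(most_important_features(nb, vectorizer))
```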

Demographic Cinematic: Classifying the Highest Rating Demographic Based on the Movie Plots

There is a clear commercial value in predicting how different demographics respond to and subsequently rate movies. Being able to identify which language data correlate with the types of movies that go down well with a specific age demographic could aid in determining the success of a movie. The question we wanted to answer in this project was: can we adequately predict how different gender and age group demographics on IMDb rate movies based on plotline summaries, using text classification? To examine this, we created a custom dataset using the IMDbPY API, consisting of data points extracted from the IMDb database. The dataset contains a unique ID for each movie, its plotline summary, and how six different demographics rated the movie. This was then put through a Naive Bayes model and a Support Vector Machine model. After some initial experiments which yielded low results compared to our baseline* (NB accuracy: 22-36%, SVM accuracy: 19-30%), we created a binary classification task of only male and female ratings, resulting in higher scores (NB accuracy: 48-75%, SVM accuracy: 54-73%). *Most Frequent Class, accuracy: 35%

Raters gonna rate

The main idea of our project was to classify movie ratings based on their descriptions using text classification. The aim of the project was to deepen our knowledge of text classification and evaluation measures. The dataset used was retrieved from Kaggle and consisted of information about 85,855 movies. Since our purpose was to predict the rating of a movie based on its description, we only retrieved the titles, descriptions and ratings of the movies. To analyse how well our system would work on descriptions and ratings not retrieved from IMDb, we also used a dataset from a similar website, Rotten Tomatoes, consisting of information about 16,638 movies. We implemented a Most Frequent Class (MFC) baseline, a Naive Bayes (NB) classifier and a Support Vector Machine (SVM) classifier on both datasets. The classifiers were evaluated by computing the accuracy, precision, recall and F1 score for both classes, and the results were compared and discussed. Both NB and SVM performed better than the MFC baseline, and NB generally performed better than SVM, although the difference was not significant.
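
A Most Frequent Class baseline of the kind used here maps directly onto scikit-learn's DummyClassifier; a minimal sketch in which the data lists are placeholders rather than the project's actual corpora:

```python
# Sketch: a Most Frequent Class (MFC) baseline next to NB and a linear SVM.
# The example descriptions and ratings below are placeholders.
from sklearn.dummy import DummyClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

train_descriptions = ["a gritty crime drama about revenge", "a light romantic comedy in paris"]
train_ratings = ["high", "low"]
test_descriptions = ["a tense thriller about a heist"]
test_ratings = ["high"]

classifiers = {
    "MFC": make_pipeline(TfidfVectorizer(), DummyClassifier(strategy="most_frequent")),
    "NB":  make_pipeline(TfidfVectorizer(), MultinomialNB()),
    "SVM": make_pipeline(TfidfVectorizer(), LinearSVC()),
}
for name, clf in classifiers.items():
    clf.fit(train_descriptions, train_ratings)
    print(name, accuracy_score(test_ratings, clf.predict(test_descriptions)))
```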

Predicting restaurant review scores using Naive Bayes

In the age of social media, customer experiences have become more valuable to businesses than ever before. This has led to an increase in opinion mining among system developers and business owners. In line with this, we have implemented a classifier that predicts the overall rating of a restaurant based on its written reviews. To achieve this, we web scraped 3986 reviews from ten different McDonald's restaurants and trained on them by mapping each word to a number from 1 to 5 based on the star rating of the review. To classify the reviews and predict an overall star rating, we implemented a Naive Bayes model. To determine how accurate our system is, we measured its precision, recall and accuracy and compared the results with gold standard data, which in this case is the overall star rating of the restaurant. The results showed a small difference between the model's predictions and the actual ratings, even when classifying custom reviews written by us, though our classifier tends to rate the restaurants lower than the gold standard. This could be a result of the high proportion of negative ratings in the training set, which dominates some word frequencies in that class.

David’s secret power chord: Classifying song genre from lyrics

The aim of this project was to investigate song lyrics and to test whether we could identify the genres of songs using only the lyrics and machine learning. We tested this on a corpus of ~500 songs scraped from Genius.com using code written in Python. The models used in this project were Naive Bayes and Support Vector Machine (SVM), evaluated using accuracy, precision, recall and F1. The SVM gave us an accuracy of 42.27% and Naive Bayes an accuracy of 38.14%. Given that the dataset included 5 different genres, these results are better than chance. Both models had a better F1 score for rap than for any other genre. Our results suggest that identifying genres from lyrics alone is insufficient, which is in line with previous research. Companies have implemented similar machine learning systems to identify the genres of songs, but they analyze much more than just the lyrics.

Evaluating popular machine translation systems

The aim of this project was to compare four popular online machine translation systems: Google Translate, Bing Translate, DeepL Translate and Yandex Translate, using Google Translate as a baseline since it is probably the most popular. First, with each tool, we translated more than 100 texts into four defined languages (English, Swedish, German and French) using two methods: Direct Translation, a simple translation from language A to language B, and Circle Translation, a circular translation from the original language to Japanese, then to Polish, then back to the original language. The latter enabled us to do automated, objective evaluation. Second, we implemented two methods to evaluate the quality of the translations: objective evaluation, based on algorithms for word and sentence accuracy (using Levenshtein distance), which was only applied to the circle translations; and subjective evaluation, where users graded the quality of the translations according to our own predefined scale. Compared to the baseline, Google Translate, all the other translators did a better job with direct translations. For the circular translations, Bing Translate did worse than the baseline, while the other two did better.
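
The objective part of the evaluation can be sketched with a standard Levenshtein (edit) distance; the normalised similarity score shown here is an illustrative choice, not necessarily the exact measure used in the project:

```python
# Sketch: Levenshtein distance between the original text and its circle
# translation (original -> Japanese -> Polish -> original), normalised to [0, 1].
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(original: str, round_tripped: str) -> float:
    dist = levenshtein(original, round_tripped)
    return 1 - dist / max(len(original), len(round_tripped), 1)

print(similarity("the cat sat on the mat", "a cat was sitting on the mat"))
```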

2020

Bellman or the Bible? Using a text classifier

In our project we decided to classify data from two different corpora: the first containing text from a Bible translation from 1873, the second containing Bellman's works from his lifetime, 1740-1795. The classifier then determined whether a piece of data belonged to the corpus of the Bible or of Bellman's work.

The classifier used was Naive Bayes. We used the annotated corpora Bible and Bellman from Språkbanken (the Swedish Language Bank). Both corpora were randomly divided into training and test data and assigned a gold standard class based on which corpus the data belonged to. The classifier was then evaluated by examining accuracy, precision and recall. The results gave us an answer to how well, and in what cases, the classifier predicts the same class as the gold standard.

Our results varied in certain respects, and we refrained from drawing any conclusions until further revisions, implementations and refinements are made to the script. To illustrate, we had a precision of 98.41% for the Bellman class and 72.21% for the Bible class. This is further exemplified by our recall values, which were 61.60% for the Bellman class and 99.00% for the Bible class. The cause of this is as yet unknown and, in our opinion, deserves further investigation.

Text classification of song lyrics over time

The aim of this project was to predict in which decade a song was produced, based on its lyrics, using text classification methods. This was achieved using a dataset containing the top 100 songs for each year on the Billboard charts between 1964 and 2019. The classification methods were two variants of Naive Bayes.

Our results were rather poor, with one classifier achieving an accuracy of 12.5% and the other 31.3%. The classifier with 12.5% was a Naive Bayes that used prior probabilities and log-probabilities, where the lyrics used for testing were treated as binary features. The other classifier, with 31.3%, was a Naive Bayes with additive k-smoothing and MLE that also used log-probabilities. This classifier grouped the years into the categories old (60s-80s), mid (80s-00s) and new (00s-10s).

We used a different dataset as test data, which did not reflect our training data. This most likely affected our results, and a corrective measure for future testing would be to use part of the training data as test data. The results show that the classifier that groups the release years into broader categories performs better than the one that works with individual decades.

Classifying movies' premiere decade based on their plot

The main idea of the project was to classify movies by the decade in which they premiered, based on their plots, using text classification. This was achieved by implementing a Naive Bayes classifier, a Support Vector Machine (SVM) classifier and a Random Forest classifier.

Datasets were found and downloaded from the IMDb website containing information about movie title, release year, number of ratings and more. We filtered these datasets and extracted only the movies with at least 50 ratings and a release date within certain years. Movie plots for the extracted movies were then retrieved using the IMDbPY package for Python. We did this with two separate datasets: one with the classes 70s, 90s and 10s containing 24908 movie plots, and another with the classes 90s, 00s and 10s containing 28676 movie plots. In both datasets the classes were evenly distributed.
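
Plot retrieval with IMDbPY roughly follows the pattern below; the movie ID is just an example, the exact fields available vary per title, and this is a sketch rather than the project's retrieval script:

```python
# Sketch: fetch a movie's plot summaries with IMDbPY.
from imdb import IMDb

ia = IMDb()
movie = ia.get_movie("0133093")        # IMDb ID without the leading "tt" (example)
title = movie.get("title")
year = movie.get("year")
plots = movie.get("plot", [])          # list of plot summaries (may be empty)
print(title, year)
print(plots[0] if plots else "no plot available")
```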

The classifiers were evaluated by computing accuracy, precision, recall and F1 score, the last three with respect to the defined classes, and the results were then compared. The SVM was found to perform best with both datasets. The results for the dataset containing the classes 70s, 90s and 10s were slightly better than for the other dataset.

Language development: a study of how a Naive Bayes classifier can predict political party affiliation using historical data

The aim of this project was to evaluate how language use has evolved over time and how this evolution has affected political discourse. In political debates, it is interesting to see how different parties use language relative to their political positions. The focus of this project has been the United Kingdom, and the left, right and nationalist wings.

We worked with a Naive Bayes classifier, training it on historical debates and seeing whether it could accurately predict parties based on current affairs. The investigated text documents were categorized with classes and then evaluated based on the similarities between their word use and the corresponding class over time. We conducted the evaluation by measuring accuracy, recall and precision using a confusion matrix. Data from 1803 onwards, formatted as XML/text files, was available for training. The test data consisted of text files from modern debates from the 21st century.

According to the obtained results, it is not possible to predict political affiliation with historical data. As an example, when debates on World War II were used as training data and speeches on the Iraq War as test data, the classifier always predicted right-wing parties. One reason for this could be that in the older training data the parties were politically much more similar and, as such, used similar language.

Classanthony Reviewthano: Classifying and predicting Music album Reviews

Music critic and internet personality Anthony Fantano has made over 2000 album and EP reviews during the last ten years. His reviews almost always contain a score between 0 and 10 depending on his impression of the album. A large portion of these reviews with their respective scores can be found on Kaggle, and we used this as our training and testing data.

The aim of the project was to investigate how well different classifiers could identify what score an album got based on the review. We chose the classifiers SVM, Random Forest and multi-layer perceptron and evaluated them with regard to accuracy. In order to get a more accurate classification, the scores were divided into two classes: positive (scores 6 to 10) and negative (scores 0 to 5) reviews. The results were: Most Frequent Class baseline 73%, multi-layer perceptron 81%, Random Forest 77%, Support Vector Machine 94%.

Judging a book by its cover. Multi-class classification with SVM and Naive Bayes

The aim of our project was to categorize book genres from their descriptions. To do this we implemented a Naive Bayes classifier and a Support Vector Machine (SVM) that would predict the genre of a book based on its description. We also implemented a Maximum Likelihood Estimation (MLE) classifier as our baseline. Furthermore, we compared the two classifiers, Naive Bayes and SVM, to our baseline (MLE) so that we could determine which of the three classifiers worked best.

While comparing the classifiers we could also speculate about their pros and cons. We structured the data to include only the genre and the description of each book. We trained on 80% of the data and used the remaining 20% for testing. The SVM had the best overall score but had difficulty distinguishing between certain classes, as its F1-score for poetry was 0.

Movie genre classification given plot

Text classification for multilabel datasets poses challenges both in terms of how to train models and how to evaluate them. The aim of this project was to gain a more profound understanding of how different classifiers work with multilabel data and how these classifiers compare against each other.

The data set, a collection of 117 352 movies and series, was scraped from the Internet Movie Database (IMDb) in December 2017 and contained movie titles, plots and one or more genres per movie. The classifiers implemented were Naive Bayes (NB), Support Vector Machine (SVM), multi-label Support Vector Machine (SVMm) and Logistic Regression (LR). The various classifiers had different challenges and strengths. NB and SVM could only be trained on and predict a single label, which led to loss of information and difficulties in evaluating the models. SVMm and LR could handle multiple labels, but are more difficult to understand and implement.
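
A sketch of how multi-label variants like SVMm and LR can be set up in scikit-learn with a one-vs-rest wrapper over binarised genre labels (the sample plots and genres below are made up, and the project's actual setup may differ):

```python
# Sketch: multi-label genre classification with one-vs-rest SVM / LR.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

plots = ["a detective hunts a serial killer",
         "two friends go on a road trip and fall in love"]
genres = [["Crime", "Thriller"], ["Comedy", "Romance"]]   # one or more per movie

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(genres)                 # binary indicator matrix
vec = TfidfVectorizer()
X = vec.fit_transform(plots)

svm_multi = OneVsRestClassifier(LinearSVC()).fit(X, Y)
lr_multi = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Predicted label sets for a new plot (may be empty if no genre fires)
new = vec.transform(["a killer is on the loose in the city"])
print(mlb.inverse_transform(svm_multi.predict(new)))
```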

We also found that performance measures varied widely depending on how we counted correct predictions. The overall conclusion was that the effectiveness of a classifier depends both on the algorithm itself and, to a great extent, on the context: the data set, the evaluation criteria and the chosen parameters.

No title

This project's goal was to generate news article headlines from scratch using different technologies and to compare the results both between those technologies and with real news headlines.

To achieve this, we compared two different models, an n-gram model (5-grams) and an LSTM (with 128 and 256 nodes), trained on a pre-existing database of a million headlines from the Australian Broadcasting Corp. We then generated some plausible headlines (three 5-gram, three LSTM 128 and three LSTM 256), picked 11 real ones, and asked 24 people how likely each one was to be real on a scale of 0 to 10. We then checked how close the n-gram and LSTM scores were to the real ones.
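
The n-gram side of the comparison can be sketched as a simple Markov generator: count which word follows each four-word context in the headlines and sample from those counts. This is a generic sketch with placeholder data, not the project's exact model:

```python
# Sketch: generate headlines with a word-level 5-gram (order-4 Markov) model.
import random
from collections import defaultdict, Counter

headlines = ["police probe house fire in sydney suburb",
             "council approves new housing development plan"]   # placeholder data

counts = defaultdict(Counter)
for line in headlines:
    words = ["<s>"] * 4 + line.split() + ["</s>"]
    for i in range(len(words) - 4):
        context, nxt = tuple(words[i:i + 4]), words[i + 4]
        counts[context][nxt] += 1

def generate(max_len=15):
    context = ("<s>",) * 4
    out = []
    while len(out) < max_len:
        choices = counts.get(context)
        if not choices:
            break
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        if nxt == "</s>":
            break
        out.append(nxt)
        context = context[1:] + (nxt,)
    return " ".join(out)

print(generate())
```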

The results showed that the 256-node LSTM model fared best, with the n-gram model in second place and the 128-node LSTM in third. The score of the 256-node LSTM model was still lower than the score for the real headlines. This raises the interesting question of when text generation with LSTM models starts to outperform n-gram models. Even though this test was made with a quite small sample of generated text, it lays a foundation for further research on the effectiveness of text generation with LSTM models compared to n-gram models.

Predicting the decade of a song from Melodifestivalen

The aim of this project was to create a classifier that would predict the decade of a song that competed in the Swedish song contest Melodifestivalen, and whether the song won or not.

The corpus we compiled contained lyrics from Melodifestivalen finals from 1960 to 2019. We created a Naive Bayes classifier and then implemented a Support Vector Machine (SVM) classifier from the scikit-learn library. We evaluated the two classifiers on accuracy, precision and recall, and also compared them to a Most Frequent Class baseline. We then compared the results from the Naive Bayes classifier with those from the SVM.

The results for predicting the decade did not differ very much between Naive Bayes and SVM: Naive Bayes had higher precision (53%), while SVM had higher accuracy (50%) and recall (43%). The results for predicting the winner also showed little difference between the two classifiers, though SVM scored slightly higher than Naive Bayes on every metric, with accuracy 96%, precision 48% and recall 50%. The two classifiers performed better than the baseline in most categories.

Predicting stock market outcomes through press releases

The aim of our project was to look at previous press releases, connect them to stock market changes and train a text classifier to predict whether a stock goes up or down in value. To train the model, we first used a web crawler to download roughly 15,000 press releases from avanza.se and ran them through a tokenizer to filter out unrelated words and numbers. We then used the Python library yfinance to look up the stock market changes for the dates the press releases were published, and connected the tokens in each press release to its stock market change in the form of a positive or negative label.

Our text classifier uses the Support Vector Machine (SVM) algorithm, and we split the training data 50/50 between positive and negative press releases, as we noticed this made the results less biased towards either side. The classifier reached about 77% accuracy. We also implemented an investment simulation that invested in stocks based on the press releases it encountered; this test showed that our assumptions about how stock value changes relate to press releases were naive.
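
Labelling a press-release date as positive or negative from the price move on that day can be sketched with yfinance roughly as below; the ticker, date and the close-versus-open labelling rule are illustrative assumptions, not necessarily the rule used in the project:

```python
# Sketch: label a press-release date as 'positive' or 'negative' from the
# stock's close-vs-open move on that day, using yfinance.
import datetime as dt
import yfinance as yf

def label_for_date(ticker: str, date: str) -> str:
    day = dt.date.fromisoformat(date)
    next_day = (day + dt.timedelta(days=1)).isoformat()
    hist = yf.Ticker(ticker).history(start=date, end=next_day)
    if hist.empty:
        return "unknown"          # no trading that day
    change = hist["Close"].iloc[0] - hist["Open"].iloc[0]
    return "positive" if change > 0 else "negative"

# Example call (ticker and date are placeholders)
print(label_for_date("ERIC-B.ST", "2021-03-01"))
```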

Assessing Google Translate on Idiomatic Expressions

Idioms are not only a part of language but also a cultural tool of communication. Categorized as formulaic language along with proverbs and expletives, an idiom is a phrase or expression that carries a meaning other than the literal meaning of the words it consists of. When translating idioms with machines or AI, one may therefore experience difficulty due to the ambiguous nature of idiomatic expressions.

In this project, we have studied Google Translate and its ability to translate English idioms into Swedish, Spanish, Chinese and Japanese. A gold standard based on 80 English idioms was created, which also contains matching idioms (when applicable) or their semantic meanings from the other languages.

We found that the languages based on the Latin alphabet scored higher on overall semantic accuracy, with Swedish at 50% and Spanish at 31.25%, compared to Chinese at 27.50% and Japanese at 23.7%. When only cases with matching idioms were considered, the results were similar for Chinese, with only 19.35% accuracy, but higher for Japanese, with 36.84%. Swedish and Spanish remained higher, with 47.22% and 43.75% respectively. In conclusion, Google Translate struggles with idioms, but given the difficulty of the task the results are still impressive.

2019

Sport or business? Predicting the category of a news article using a text classifier

In this project we decided to classify news articles into two and into five separate categories. We did this by building a text classifier in which we implemented and compared a total of 5 different models: Naive Bayes, Logistic Regression, Support Vector Machine (SVM), a neural network and Random Forest.

The training data was a CSV file that we found on Kaggle, divided into 5 categories (sport, business, entertainment, politics, tech) and consisting of 2226 news articles. We tested the classifiers with another set of data, divided into 2 categories (sport, business) and consisting of 2692 news articles. We also tested our trained text classifier with test data we created ourselves, consisting of a total of 10 articles, 5 in the business category and 5 in the sport category.

The results we obtained indicate that SVM is the most suited to these kinds of tasks. Naive Bayes also performed well. The linear classifier had consistently good results in every test. The importance of how the data is treated before testing also became apparent.

Detecting sarcastic tweets with a Naive Bayes classifier

The aim of this project was to build a Naive Bayes classifier that could detect whether a tweet was sarcastic or not. The data we used was collected both from Twitter and from the corpus Twittermix at Språkbanken (the Swedish Language Bank). We made the assumption that tweets with the hashtag "#sarkasm" (sarcasm in Swedish) were sarcastic and that tweets with any other hashtag were not. In total we used 1099 tweets of each class. We then evaluated the classifier by computing accuracy, precision and recall. In addition to these measures we ran a questionnaire to validate the data and, if deemed valid, establish a point of reference. The results showed that the classifier had an accuracy of around 85%, while the user tests gave 83% correct answers when testing on the same 30 tweets. Our conclusion from this project is that classification of sarcasm is a very difficult task and that the results of the classifier depend heavily on the amount and quality of the data.

Google Translate and idioms, a complicated story

We investigated how Google Translate handles the translation of Swedish idiomatic expressions into English and in the opposite direction. The study was carried out by taking a randomised sample of 75 idiomatic expressions from the site http://www.paengelska.com and then checking whether the translations given by the site agree with how Google Translate translates the expressions. Since the results of the translations varied, we analysed them according to four parameters: idiomatically correct, acceptably correct, literal translation (incorrect), and completely incorrect. The results for translations from Swedish to English showed that 32% were translated idiomatically correctly, 9.3% were translated acceptably but not entirely correctly, 53.3% of the expressions were translated literally without regard to the underlying semantics of the idiomatic expression (that is, an incorrect translation), and the remaining 5.3% were incorrectly translated. The translations from English to Swedish gave the following results: the proportion of correctly translated idiomatic expressions was 32.9%, the proportion translated literally but incorrectly with respect to the idiomatic expression was 63.2%, and 3.9% were incorrectly translated. To achieve the best results, we conclude that linguists working together could contribute a compilation of the correct idiomatic expressions for Google Translate, since Google Translate's underlying algorithms are deficient on the idiomatic front.

Implementation and analysis of a voice assistant in Dialogflow

In this project, a voice assistant was implemented with Google's Dialogflow framework. The assistant's name was LiU-bot and its purpose was to answer questions in Swedish related to Linköping University and the users' studies. The goal was to enable the bot to answer questions such as 'When is my next lecture?', 'What computers are available?' and 'I feel stressed, how do I get help?'. The finished bot was tested with a small group of students (N = 4) to analyze what worked well and what types of issues occurred in conversations with the bot. A usability test was performed where the participants carried out tasks using the bot, followed by an evaluation survey. The data analyzed consisted of notes from the test sessions, the question and answer data saved in Dialogflow, and the survey results. The bot performed well with general factual questions about the university and the locations of buildings. Some of the issues found were that the bot answered specific questions with superfluous information, had trouble hearing some requests, and that it was hard for the users to estimate the scope of the questions that could be answered. All participants enjoyed using the bot and understood it, but had varying experiences of how well the bot could answer their questions.

Rap song generation with LSTM

The aim of this project was to compare and evaluate two different models for text generation. We used a Markov quadrigram model as a baseline and compared it to a character-based LSTM model. The models were trained on a dataset containing roughly 400 50 Cent lyrics with around 800,000 tokens. A subjective evaluation was conducted in a user survey (N = 32) where participants were asked to rate the likelihood (scale: 1–5) of three texts being written by a human. One text was real and the other two were generated by the models. Our results show no difference between the two models (mean = 3), while the real text received a higher likelihood of being written by a human (mean = 4). Our conclusion is that, for the purpose of generating 50 Cent lyrics, a character-based LSTM model with limited training data will not outperform a simple quadrigram model.

Change in language on Reddit over time: a visualization

The purpose of this project was to study changes in language over time in specified subreddits on the internet forum reddit.com. This was mainly done by visualizing the features of posts as 2D points over a certain timespan, using a custom dynamic t-SNE algorithm. The project included 4 main steps: gathering data with a web crawler, applying the metrics to the data, performing dimensionality reduction and clustering of said data, and finally analyzing the results. The metrics we used to study language change included (among others) sentiment analysis, average post and sentence length, and verb-to-noun ratio. To visualize our results we investigated a selected number of subreddits, including r/Avicii, r/conspiracy and r/theDonald, over different timespans. We found that subreddits whose content can be related to real events were the most effective to analyze; for example, we noticed distinct differences in our visualization of clusters on the days around Avicii's death.

Predicting genre of movies from plot description using text classification

The goal of this project was to predict the genre of a movie based on its synopsis using text classification. To do this we used the methods Naive Bayes and Support Vector Machine (SVM). For Naive Bayes we implemented two different solutions, one using scikit-learn's learning and prediction algorithms and one where we wrote the necessary learning and prediction code from scratch. To have a baseline to compare against, we also used Maximum Likelihood Estimation (MLE). To train the models we downloaded a dataset from Kaggle consisting of information about roughly 10000 movies; 80% of the data was used for training and 20% for testing. The results were not as good as expected. Our initial hypothesis was that the accuracy would be quite high, but we were not aware of how difficult it would be to achieve the desired results given that each movie had multiple classes (genres). Therefore, when training and testing, we limited the class to the first genre in the genre list.

Fantastic recipes and how to generate them

The purpose of this project was to generate credible recipes with lists of ingredients and instructions on how to make them, and to evaluate the feasibility of using n-gram models and templates to do so. To create ingredient lists, the probabilities of ingredients occurring together were calculated from a dataset of one million recipes. These probabilities were used in an n-gram model to generate a credible ingredient list. After the ingredient list was generated, every ingredient was linked to a 'method' that frequently occurred with that ingredient in the instruction dataset. The ingredients and methods were combined in modular templates to form an entire recipe, generated from a main ingredient that the user types into the program. To evaluate the generated recipes, a survey was sent out in which participants evaluated real and generated recipes without knowing which ones were real, and were asked about their thoughts on the recipes. The results showed that real recipes looked more legitimate and had clearer instructions than generated recipes, while there was no significant difference between the generated and the real recipes in perceived tastiness or interest in using them.

2018

Language identification using n-gram models

For this project, different types of n-gram models for language identification were created. All models were tested on two different languages, with the exception of the character-based model, which was tested on four different languages. Initially a word-based unigram model was implemented, taking unknown words into consideration. Two different types of bigram models were then created based on the structure of the unigram model: one word-based and one character-based, both using additive smoothing. The models were trained by computing probabilities from the frequencies of the n-grams in the training data. The results showed that the character-based bigram model performed best of the three models.
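
A character-based bigram identifier with additive smoothing can be sketched as below: train one smoothed bigram model per language and pick the language with the highest log-probability. The training snippets are tiny placeholders, and the details may differ from the project's models:

```python
# Sketch: character-bigram language identification with additive smoothing.
import math
from collections import Counter

def train(text, k=1.0):
    bigrams = Counter(zip(text, text[1:]))
    unigrams = Counter(text)
    vocab = len(set(text)) or 1
    def logprob(sample):
        total = 0.0
        for a, b in zip(sample, sample[1:]):
            # additive smoothing: P(b|a) = (count(a,b) + k) / (count(a) + k*V)
            total += math.log((bigrams[(a, b)] + k) / (unigrams[a] + k * vocab))
        return total
    return logprob

models = {
    "en": train("this is a small sample of english text used for training"),
    "sv": train("detta är ett litet exempel på svensk text som används för träning"),
}

def identify(sample):
    return max(models, key=lambda lang: models[lang](sample))

print(identify("ett kort exempel"))   # likely 'sv'
```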

Generating inspirational quotes using n-grams

The purpose of the project was to implement an n-gram model that generates inspirational quotes and to evaluate the quality of the quotes through user studies. To gather training data for the model we implemented programs that 'scraped' websites containing quotes tagged as inspirational. Approximately 2900 quotes were used for training, and for comparing performance we used both a bigram model (N=2) and a trigram model (N=3). The model calculates probabilities for the occurrence of words given a certain context and uses these probabilities to generate new quotes. In the user studies, 30 quotes (10 bigram-generated, 10 trigram-generated, 10 real) were presented on a website and the participants were asked to identify whether each quote was generated or real. The results showed marginally better performance for the trigram model compared to the bigram model: 34% of the bigram-generated quotes were able to pass as real quotes, and 42% of the trigram-generated quotes managed to do so. Interestingly enough, 18% of the real quotes were incorrectly classified as generated. Our conclusion is that n-gram models are very time-efficient but might not produce reliable results.

Generating hip hop lyrics using an n-gram model

The aim of our project has been to evaluate the difficulties and feasibility of using n-gram models to generate text, in particular song lyrics in the hip-hop genre. To do this evaluation we created a computer application that produced hip-hop lyrics and afterwards tested the results in a smaller experimental study. In test 1 the participants were asked to read ten computer-generated lyrics and decide which genre they thought the lyrics belonged to. In test 2 they were presented with ten lyrics, six of them computer-generated and four of them 'real' lyrics taken from the training data, and were asked to decide which were computer-generated and which were real. The participants were in most instances able to tell which lyrics were computer-generated.

Implementing our idea of creating hip-hop lyrics through a computer program using an n-gram model was feasible. The difficulties come into play when the generated lyrics have to be realistic and convince the reader, which is the goal of the application; doing this would require more than just an n-gram model. We can conclude that the n-gram model is easy to implement but limited when applied in practice by itself.

Text classification of lyrics

The purpose of this project was to use text classification methods to predict the genre of a song based on its lyrics. The classification method used was a multinomial Naive Bayes classifier, with uni- and bigrams as tokens, which we compared with a simple baseline that always guessed the genre 'rock'. The data consisted of song lyrics tagged with (among other things) genre, which was what we were interested in. The original data contained a lot of corrupted lyrics, which were removed. A majority of the data was tagged with the genre rock; this was corrected so that we had the same number of songs for each genre. However, after correcting this, much of the data was discovered to be wrongly labeled, which has probably affected our results. The genres we used are: hip-hop, rock, pop, metal, jazz, indie, folk, country, electronic, R&B and 'other'. The results show that the classifier makes better predictions with unigrams than with bigrams.

Idioms in Google translation machine

The purpose of this project was to examine how Google Translate translates idioms from both Swedish and Chinese into English. We created a list consisting of several English idioms, which served as the gold standard, and their Swedish and Chinese equivalents, which served as the test languages. We examined the translated idioms, analyzed their semantic meanings and observed whether they changed during translation. To be able to analyze this, we investigated how Google Translate works and tried to connect the differences that arose in translation with the limitations of Google Translate. The results showed that the idioms rarely lose their semantic meaning (Swedish 80%, Chinese 52%). They also showed that the idioms were rarely translated completely correctly in both languages; this happened in only 12% of cases (6/50). From Chinese to English, 16% (8/50) of the idioms were translated correctly, and from Swedish to English, 36% (18/50). In summary, the results show that few idioms were translated correctly, and most only somewhat 'semi'-correctly; Google Translate's limitations account for some of the reasons why.

To click or not to click: Generating awesome clickbaits

Our goal with the project was to produce new clickbait headlines and evaluate them. To do this we used n-grams as a generator and Naive Bayes as a classifier. We had access to two different datasets, one with headlines defined as clickbait and one with ordinary headlines. The result of the project was the new headlines that the model generated and the probability that they belonged to the clickbait class. We also evaluated the classifier's accuracy, recall, precision and F1 and compared it to the 'most frequent class' baseline. Our model showed very good results, mainly because the datasets were so distinct and the classification hence very easy to perform. Here are some examples of headlines it produced: 'Vegan desserts that are actually mind-blowing optical illusions' and 'Celebrity twitter beefs of 2015'.

Trumpet

We have developed an n-gram language model that we call Trumpet. We trained the language model on over 34,000 tweets by Donald Trump, and with this trained model we then generate texts, somewhat similar to tweets, that are supposed to mimic the characteristics of Donald Trump. However, the main goal of our work is not just to generate authentic-looking tweets, but also to evaluate different smoothing techniques and model orders to see how they affect the end result, either intrinsically through entropy or extrinsically by looking at the generated texts.

The smoothing techniques we use are: no smoothing; add-K smoothing (for bigrams); and Witten-Bell smoothing (for bigrams). The results show that as the model order increases, the generated texts become more coherent and closer to something Donald Trump would write. The problem that arises with a higher model order is that the texts are not very creative; instead they often copy whole sentences from existing tweets. Also, texts generated with the two smoothing techniques are often rather incoherent, as smoothing does not take into account what an English sentence should look like. With regard to entropy, when evaluating without smoothing we find that the entropy decreases rather drastically between orders one and five but stagnates after that.
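
The add-K smoothing and entropy evaluation can be sketched for the bigram case as below; this is a generic formulation with toy data, not the Trumpet code itself:

```python
# Sketch: add-K smoothed bigram probabilities and per-token cross-entropy.
import math
from collections import Counter

def train_bigram(tokens):
    return Counter(zip(tokens, tokens[1:])), Counter(tokens), len(set(tokens))

def addk_prob(bigrams, unigrams, vocab, prev, word, k=0.5):
    # P(word | prev) with add-K smoothing over a vocabulary of size `vocab`
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * vocab)

def cross_entropy(tokens, bigrams, unigrams, vocab, k=0.5):
    # average negative log2 probability per token under the smoothed model
    logps = [math.log2(addk_prob(bigrams, unigrams, vocab, p, w, k))
             for p, w in zip(tokens, tokens[1:])]
    return -sum(logps) / len(logps)

train_tokens = "we will make america great again we will win".split()
test_tokens = "we will make america win again".split()
bi, uni, V = train_bigram(train_tokens)
print(cross_entropy(test_tokens, bi, uni, V))
```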

From Wiki to Nisse, one character at a time

Is it possible to create humorous articles from source-based facts? With this project we aimed to achieve just that. Using an LSTM network that generates words and sentences character by character, we strove to generate humorous Nissepedia articles that would seem to be written by a human. To make this happen we used Wikipedia articles as a base, both to find new subjects to write Nissepedia articles about and to teach the network the words and structure of written Swedish.

To evaluate our generated articles, we used two different methods. The first was a simple spell-checker to see whether our articles actually consisted of real Swedish words, since we generated the articles letter by letter. The spell-checker showed that approximately 86% of the words were correctly spelled. The second evaluation method was a survey containing human-written Nissepedia articles and our generated articles, sent to 33 participants who were asked to guess which articles were written by a human and which were generated by our LSTM network. Perhaps unsurprisingly, since only 86% of the words were correctly spelled and most of our generated articles were complete nonsense, only a handful of participants thought our generated articles were written by a human. However, plenty of participants believed that some human-written articles were generated by a computer.

Classifying Swedish political tweets using a Naive Bayes and Multi-Class Perceptron model

This project attempted (and succeeded) to classify political tweets using Maximum Likelihood Estimation (MLE), a Naive Bayes Classifier (NBC) and Multi-Class Perceptrons (MCP). The Twitter feeds of political parties, their youth associations and members of parliament were scraped, parsed and tokenized into separate databases. These were used to train and test the aforementioned models. Training of the models was conducted using different combinations of the gathered data. Accuracy, precision, recall and the harmonic mean of precision and recall (F1) were computed for each model and used as the basis of comparison with the MLE baseline.

We started out just classifying the correct block (left or right wing). Surprised by the good results, we decided to move on to classifying not only the block but also the political party.

The results show that NBC and MCP are far superior to the MLE baseline in all cases. We came to the conclusion that NBC gave better results when trained on tweets from different members of parliament. MCP is yet to be thoroughly evaluated.

Generating quotes with a neural network

In this project, a recurrent neural network with LSTM cells that generates new quotes was implemented, using already known quotes as training data. The training data consisted of approximately 31000 quotes, and the generator trained for close to 24 hours before generating the new quotes. In order to evaluate the generator, a technical evaluation as well as a user study was performed; the user study in turn laid the foundation for a statistical analysis. The user study involved 30 participants who were asked to decide, for 12 quotes, which ones were computer-generated and which ones were human-made. The data was then collected and statistically analysed. In the technical evaluation two different metrics were considered: the average word distance within a sentence and the occurrence of word tags in the dataset. The evaluations showed that the model managed to generate some quotes that were hard to distinguish from human-made quotes.

Rap song generator

For this project, after attempting to implement an n-gram model found on GitHub, our group created our own bigram model and trigram model, which produce a few verses of rap songs. The models were trained on 1000 English rap song lyrics automatically extracted from a website. The bigram model was also coded to be able to rhyme.

To evaluate the models we performed a test with 27 human participants and asked them to classify given song lyrics as human-written or computer-written. The participants also had to judge how difficult it was to determine whether the song was created by a person or by a machine. The survey contained eight excerpts, six of them generated by our program and two of them written by a human. The generated lyrics were evenly distributed between the bigram model, the trigram model and the rhyming bigram model.

The results show that even though the participants were fairly certain about which lyrics were computer-generated, they made a remarkable number of errors in both directions. The project also gave us an insight into the feasibility of n-gram models and what is required to create and implement such an application.

Predicting wine color from flavor descriptions

This project looks at classifying wine descriptions into red or white wine. We compared two classification algorithms, Naive Bayes and support vector machine, to see which performs better at the task. The wine database WineMag was used to create our training and test sets: the database was parsed into pairs of wine color and wine description, 5000 white and 5000 red entries were used as training data, and all 30000 entries were used as test data. The Naive Bayes and support vector machine classifiers were implemented using the scikit-learn library. Both classifiers use the default smoothing of 1. They were evaluated on recall, precision and F1-score. The results show an F1-score of 98.66% for Naive Bayes and 98.93% for the support vector machine. We had expected around 95% for both classifiers, with better results for the support vector machine. As an experiment we applied grid search for parameter tuning of the support vector machine classifier to see if its F1-score could be improved; this unexpectedly lowered the score to 94%, a loss of 4 percentage points. Our conclusion is that both classifiers perform well at the task, with a slight preference for the support vector machine.
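
The grid-search experiment can be sketched with scikit-learn's GridSearchCV; the parameter grid and the placeholder training data below are illustrative, not the values actually used in the project:

```python
# Sketch: parameter tuning of an SVM wine-colour classifier with grid search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

descriptions = ["crisp citrus and green apple", "zesty lime and white peach",
                "dark berries with firm tannins", "plum and smoky oak"]
colors = ["white", "white", "red", "red"]       # placeholder training data

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", LinearSVC())])
grid = GridSearchCV(pipe,
                    param_grid={"svm__C": [0.1, 1, 10],
                                "tfidf__ngram_range": [(1, 1), (1, 2)]},
                    scoring="f1_macro", cv=2)
grid.fit(descriptions, colors)
print(grid.best_params_, grid.best_score_)
```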

2017

Predicting the genre of song lyrics

The purpose of this project is to create a Naive Bayes classifier that takes song lyrics as input and predicts the correct genre. The genres we worked with were hymns, hip-hop songs and children's songs. To begin with, we built a classifier that serves as a baseline. By then identifying various features of the lyrics, we created new classifiers in the hope of improving the results compared to the original baseline. We worked with four distinct features of the texts. The first feature concerns the length of the lyrics: it turned out that the hip-hop genre on average had much longer lyrics than the other two genres. For the second feature we took unique words into account to improve the predictor; the word "förorten" ("the suburb"), which is rare in hymns and children's songs, says more about a song's genre than, for example, the word "det" ("it"), which occurs frequently in all genres. The third classifier builds on the idea that the title of a hymn is usually also the beginning of the lyrics themselves, so the classifier pays extra attention to this. The last classifier deals with unknown words: when the classifier encounters an unknown word it chooses to look it up in a word list of lexemes instead of skipping it.

Generating texts at various levels of readability

Our project is about language modeling: we created a program that generates sentences based on readability measures. The sentences were then tested on people through a survey in which participants rated the readability difficulty of each sentence.

The code uses a corpus that has already been tagged with part-of-speech tags from Universal Dependencies. The code evaluates the readability measures nominal ratio and dependency length for each sentence in the corpus. For each run of the program, a span is set for both the nominal ratio and the dependency length. The language model then trains on the sentences within that span and generates new sentences based on either a bigram model, a part-of-speech model or a combined model. The output is sentences with different readability measures.

Our results showed that the generated sentences made very little sense and did not follow the grammatical rules of natural language. The bigram model, however, produced more coherent results and in general followed the readability measures better than the other models.

Text summarization

This project investigates text summarization through the Summa package and through a text summarizer developed by the team. Both were evaluated on generated summaries using an intrinsic and an extrinsic approach. Summaries were generated from original BBC articles in three different categories: sport, technology and politics. The extrinsic evaluation was based on agreement on questions about the content of the summaries and of the original texts. The intrinsic part was based on rating the summaries on non-redundancy, structure and coherence, and the amount of error in anaphoric references. Comparing the ratings between the systems showed no statistically significant difference in performance for either the extrinsic or the intrinsic analysis. However, the agreement on content between people reading the summaries and those reading the original texts was high, leading us to believe that automatic extraction-based summaries can accurately convey the gist of a text. The lowest intrinsic ratings were in the structure and coherence category, which can be explained by the fact that the extracted sentences are chained together without considering the flow of the prose.

Generating erotic short stories

We chose to work with text generation in our project. The system generates paragraphs of three sentences based on a corpus of erotic short stories. The purpose of the project was to see whether our system could generate paragraphs that our test participants would judge to be human-written. We started from a system similar to lab 2, a Markov model that we modified slightly. To avoid overly crude and inappropriate sentences, we also added a function that filtered out paragraphs we considered to contain too crude words, both for the human-written paragraphs and for the generated ones. The results showed that the participants judged some of the generated paragraphs to be human-written, but they also judged human-written paragraphs to be generated. For the generated paragraphs, the participants answered that they were human-written in 22% of the cases. For erotic short stories in particular our text generator worked quite well, but largely because the human-written texts are of poor quality.

I have a country!

We chose to evaluate n-grams by creating a bot that tries to imitate Trump's tweets. The purpose of our project was to see whether tweets generated with n-grams can measure up to tweets written by real people, that is, whether our program can fool someone into believing that the tweet it spits out was written by Trump. We then evaluated the system by compiling a questionnaire in which the participants chose which tweet they thought was written by the program. Our study shows that n-grams are far from perfect, but still manage to fool quite a few people.

Can you tell which is the real Trump and which is from a bot?

Classifying movies by genre

In this project we investigated the language technology application of text classification by implementing a Naive Bayes classifier that predicts which movie genre a movie summary belongs to. The summaries could be classified as either musical, western or sci-fi. We evaluated the system using the evaluation metrics accuracy, recall and precision. The system was trained on texts retrieved from IMDB annotated with the correct genre and tested on other data from IMDB. The overall accuracy of the system was 80%. The system classified all movies in the sci-fi genre correctly, but had more difficulty recognizing western and musical. We also chose to compare this with human performance by running a test on people to see how well they predicted the correct genre. This showed results similar to our system's: they predicted sci-fi with 100% correctness but with lower accuracy on western and musical. One reason may be that movies in the sci-fi genre have more genre-specific words in their summaries compared to musicals. Westerns also have genre-specific words, but since some movies can have multiple genres this can cause confusion and be a reason for misclassification. The work was limited by the collected data, since the data retrieved from IMDB was relatively small because IMDB does not have an open API.

Generating haikus

In this project we created a system that generates haiku poems. We chose haiku poems as they have clear rules, which shaped how we developed the system. The main purpose was to create and evaluate the haiku generator to get a greater understanding of the difficulty and feasibility of our system. Two tests were done to measure the quality of the generator: a word frequency test and a user test. The user test included 35 participants, 57% women and 43% men. Participants were given a survey containing six poems, three of them generated by our system and three written by a poet, and had to determine whether each poem was human-written or generated. 54% thought the generated poems were made by a generator, and 46% thought they were made by a real poet. The poems written by real poets got the opposite results, with 46% thinking that they were generated. The main results showed that there was no major difference in word frequency between the two sets and that it was difficult for the participants to tell the difference between the written and the generated poems.

2016

Bible forgery

The project evaluates an extension of the word prediction method that we became familiar with during the course. Our hypothesis was that we could generate Bible-like sentences by first performing a syntactic analysis of the text of the Bible and then using it together with n-gram prediction. The analysis consisted of creating dependency trees for all sentences in the Bible and then, based on probabilities, generating new dependency trees for sentences. Based on the generated dependency tree, our system asks an n-gram model for words. We implemented a bigram and a trigram model to generate texts and carried out a comparative evaluation of our implementation against the implementation used in the labs. We had ten people read ten pairs of sentences, where one sentence was generated by our implementation and the other by the lab system, and asked them to judge which of the sentences was most Bible-like. In 41 cases out of 100 the participants preferred our solution over the lab system's.

The art of making rhymes

This work presents a system that generates short two-line rhymes. The aim is to find out whether our system can generate rhymes that can be perceived as if they were written by humans. The system was trained on rhymes written by humans and was then tested on about ten people, who had to decide whether they thought each rhyme was written by a human or by a computer. They were shown ten different rhymes, of which seven were produced by our rhyme generator and three were taken from the Internet.

Music genre classification with song lyrics

How easy is it to classify which genre a song belongs to based solely on its lyrics? In this work this was tested by implementing our own text classification system, whose goal was, given a set of gold-standard data, to classify new lyrics into four distinct genres: country, hip-hop, metal and reggae. Based on the frequency of genre-typical words, the probability that a lyric belonged to a given genre was computed. The system was trained on lyrics typical of each genre and was then tested on other lyrics from the same four genres. The system's precision, recall and accuracy were then evaluated. The system's overall accuracy was measured at 70%, while precision and recall varied between the genres. Among other things, the runs showed that the system classified all metal songs correctly but had difficulty distinguishing reggae from the other genres. This may be because the language and word choice in metal songs are often specific to that genre and therefore rarely occur in the other three genres. The words that occurred in the reggae songs analysed, on the other hand, are not genre-typical and also occur frequently in other kinds of lyrics. The validity of the results is, however, limited by the amount of data, which was collected manually. To improve the system a larger data set would be needed, preferably from a database of lyrics tagged by genre.

Trumping Twitter, a text classifier for Tweets about Trump

The natural language processing application we studied in this project was text classification. We looked at tweets from Twitter about Donald Trump to see whether they were positive or negative. We collected the tweets from Twitter with the help of a data-gathering tool. We then built a simple web page for annotation, where we annotated over 1,500 tweets as positive, negative or irrelevant. After that we built a Naive Bayes classifier and evaluated its results by comparing it to a Maximum Entropy classifier and a Decision Tree classifier. Since some of the classifiers required tokenized sentences, we also built a tweet tokenizer that used regular expressions to filter out all links and tokenize all sentences.
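
A regular-expression tweet tokenizer along those lines might look roughly like this; the exact patterns are assumptions and may differ from the project's own rules:

```python
# Sketch of a regex-based tweet tokenizer that strips links before tokenizing.
import re

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+")
TOKEN_PATTERN = re.compile(r"@\w+|#\w+|\w+(?:'\w+)?|[^\w\s]")

def tokenize_tweet(tweet):
    tweet = URL_PATTERN.sub("", tweet)      # filter out all links
    return TOKEN_PATTERN.findall(tweet.lower())

print(tokenize_tweet("Trump wins again! https://t.co/abc #MAGA @user"))
# ['trump', 'wins', 'again', '!', '#maga', '@user']
```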

Hillary vs Sanders

In this project we wanted to predict the attitudes of Twitter users towards the two American presidential candidates Hillary Clinton and Bernie Sanders. This was done with a text classifier trained on data retrieved from Sentiment140, with test data collected from Twitter using a web crawler. The web crawler was based on the Twitter module Tweepy. Part of the retrieved data was then manually tagged, with each tweet labelled as one of two classes depending on whether it was interpreted as positive or negative. In other words, we wanted to predict which of Hillary and Sanders was best positioned ahead of the Democratic primary, based on positive and negative attitudes towards them on Twitter. In this way one can partly predict a candidate's popularity among voters and thereby get an indication of who might win the primary. We evaluated the accuracy, precision and recall of our classifier. Finally, we discuss whether Naive Bayes is a suitable method and the problems we encountered.

Text generation of song lyrics

This work is about text generation of hip-hop lyrics. The aim is to build a text generator that predicts lyrics and then to evaluate the system by measuring its entropy. We wanted to build a text generator with the techniques we have learned in the course. The work is also based on a study in which we found the links to the databases from which we retrieved our lyrics. The reason we chose hip-hop lyrics is that the melody is less important: the lyrics are "spoken" rather than sung to a given melody, which makes it easier to generate the text without having to consider the tempo, i.e. counting syllables. We also normalise the data by removing headings and punctuation and converting upper-case letters to lower case. The sentences are extracted line by line and added to a lexicon. We use the n-gram model generator from lab 2 to tokenize our data, and we evaluate by comparing the number of times the generator has to be called before it produces a new sentence.

Preprocessing in sentiment analysis

The purpose of this project was to test what role preprocessing plays in sentiment analysis. Sentiment analysis means that large amounts of data available on, for example, Twitter are collected and analysed to find out the public's opinions about, for instance, a product, a political party or a public figure. Data collection was done through a Twitter API that gathered tweets in real time, in English, containing the word "Trump". Preprocessing took place in connection with the data collection, partly with the help of regular expressions. Among other things, retweets, hashtags, URLs, images, usernames, special characters and stop words were removed. The text was also lower-cased, tokenized and lemmatized, and all tweets were assigned ID numbers. The system SentiStrength was used to tag the tweets by sentiment. The system's rating scale was converted to a ternary scale consisting of the values positive, neutral and negative. The results were evaluated with the metrics accuracy, precision and recall, using Excel, and compared with a gold standard tagged by humans. The evaluation was done on a randomly drawn sample of 200 tweets.
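
A condensed sketch of that kind of preprocessing pipeline, assuming NLTK for stop words and lemmatization; the patterns and tool choices are illustrative, not the project's actual code:

```python
# Sketch of tweet preprocessing: drop retweets, strip URLs/mentions/hashtags,
# lowercase, remove special characters and stop words, then lemmatize.
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords")
nltk.download("wordnet")
STOP = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(tweet):
    if tweet.startswith("RT "):                     # discard retweets
        return None
    tweet = re.sub(r"https?://\S+|@\w+|#\w+", "", tweet)
    tweet = re.sub(r"[^a-z\s]", "", tweet.lower())  # drop special characters
    tokens = [t for t in tweet.split() if t not in STOP]
    return [lemmatizer.lemmatize(t) for t in tokens]
```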

Proverb translation and relevance in Google Translate between Swedish and English

We investigated how well Google Translate translates proverbs between English and Swedish, measuring the semantic similarity between the translations. The evaluation was carried out by comparing the accuracy of translations from Swedish to English and from English to Swedish. By compiling lists of proverbs in English and Swedish and listing the existing matches and linguistic equivalents, we produced a gold standard. The translation results were compared against the gold standard and against each other to see the difference between translating from Swedish to English and from English to Swedish. From this we found that Google Translate is better at matching and translating from English to Swedish, and that it does not use entirely equivalent models and methods when the direction of translation is reversed. One example is the English "All is fair in love and war", which is translated into Swedish as "Allt är tillåtet i kärlek och krig", whereas translating the Swedish "Allt är tillåtet i kärlek och krig" yields the English "Everything is permitted in love and war". Translation from Swedish gave a match rate of 22% and from English 28%, which shows that Google Translate usually produces direct translations rather than matching proverbs.
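
The match rates above correspond to a straightforward comparison against the gold standard, roughly as in this sketch; the data format is an assumed simplification:

```python
# Sketch: fraction of machine translations that exactly match the gold-standard proverb.
def match_rate(translations, gold):
    hits = sum(1 for t, g in zip(translations, gold)
               if t.strip().lower() == g.strip().lower())
    return hits / len(gold)

gold_sv = ["Allt är tillåtet i kärlek och krig"]
google_sv = ["Allt är tillåtet i kärlek och krig"]
print(match_rate(google_sv, gold_sv))   # 1.0 for this single example
```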

Automatic character analysis in books

The goal of the project was to write a program that analyses part-of-speech-tagged and dependency-parsed texts and produces character summaries for important characters in the text. These character summaries are based on the adjectives and nouns used to describe the character in the text. The program also summarises the titles given to the character in the text, as well as the character's gender based on name statistics from Statistics Sweden (Statistiska centralbyrån). A separate program generates short sentences from the first program's output. To tag the text we used the tools offered by the annotation lab of the research unit Språkbanken. The tagged text is then analysed by a Python script we wrote ourselves, in which all named characters in the book are identified together with the words used to describe them. The results are then printed in a more readable format by the script. The project's ultimate aim is to investigate whether it is possible to produce character descriptions similar to those available on many websites. These kinds of character descriptions are also used to evaluate the results.
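
A much simplified sketch of the extraction step, assuming the tagged text is available in CoNLL-U format; the column layout and the head-is-a-proper-noun heuristic are assumptions, not the project's actual script:

```python
# Sketch: collect adjectives whose syntactic head is a proper noun (a named character),
# reading a CoNLL-U file with columns ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL.
from collections import defaultdict

def character_adjectives(conllu_path):
    descriptions = defaultdict(list)
    sentence = []
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                sentence.append(line.split("\t"))
            elif sentence:
                tokens = {cols[0]: cols for cols in sentence}
                for cols in sentence:
                    if cols[3] == "ADJ" and cols[6] in tokens:
                        head = tokens[cols[6]]
                        if head[3] == "PROPN":
                            descriptions[head[1]].append(cols[1])
                sentence = []
    return descriptions
```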

Siri

In this project we investigated Apple's digital personal assistant Siri. We wanted to examine how effectively Siri could solve given tasks, and to do so we conducted a test. The test had five participants, and each participant solved five tasks, partly with the help of Siri and partly using the built-in interface of the iPhone. The aim was to see whether Siri could solve tasks as well as a person doing them manually. The results showed that it generally took slightly longer to solve a task with Siri than with the more conventional approach. Two of the participants preferred solving the tasks without Siri, two thought it was better with Siri, and one thought both options were equally good. There was also a clear connection between how quickly a task was solved with Siri and whether the participant was used to using Siri. The participants who were not used to it found it difficult to know how to phrase themselves in order to be understood.

2015

How Google Translate handles idioms

This work investigates how the machine translation tool Google Translate translates idioms between Swedish and English, with each language as source and target language respectively, in order to examine whether it manages this without the idioms losing their semantic meaning. The gold standard was obtained from a website that lists Swedish idioms together with their corresponding English idioms. The purpose of the work is to investigate how a well-known machine translation tool such as Google Translate handles idioms that cannot be translated literally and that are not written in a context. The research question is: "How does Google Translate handle idioms, and what is its accuracy?". To answer the question, both an experiment and a literature study on how Google Translate translates idioms were carried out. The results of the experiment are found in the results section and its method in the method section. The results of the literature study are found in the theory section, and both are discussed in the discussion section.

Automatic generation of haiku poems

A haiku generator was implemented and evaluated with 30 different test participants. Haiku poems were chosen because they have clear rules and therefore fit well with the course and the short time frame allotted. The purpose of this project is to design, implement and evaluate a haiku generator. The generator was implemented by training it on haiku poems written by humans. The evaluation shows that some of the poems created by the generator can be perceived as if they were written by humans.

Qualitative and quantitative evaluation of chatterbots

A chatterbot is a computer program designed to hold conversations with humans. It analyses human cue words and phrases and then returns a suitable response, either through text interaction or audibly. Chatterbots are usually designed to make small talk with humans, with the goal of passing the Turing test by fooling the user into believing that the program is in fact a human. However, they have also been used in a range of practical systems and are increasingly deployed in areas such as customer service or other settings where information gathering is required. The growing spread of chatterbots means that the programs' ability to process and handle natural language is becoming increasingly important. This report aims to evaluate two different chatterbot systems, A.L.I.C.E and Cleverbot, both qualitatively and quantitatively, in order to explore what problems and misunderstandings arise, and to compare the systems against each other. The results show that the weaknesses of the two systems arise in very different conversational situations, and the evaluation therefore suggests that combining the qualities of the two systems would be beneficial.

Extrinsic Evaluation of Text Summarization

The goal of this study was to investigate whether a machine summarization system (SUMMA) could select the right information from a given text and provide it in summarized form, with enough relevant information for a person to correctly answer questions about the text. An extrinsic evaluation was adopted to assess how well the system performed, through an experimental study. Two groups of participants were given original texts with questions to answer, while another group was given summarized versions. The results showed that in 75% of cases the summarized texts provided enough information for the participants to answer as well as the participants who got the full-length texts. Based on these results, the SUMMA system achieved seemingly good results, but further studies could compare it to other text summarization systems to determine the quality of its output.

Using Twitter to predict who advances in Melodifestivalen

This report describes our attempt to predict which entries would advance to the final from the "second chance" round of Melodifestivalen 2015. To solve this task we created a program that analyses a collection of tweets to determine which entries the largest number of people are most likely to vote for. We evaluated the system's ability to analyse tweets, and we account for the problems we encountered during the project and the main difficulties involved in analysing social media.

Evaluation of information transfer in machine translation

Having more than one official language in a country is the norm rather than the exception worldwide. There is therefore a very large need for translation between many different languages, preferably fast, while it is important that as little essential information as possible is lost in the process. In many places in the world the need is above all for translation between closely related languages: different variants and dialects that are very similar to each other but where a substantial part of the population cannot read both, for example Spanish and Catalan. However, there are also plenty of places with languages that have very little in common, for example many African countries where either English or French is an official language alongside one or more African languages. There are many approaches to machine translation, but for natural reasons most resources are invested in translation between the world's larger languages. The purpose of this study is to investigate how well different systems manage to preserve the ability to convey information when translating between two smaller languages, in this case Swedish and Danish. Two systems with different approaches will therefore be used.

Translation tools

This study investigated which of six translation tools was best at translating three aspects that machine translators generally struggle with: English uncountable nouns, infinitive forms and prepositions. The translation tools were evaluated by translating a set of test sentences and comparing them against a gold standard. The results revealed weaknesses in the various systems, particularly in their ability to translate prepositions whose syntactic role is not preposition-like.

A question answering system based on Swedish Wikipedia

We have built a question answering system in which a user can type a factual question and, hopefully, the system returns an answer taken from Swedish Wikipedia. This is an interesting application from a language technology point of view, as it combines a number of different aspects of language technology, such as part-of-speech tagging, parsing, tokenization, normalization, information extraction, document retrieval and answer type detection.

Sentiment analysis on Twitter

The enormous amount of data in social networks invites analysis and trend detection. It could be interesting to study what people's online interactions look like, what the political climate is, which news people care about, and more. The data on Twitter is especially interesting because it is limited to 140 characters per tweet. This means that the words in a tweet are, on average, more important to the meaning of the text than in other texts, which in turn makes the text easier to analyse with a computer system. According to our study, it is possible to predict sentiment in Twitter texts with high confidence, even with a relatively simple system.

2014

Evaluation of Stanford's sentiment analysis

A comparison between a trivial system and a sophisticated system is the basis of this project. Stanford's Sentiment Analysis system is compared with a self-implemented Naive Bayes system. The purpose of the project is to build an understanding of sentiment analysis and of how classifiers are trained and perform in this area. Furthermore, the systems are explored to see in which respects Stanford performs better than Naive Bayes and why. The project aims to broaden the understanding of whether it is worth spending time and resources on a system that does not necessarily perform better, relatively speaking, than a primitive one.

Evaluation of iTranslate, a machine translation application

There is today a multitude of translation applications on the market; one of these is iTranslate. The goal of this application is to quickly provide a translation of common words and phrases from one language to another, which is meant to facilitate conversation between people who speak languages foreign to each other. More and more systems on the market make use of MT systems. It is therefore important to evaluate them in order to know whether they can be trusted or not. The purpose of this project was to evaluate the machine translation (MT) functionality of the iTranslate application. This was done both through an intrinsic evaluation, using a measure inspired by the BLEU metric, and through an extrinsic evaluation based on what the application might be used for.

How good is Semantria at determining attitudes in hotel reviews?

How good is a system at classifying attitudes in a text? The group chose to evaluate how well the system Semantria classifies attitudes in hotel reviews from Tripadvisor as positive, negative or neutral. Such a system is useful for people who are about to travel, who can quickly find out how good a hotel is according to previous guests, and for the hotel as a business, since it indicates what needs to be improved. It is also useful as a recommendation system for deciding which hotels should be suggested. The evaluation method was to give the system data to analyse. The group members created a gold standard in which the attitude of each review was first annotated individually; an average of the individual analyses was then computed and compared with the system's attitude analysis.

Twittitudes

This project consisted of an evaluation of Sentiment140.com and its underlying system. Sentiment140 analyses the polarity of Twitter feeds using a Maximum Entropy classifier. The report presents the precision and recall that Sentiment140 achieves on two different types of entities, "iPad" and "Ukraine". It also discusses the difficulties of sentiment analysis and why the polarity is a general average over the content of the entire tweet rather than being computed with respect to a specific entity in the tweet.

Sentiment classifiers for different domains

The project was based on code developed by the authors of this report, in accordance with the specifications for task four of lab three in Linköping University's course in language technology (729G17/TDDD01). The task was to create a sentiment classifier based on Naive Bayes with Laplace smoothing. Data in the form of already classified film reviews was available for training, development and testing of the system. The main focus of the project was on evaluating the system's performance in classifying the writer's sentiment in two different domains: (1) the domain the system was trained on, that is, film reviews; and (2) a domain unfamiliar to the system, in this case Twitter feeds about football. The evaluation was intrinsic and was based on a gold standard for each domain. To add yet another aspect of the adaptation work between the domains, domain-adapted helper functions were developed, which among other things resulted in improved accuracy.
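
The core of a Naive Bayes classifier with Laplace smoothing can be sketched as follows; this is a bare-bones illustration, not the code developed for the lab:

```python
# Sketch of Naive Bayes with Laplace (add-one) smoothing for sentiment classification.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def fit(self, docs, labels):
        self.priors = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc)
        self.vocab = {w for counts in self.word_counts.values() for w in counts}

    def predict(self, doc):
        best, best_score = None, float("-inf")
        total_docs = sum(self.priors.values())
        for label in self.priors:
            score = math.log(self.priors[label] / total_docs)
            total = sum(self.word_counts[label].values())
            for word in doc:
                # Laplace smoothing: add 1 to every count and |V| to the denominator
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + len(self.vocab)))
            if score > best_score:
                best, best_score = label, score
        return best
```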

Evaluation of Google Translate

This report describes an evaluation of the machine translation system Google Translate. Code written by the participants themselves was used to evaluate Google Translate with the BLEU metric. Translations from four different professional translators were used as the gold standard. The reason Google Translate was chosen as the system to evaluate was that the group thought it would be fun to evaluate a language technology system that the group members know well and use almost daily, which Google Translate is. In 2012 Google Translate had more than 200 million users, so it is also well known among the world's Internet users (Mitchell, 2012). The aim was thus to evaluate Google Translate's precision using the BLEU evaluation.
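
Sentence-level BLEU against multiple reference translations can be computed with NLTK, for example; a minimal sketch with placeholder sentences, not data from the study:

```python
# Sketch: sentence-level BLEU with four reference translations, as in the evaluation setup.
from nltk.translate.bleu_score import sentence_bleu

references = [
    "the cat sat on the mat".split(),
    "the cat was sitting on the mat".split(),
    "a cat sat on the mat".split(),
    "the cat is sitting on the mat".split(),
]
candidate = "the cat sat on a mat".split()
print(sentence_bleu(references, candidate))  # higher is better, 1.0 is a perfect match
```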

Google Translate and word sense disambiguation for homonyms

This study was carried out to evaluate how well Google Translate translates homonyms (words with several meanings) from Swedish to English. The study examined whether a machine translation tool as large as Google Translate has some way of handling the difficulty of translating homonyms. For the evaluation, a gold standard of 87 Swedish sentences was used. The sentences were then translated by Google Translate. The translations were evaluated intrinsically by the group members, based on whether Google Translate interpreted the homonyms correctly, with the focus on the homonymous word. The results were evaluated and Google Translate's accuracy was computed. The accuracy was 57%, meaning that 50 of the 87 sentences were classified correctly.

Text summarization for Wikipedia

The goal of the project was to find, implement and evaluate an algorithm for summarizing encyclopedic text. For reasons of time, the implementation was limited to articles from the English-language Wikipedia dealing with people or countries. The summary should give as good a picture of the article as possible, and the algorithm was not allowed to introduce any errors that did not already exist in the source text. Finally, the results were to be evaluated, both extrinsically and intrinsically, and possible shortcomings pointed out.

Text generation of song lyrics

We chose to build a system that generates song lyrics, to help songwriters and to amuse the general public. The system is first and foremost intended to be used for fun. Imagine, for example, that you are at a party. Suddenly someone decides it is time to sing karaoke, and the host of the party knows all the lyrics by heart. Unfair, right? This is where our system comes in: suddenly all songs are unknown to everyone and the competition takes place on equal terms. The lyric generator can also be used by songwriters for inspiration and ideas for new songs. Since the system deals with lyrics, it needs to handle the following phenomena: generating lines of a specified syllable length; generating rhyming lines; generating text with an overall theme; and generating songs with a coherent thread. To meet these requirements we used trigram models and tagging, much like the setup in lab 1, which is why it was largely used as the basis for our program.
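
Two of the phenomena listed, syllable length and rhyme, can be approximated with simple heuristics; a rough sketch in which the vowel-group counting and the last-letters rhyme rule are crude assumptions:

```python
# Crude heuristics: count syllables by vowel groups and treat two words as rhyming
# if their last three letters coincide.
import re

def count_syllables(line):
    return len(re.findall(r"[aeiouy]+", line.lower()))

def rhymes(word_a, word_b):
    return word_a[-3:].lower() == word_b[-3:].lower()

print(count_syllables("Started from the bottom now we here"))  # rough estimate
print(rhymes("here", "sphere"))                                 # True
```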