
Gender recognition has also already been applied to tweets. One study (2010) examined various traits of authors from India tweeting in English, combining character N-grams with sociolinguistic features such as manner of laughing, honorifics, and smiley use.

With lexical N-grams, they reached an accuracy of 67.7%, which the combination with the sociolinguistic features increased to 72.33%. Another study (2011) attempted to recognize gender in tweets in a whole set of languages, using word and character N-grams as features for machine learning with Support Vector Machines (SVM), Naive Bayes, and Balanced Winnow2.
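Purely as an illustration (not the cited authors' code), a setup along these lines can be sketched with scikit-learn: word and character N-gram counts, plus two crude surface proxies for the sociolinguistic cues (laughter and smileys), fed to one of the classifier families named above. All names and thresholds below are hypothetical choices.

import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC  # MultinomialNB could be swapped in for Naive Bayes

def surface_cues(texts):
    # Crude stand-ins for "manner of laughing" and smiley use.
    return np.array([[len(re.findall(r"\bh[ae](?:h[ae])+\b", t.lower())),
                      len(re.findall(r"[:;][-]?[)(DPp]", t))]
                     for t in texts])

features = FeatureUnion([
    ("word_ngrams", CountVectorizer(analyzer="word", ngram_range=(1, 2))),
    ("char_ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("surface", FunctionTransformer(surface_cues)),
])
gender_clf = Pipeline([("features", features), ("clf", LinearSVC())])
# gender_clf.fit(train_tweets, train_genders); gender_clf.predict(test_tweets)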

In this paper we restrict ourselves to gender recognition, and it is also this aspect we will discuss further in this section.

For each blogger, metadata is present, including the blogger's self-provided gender, age, industry and astrological sign. The creators themselves used it for various classification tasks, including gender recognition (Koppel et al.). The men, on the other hand, seem to be more interested in computers, leading to important content words like software and game, and correspondingly more determiners and prepositions.

One gets the impression that gender recognition is more sociological than linguistic, showing what women and men were blogging about at the time. A later study (Goswami et al.) ...

Their highest score when using just text features was 75.5%, testing on all the tweets by each author (with a training set of 3.3 million tweets and a test set of about 418,000 tweets). Another study (2012) used SVMlight to classify gender on Nigerian Twitter accounts with tweets in English, taking only accounts with at least 50 tweets.

Their features were hashtags, token unigrams and psychometric measurements provided by the Linguistic Inquiry and Word Count software (LIWC; Pennebaker et al.). Although LIWC appears a very interesting addition, it hardly adds anything to the classification: with only token unigrams, the recognition accuracy was 80.5%, while using all features together increased this only slightly to 80.6%.
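The kind of comparison reported here (unigrams alone versus unigrams plus psychometric category counts) can be sketched as follows. This is only illustrative: the toy category lexicon stands in for the proprietary LIWC dictionaries, LinearSVC stands in for SVMlight, and the hashtag features are omitted for brevity.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC

TOY_CATEGORIES = {"posemo": {"great", "love", "happy"},   # hypothetical categories
                  "negemo": {"sad", "hate", "awful"}}

def category_counts(texts):
    return np.array([[sum(tok in lex for tok in t.lower().split())
                      for lex in TOY_CATEGORIES.values()]
                     for t in texts])

unigrams_only = make_pipeline(CountVectorizer(), LinearSVC())
unigrams_plus = make_pipeline(
    FeatureUnion([("unigrams", CountVectorizer()),
                  ("categories", FunctionTransformer(category_counts))]),
    LinearSVC())
# acc_uni  = cross_val_score(unigrams_only, tweets, genders, cv=5).mean()
# acc_plus = cross_val_score(unigrams_plus, tweets, genders, cv=5).mean()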

The age component of the system is described in (Nguyen et al.). The authors apply logistic and linear regression to counts of token unigrams occurring at least 10 times in their corpus.
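A rough sketch of that setup, again only illustrative: unigram counts thresholded at a minimum frequency, with linear regression for exact age and logistic regression for, e.g., age categories (how the two regressions were divided over targets is an assumption here). Note that scikit-learn's min_df counts documents rather than total occurrences, so it is only a close proxy for the threshold described above.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.pipeline import make_pipeline

def unigram_counts():
    # min_df=10: keep unigrams appearing in at least 10 texts (proxy for
    # "occurring at least 10 times in the corpus").
    return CountVectorizer(analyzer="word", min_df=10)

age_regression = make_pipeline(unigram_counts(), LinearRegression())               # exact age
age_classifier = make_pipeline(unigram_counts(), LogisticRegression(max_iter=1000))  # age categories
# age_regression.fit(author_texts, author_ages)
# age_classifier.fit(author_texts, author_age_brackets)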

The paper does not describe the gender component, but the first author has informed us that the accuracy of the gender recognition on the basis of 200 tweets is about 87% (Nguyen, personal communication). A later study (2014) carried out a crowdsourcing experiment, in which human participants were asked to guess the gender and age of an author on the basis of 20 to 40 tweets. Yet another study (2014) examined about 9 million tweets by 14,000 Twitter users tweeting in American English. In this paper, we will still take the biological gender as the gold standard, as our eventual goal is creating metadata for the TwiNL collection.

Introduction
In the Netherlands, we have a rather unique resource in the form of the TwiNL data set: a daily updated collection that probably contains at least 30% of the Dutch public tweet production since 2011 (Tjong Kim Sang and van den Bosch 2013). However, as with any collection that is harvested automatically, its usability is reduced by a lack of reliable metadata. We consider only users for whom we already know that they are an individual person rather than, say, a husband-and-wife couple or a board of editors for an official Twitter feed. Then we describe our experimental data and the evaluation method (Section 3), after which we proceed to describe the various author profiling strategies that we investigated (Section 4). Then follow the results (Section 5), and Section 6 concludes the paper. Two other machine learning systems, Linguistic Profiling and TiMBL, come close to this result, at least when the input is first preprocessed with PCA.

Gender Recognition
Gender recognition is a subtask in the general field of authorship recognition and profiling, which has reached maturity in the last decades (for an overview, see e.g. ...). Even so, there are circumstances where outright recognition is not an option, but where one must be content with profiling, i.e. the identification of author traits like gender, age and geographical background.

Experimental Data and Evaluation
In this section, we first describe the corpus that we used in our experiments (Section 3.1).
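As for the PCA preprocessing mentioned above (before Linguistic Profiling and TiMBL), the general idea can be sketched as follows, with illustrative stand-ins rather than those systems: compress the unigram count matrix to a few hundred components before a memory-based (k-NN style) classifier. The component count and classifier choice below are assumptions for illustration only.

from sklearn.decomposition import TruncatedSVD          # PCA-style reduction that accepts sparse counts
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier      # memory-based learner, in the spirit of TiMBL
from sklearn.pipeline import make_pipeline

pca_knn = make_pipeline(
    CountVectorizer(analyzer="word"),
    TruncatedSVD(n_components=200),                      # illustrative component count
    KNeighborsClassifier(n_neighbors=5),
)
# pca_knn.fit(train_texts, train_genders); pca_knn.score(test_texts, test_genders)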