
Managing reader comments: ‘Algorithms decide more objectively’

Comments on articles are a valuable feedback tool and reader retention instrument for newspapers. However, they also bring challenges such as inflammatory comments and hate speech. “Managing reader comments is a difficult task for publishers,” says Dr. Nicolai Erbs, a computer science researcher at the Technical University (TU) of Darmstadt in Germany.

by WAN-IFRA Staff executivenews@wan-ifra.org | August 21, 2017

Hate speech, spam, opinions that overstep a certain boundary – Erbs gained further insights into these phenomena from a research project at Darmstadt’s TU. To support moderators, Erbs and a team of researchers at the TU developed intelligent algorithms to automate the classification of comments.

In this interview, Erbs, who will be speaking at IFRA / DCX during a session on 12 October in Berlin, discusses the learnings from this research work.

You have developed a technology for the classification of reader comments. What exactly is the background to this?

Nicolai Erbs: Managing reader comments is a difficult task for publishers. This subject is increasingly becoming the focus of attention as a consequence of the debate around fake news and hate comments.

For this reason, as a first step, we wanted to investigate what types of comments exist. It was purely in the interest of research. In the course of our investigation, we developed algorithms that are of interest for managing reader comments.

We now wish to find out whether moderation can be automated. A leading German-language newspaper and an Indian publishing house are our cooperation partners, and we are working together to evaluate the research results.

Which types of reader comments were you able to identify?

Erbs: To begin with, many comments are duplicates. Another phenomenon is comments that, although certainly valid, are highly subjective.

They attempt to “push” an opinion but have nothing to do with the subject of the article. If, for example, the topic is the cost of higher education, the comment reads: “[German Chancellor Angela] Merkel is in favour of immigration.” That is an indication that the comment could have been written by a bot.

Where a comment has no relevance to the subject at hand, a further distinction is possible: There is classical spam such as we know from e-mails, e.g. with links to commercial offers. Then we have hate comments. Heated discussions frequently provoke contradicting comments, i.e. comments on comments, that no longer refer to the article itself. These sometimes include personal insults directed towards the editor, and even more often towards other commenters.

We had our own research project aimed at recognising where the threshold to normal discussion is overstepped. That is very difficult, as the right of freedom of expression covers a great many things. But news media do not want to have everything in their forums.
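
Erbs does not describe the detection techniques in the interview. As a rough illustration only, the duplicates he mentions first could be flagged with a simple word-overlap measure such as Jaccard similarity; the 0.8 threshold below is an assumption, not the project’s setting:

    import re

    def words(text):
        """Lower-cased word set of a comment, punctuation stripped."""
        return set(re.findall(r"\w+", text.lower()))

    def jaccard(a, b):
        """Word-level Jaccard similarity between two comments."""
        wa, wb = words(a), words(b)
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def near_duplicates(comments, threshold=0.8):
        """Index pairs of comments whose similarity exceeds the threshold."""
        return [(i, j)
                for i in range(len(comments))
                for j in range(i + 1, len(comments))
                if jaccard(comments[i], comments[j]) >= threshold]

    comments = [
        "Tuition fees are far too high already.",
        "Tuition fees are far too high already!!",
        "Merkel is in favour of immigration.",
    ]
    print(near_duplicates(comments))  # [(0, 1)]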

Are hate comments really such a widespread phenomenon, or just highly problematic?

Erbs: It depends a lot on the medium and its forum. For example, our German-language partner has to deal with an extremely small number of hate comments. This is because the newspaper moderates in advance: when a comment is written, it is not published immediately, but first moderated and checked. Consequently, comments must clear relatively high hurdles before publication. Naturally, this deters people who want to disseminate irrelevant comments with no reference to the article and no foundation in facts. Such comments can be seen much more frequently on social media, e.g. Facebook.

So publishers mainly have to deal with irrelevant and commercial comments?

Erbs: Yes, exactly. Frequently the comments are of poor quality. Especially on Facebook you see many grammatical errors and short, poorly expressed comments. This is not such a problem for Facebook, as the general public ensures that the better comments are ranked higher and displayed more frequently because they receive more “likes.”

In newspaper and publisher forums, by contrast, all comments have equal standing. A high standard is desirable there, also for liability reasons, as it is the publisher who bears the ultimate responsibility.

It is up to each publisher to set and apply its own standards and to decide whether preference should be given to more comments and more content at the expense of quality, or to a smaller number of high-quality comments.

How does an algorithm that filters accordingly work?

Erbs: An algorithm’s intelligence relates directly to the characteristics of the language used. A well-founded understanding of the language is essential for it to work. But once a system works, it can also be applied to other media.

We selected a classical pattern-recognition algorithm instead of deep learning, as we always want to be able to give a reason why something was classified in a certain way.

When a comment is recommended for filtering, we give the moderators a reason why – e.g. a lack of relevance to the subject of the article.

For this purpose, we make a word comparison between article and comment and check semantic relations.

A word does not have to be mentioned explicitly for a connection to be detected. For example, if an article covers culture and an operatic performance, and the word “singer” appears in the comment but not in the article, we can still make the connection.

We also observe grammatical constructions, e.g. how complicated the sentences are and how sophisticated the line of argument is. Facts that underpin a line of argument are more likely to appear in a subordinate clause. Of course, these are just indications. The classification system draws on many characteristics to recognise whether or not a comment should be filtered.
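
Erbs does not name the project’s actual feature set. A minimal sketch of the signals he describes (word overlap between article and comment, semantic links, and crude grammatical cues) might look as follows; the RELATED table is a toy stand-in for a real lexical resource such as WordNet or word embeddings, and all names are illustrative:

    import re

    # Toy stand-in for a real lexical resource (e.g. WordNet or word
    # embeddings): maps an article word to words treated as related.
    RELATED = {
        "opera": {"singer", "aria", "soprano", "stage"},
    }

    def tokens(text):
        return re.findall(r"\w+", text.lower())

    def features(article, comment):
        art, com = set(tokens(article)), set(tokens(comment))
        # Indirect matches: comment words related to an article word.
        semantic = {w for a in art for w in RELATED.get(a, ()) if w in com}
        sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
        return {
            "word_overlap": len(art & com) / max(len(com), 1),
            "semantic_links": len(semantic),
            "avg_sentence_length": len(tokens(comment)) / max(len(sentences), 1),
            "exclamation_marks": comment.count("!"),
        }

    article = "A new opera production opens at the state theatre this week."
    comment = "The lead singer was wonderful, especially in the final act."
    print(features(article, comment))
    # Word overlap is low, but semantic_links connects "singer" to "opera".

Features like these would then feed an interpretable classifier, so the triggering feature can be shown to the moderator as the reason for a recommendation.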

How can the system recognise spam and hate comments?

Erbs: Classical spam is relatively simple to recognise as, for example, it contains a link or specific signal words, such as viagra, hot, or something along those lines.

The algorithm also recognises grammatical errors. Naturally not in every fine detail, but it spots misspelled words and poor sentence constructions, e.g. an excessive number of exclamation marks.

Hate comments also often contain certain signal words, such as IS or ISIS. We are able to filter semantic variations, such as Islam and Islamist, in correct and incorrect grammatical forms.

We compile word lists based on comments made to date. Even if the author of a comment were to know these lists it would still be difficult for him to get around them. Just because an author avoids the use of certain words does not mean that he can slip his comment through undetected.
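
A minimal sketch of such rule-based signals follows; the word lists are invented stand-ins, whereas the project compiles its lists from past comments, and catching deliberately misspelled variants would additionally require fuzzy matching, which this sketch omits:

    import re

    SPAM_SIGNALS = {"viagra", "hot", "casino"}  # illustrative list only
    HATE_STEMS = ("islam",)  # substring match also covers "Islamist" etc.

    def flag_reasons(comment):
        """Return human-readable reasons why a comment looks suspect."""
        text = comment.lower()
        words = set(re.findall(r"\w+", text))
        reasons = []
        if re.search(r"https?://", text):
            reasons.append("contains a link")
        if words & SPAM_SIGNALS:
            reasons.append("spam signal word")
        if any(stem in w for w in words for stem in HATE_STEMS):
            reasons.append("hate signal word or variant")
        if comment.count("!") >= 3:
            reasons.append("excessive exclamation marks")
        return reasons

    print(flag_reasons("Buy viagra now!!! http://example.com"))
    # ['contains a link', 'spam signal word', 'excessive exclamation marks']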

How do publishers handle the filtered and categorised comments?

Erbs: There are two options. One is a semi-automated dashboard solution: we filter some comments as most likely spam or hate comments, and others as qualified comments that can be published directly.

All that remains for the moderator is to manually review a small number of comments. On top of this, the moderator receives the additional comments that may be problematic.

The alternative is total automation – especially for forums for which no moderators are available.

In Germany, this is somewhat frowned upon, though the practice is more widespread internationally, where it is customary for the moderator to become active only after someone has flagged something as spam or inappropriate.

With our method we can act much earlier, before another reader even discovers a problematic comment. Suspect comments are immediately blocked and checked.
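
A minimal sketch of that routing logic, assuming the classifier emits a probability that a comment is problematic; the thresholds are invented for illustration:

    def route(problem_probability, fully_automated=False):
        """Route a comment based on a classifier's problem probability."""
        if problem_probability >= 0.9:
            return "block and check"   # near-certain spam or hate comment
        if problem_probability <= 0.1:
            return "publish directly"  # clearly a qualified comment
        # The uncertain middle band goes to a human in the semi-automated
        # mode; with no moderator available it is held back as well.
        return "block and check" if fully_automated else "moderator queue"

    for p in (0.95, 0.5, 0.02):
        print(p, "->", route(p))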

What are the potential savings for a publisher?

Erbs: We know the moderators are mostly students doing temporary work for an hourly wage of 10 to 15 euros. For a publisher receiving about 1,000 comments per hour, which compared to Facebook is not really a lot, we calculated time savings of eight hours per day.

For a 30-day month, this represents potential savings of about 3,000 euros.
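
The figure is easy to verify. Assuming a wage of 12.50 euros per hour, the midpoint of the range Erbs mentions:

    hours_saved_per_day = 8
    days_per_month = 30
    hourly_wage_eur = 12.50  # assumed midpoint of the 10-15 euro range

    print(hours_saved_per_day * days_per_month * hourly_wage_eur)  # 3000.0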

Moreover, automation can also produce added value. As a reader, I am naturally not so inclined to write a comment if I know it will take half a day before it is published. If I can communicate directly with other readers, it is clearly more attractive and strengthens customer loyalty – and consequently advertising income.

For larger publishing companies with their own teams this is often a relief, whereas at smaller publishing houses there is usually no one to take charge of this task. Journalists would have to do it, which is why those publishers often do not allow comments at all.

For publishers it is also interesting to know which topics incite readers to participate in a discussion, and what they think about the topics concerned.

To what extent can this be taken into account in topic development?

Erbs: We compile the hot topics based on comments. These can be whole sections as well as individual topic clusters. Compared with 2016, the new standard hot topic right now is Donald Trump.

We also analyse sentiment, i.e. very positive and very negative comments. It is interesting to note that although many articles tend to be contra Trump, an unusually large number of comments are pro Trump.

People are more inclined to write a comment if they are of the opposite opinion. Publishers want to encourage discussion, so this is a desirable effect.
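
A minimal sketch of such an aggregation; the classifier output below (one topic and stance label per comment) is invented for illustration:

    from collections import Counter

    # Invented output of a topic and sentiment classifier.
    classified = [
        ("Trump", "pro"), ("Trump", "pro"), ("Trump", "contra"),
        ("education", "contra"), ("Trump", "pro"),
    ]

    hot_topics = Counter(topic for topic, _ in classified)
    stances = Counter(classified)

    print(hot_topics.most_common(1))               # [('Trump', 4)]
    print(stances[("Trump", "pro")], "pro vs",
          stances[("Trump", "contra")], "contra")  # 3 pro vs 1 contra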

To what degree is objective reporting endangered if too much heed is given to readers’ opinions?

Erbs: The danger exists, but the problem is always there anyway and is only shifted. Moderators currently filter out inappropriate comments. But that is a purely subjective decision.

We have frequently seen comments that got through one day but not the next – simply depending on whether it suited the moderator.

The students working as temporary assistants often lack proper training. Algorithms decide much more objectively.

Provided the algorithm is based on objective, comprehensible rules …

Erbs: Yes, that’s right. The algorithm learns from past decisions. Initially, the moderator is in a sense copied, based on how he decided in the past. And if he displayed a very strong tendency in a certain direction, the algorithm will show the same tendency.
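
A minimal sketch of that learning step, here using scikit-learn; the comments and moderator decisions are invented, and the point is simply that the model reproduces whatever tendencies its training labels contain:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented history of moderator decisions. Any bias in these labels
    # is copied straight into the trained model.
    comments = [
        "Great article, thanks for the background.",
        "Buy cheap watches at this link!!!",
        "You people are all idiots.",
        "I disagree, tuition should be free.",
    ]
    decisions = ["publish", "block", "block", "publish"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(comments, decisions)

    print(model.predict(["Thanks, very informative piece."])[0])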

This interview was conducted by Stefanie Hornung, Head of Communications & Media Cooperations, Publishing Exhibition GmbH & Co. KG, a partner company with WAN-IFRA.
