Social media posts are twice as likely to go viral if they are negative about politicians they oppose rather than positive about those they support, a Cambridge University study suggests.
It analysed 2.7 million tweets and Facebook posts from US media outlets and political figures over five years.
The negative posts were also twice as likely to be commented on.
They also attracted more angry or laughing-emoji reactions on Facebook than the positive posts received hearts or thumbs-ups.
"If the post was coming from a Republican and it was referring to [Joe] Biden or the liberals, it was much more likely to go viral than when it was referring to any other topic or an in-group politician," co-author Steve Rathje said.
An unflattering screenshot of the US president, with a caption about his "latest brain freeze", from conservative media outlet Breitbart News, had proved wildly popular, he said.
And a tweet by left-leaning Vermont senator Bernie Sanders: "Donald Trump has lied more than 3,000 times since taking office but Republicans refuse to say Trump is a liar," had been retweeted more than 6,000 times and received about 14,900 "likes".
"You can call it trolling, some people call it 'dunking'," Mr Rathje said.
"Social-media companies desire engagement and virality from us at all costs to produce ad revenue, and we as individuals desire engagement and virality to get our message out or promote a political campaign."
The peer-reviewed study is published in the journal Proceedings of the National Academy of Sciences.
Social-media algorithms are often designed to promote the most popular material – meaning the more engagement a post has, the more likely it is to pop up in the feeds of a wider audience.
And technology companies have faced criticism that this encourages polarising, hateful and extreme content.
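The ranking behaviour described above can be illustrated with a minimal sketch. This is a hypothetical example for clarity only, not any platform's actual algorithm; the field names and equal weighting are assumptions.

```python
# Hypothetical illustration of engagement-based feed ranking.
# Real platforms use proprietary signals and tuned weights.

def rank_feed(posts):
    """Order posts by total engagement, most-engaged first."""
    def engagement(post):
        # Weight reactions, comments and shares equally here (an assumption).
        return post["likes"] + post["comments"] + post["shares"]
    return sorted(posts, key=engagement, reverse=True)

feed = [
    {"id": "a", "likes": 10, "comments": 2, "shares": 1},
    {"id": "b", "likes": 14900, "comments": 3000, "shares": 6000},
]
print([p["id"] for p in rank_feed(feed)])  # → ['b', 'a']
```

Because the most-engaged post is surfaced first, a post that provokes thousands of angry reactions and replies reaches a wider audience than a quieter, positive one, which is the feedback loop the study describes.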
Last month, Instagram announced users could choose to hide the number of "likes" a post received, despite its own testing suggesting this would have little impact on either behaviour or wellbeing.