Posts Tagged 'words'

Trump’s continuing success

As I posted earlier, our study of previous successful presidential candidates shows that success is very strongly correlated with a particular language model, consisting of:

  • Uniformly positive language
  • Complete absence of negative language
  • Using uplifting, aspirational metaphors rather than policy proposals, and
  • Ignoring the competing candidates

Trump presumably polls well, to a large extent, because he uses this language model (he has ignored the competing candidates less recently, but maybe that’s an effect of the primary). This language pattern tends to be used by incumbent presidents running for re-election, and seems to derive from their self-perception as already successful in the job they’re re-applying for. Trump, similarly, possesses huge self-confidence that seems to have the same effect — he perceives himself as (automatically, guaranteed) successful as president.

The self-perception of success and the competence issue were hard to separate before, and we’ve used ‘statesmanlike’ to describe the language model of electoral success. All of the presidential incumbents whom we previously studied had both a self-perception of success and a demonstrated competence, and we assumed that both were necessary to deploy the required language comfortably and competently. Trump, however, shows that this isn’t so — it’s possible to possess the self-perception of success without the previously demonstrated competence. In Trump’s case, presumably, it derives from competence in a rather different job: building a financial empire.

The media is in a frenzy about the competence issue for Trump. But our language model explains how it is possible to be popular among voters without demonstrating much competence, or even planned competence, to solve the problems of the day.

Voters don’t care about objective competence in the way that the media do. They care about the underlying personal self-confidence that is revealed in each candidate’s language. The data is very clear about this.

It may even be the rational view that a voter should take. Presidents encounter, in office, many issues that they had not previously formulated a policy for, so self-confidence may be more valuable than prepackaged plans. And voters have learned that most policies do not get implemented in office anyway.

It’s silly to treat Trump as a front-runner when no actual vote has yet been cast, but it wouldn’t be surprising if he continues to do well for some time. Of the other candidates, only Christie shows any feel for the use of positive language but, as a veteran politician, he cannot seem to avoid the need to present policies.


The secret of Trump’s success

Looking at US presidential elections through the lens of empirical investigation of word use shows that there’s a pattern of language that is associated with electoral success. Those who use it win, and the difference in the intensity of the pattern correlates well with the margin of victory.

The effective pattern is, in a way, intuitive: use positive language, eliminate negative language completely, talk in the abstract rather than about specific policies, and pay no attention to the other candidates.

In other words, a successful candidate should appear “statesmanlike”.

Candidates find it extremely difficult to use this approach — they feel compelled to compare themselves to the other candidates, dragging in negativity, and to explain the cleverness of their policies. Only incumbent presidents, in our investigation, were able to use this language pattern reliably.

I listened to some of Trump’s speech in Texas last night, and I’ve come to see that the media are completely and utterly wrong about why he is doing so well in the polls. It’s not that he’s tapping into a vein of disaffection with the political system; it is that he’s using this language model. In previous cycles, it’s only been incumbent presidents who’ve had the self-confidence to use it, but Trump, of course, has enough self-confidence to start a retail business selling it.

Let’s look at the components of the model:

Positive language: Trump’s positivity is orders of magnitude above that of the other candidates, and in two ways. First, he is relentlessly positive about the U.S. and about the future (catchphrase: “we can do better”). Second, he’s positive about almost everyone he mentions (catchphrase: “he’s a great guy”).

Negative language: Trump doesn’t avoid negativity altogether, but he uses it cleverly. First, his individual negative targets are not the other candidates (by and large) but pundits — Karl Rove and George Will were mentioned last night, but I doubt if more than 1% of the audience could have identified either in a line-up; so this kind of negativity acts as a lightning rod, without making Trump seem mean. And the negative references to others lack the bitterness that often bleeds through in the negative comments of more typical candidates. Second, when he mentions negative aspects of the Obama administration and its policies and actions, he does it by implication and contrast (“that’s not what I would do”, “I could do better”).

Vision not policies: the media cannot stand that Trump doesn’t come out with detailed policy plans, but it’s been clear for a while that voters don’t pay a lot of attention to policies. They’ve learned that (a) there’s a huge gap between what a president may want to do and what he can actually make happen, and (b) policies are generated with one eye on the polls and focus groups, so they often aren’t something that the candidate has much invested in doing in the first place. [It’s incredible that Secretary Clinton ran focus groups to prep her “apology”, which was actually a meta-apology for not having apologized better or earlier.]

Trump has one huge “policy” advantage — he isn’t beholden to donors, and so is freer of the behind-the-scenes pressure that most candidates face. In the present climate, this has to be a huge selling point.

Ignore the other candidates: Trump doesn’t quite do this (and it gets him into trouble), but he’s learning fast — in last night’s speech, he only mentioned a handful of his competitors, and his comments about all of them were positive.

If Trump continues to give this kind of speech, then the more exposure he gets, the more voters are going to like him. I remain doubtful that he will be the Republican nominee, but I don’t see him flaming out any time soon. Even if he makes some serious gaffe, he’ll apologize in seconds and move on (in contrast to Clinton who seems determined to make acute issues into chronic ones).

Canadian election 2015: Leaders’ debate

Regular readers will recall that I’m interested in elections as examples of the language and strategy of influence — what we learn can be applied to understanding jihadist propaganda.

The Canadian election has begun, and last night was the first English-language debate by the four party leaders: Stephen Harper, Elizabeth May, Thomas Mulcair, and Justin Trudeau. Party leaders do not get elected directly, so all four participants had trouble wrapping their minds around whether they were speaking as party spokespeople or as “presidential” candidates.

Deception is a critical part of election campaigns, but not in the way that people tend to think. Politicians make factual misstatements all the time, but it seems that voters have already baked this into their assessments, and so candidates pay no penalty when they are caught making such statements. This is annoying to the media outlets that use fact checking to discover and point out factual misstatements, because nobody cares, and they can’t figure out why.

Politicians also try to present themselves as smarter, wiser, and generally more qualified for the position for which they’re running, and this is a much more important kind of deception. In a fundamental sense, this is what an election campaign is — a Great White Lie. Empirically, the candidate who is best at this kind of persona deception tends to win.

Therefore, measuring levels of deception is a good predictor of the outcome of an election. Recall that deception in text is signalled by (a) reduced use of first-person singular pronouns, (b) reduced use of so-called exclusive words (“but”, “or”) that introduce extra complexity, (c) increased use of action verbs, and (d) increased use of negative-emotion words. This model can be applied by counting the number of occurrences of these words, adding them up (with appropriate signs), and computing a score for each document. But it turns out to be much more effective to add a step that weights each word by how much it varies in the set of documents being considered, and to compute this weighted score instead.
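To make the mechanics concrete, here is a minimal sketch of that weighted scoring step (in Python). The handful of marker words and their signs below are illustrative placeholders rather than the model’s actual lexicon, and the variance weighting is just one plausible reading of the description above.

    import re
    from collections import Counter

    import numpy as np

    # Signed marker words: +1 means use rises with deception, -1 means it falls.
    # These few words are illustrative placeholders, not the real model lexicon.
    MARKERS = {
        "i": -1, "me": -1, "my": -1,        # first-person singular (reduced in deception)
        "but": -1, "or": -1,                # exclusive words (reduced in deception)
        "go": 1, "going": 1, "take": 1,     # action verbs (increased in deception)
        "hate": 1, "worst": 1,              # negative-emotion words (increased in deception)
    }

    def marker_rates(text):
        """Per-1000-word rate of each marker word in one document."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        n = max(len(words), 1)
        return np.array([1000.0 * counts[w] / n for w in MARKERS])

    def deception_scores(documents):
        """Signed, variance-weighted score per document; higher means more deceptive."""
        X = np.vstack([marker_rates(d) for d in documents])  # documents x marker words
        signs = np.array(list(MARKERS.values()), dtype=float)
        weights = X.std(axis=0)  # weight each word by how much it varies across the documents
        return (X * signs * weights).sum(axis=1)

    # Usage: one concatenated document per leader, e.g.
    # scores = deception_scores([harper_text, may_text, mulcair_text, trudeau_text])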

So, I’ve taken the statements by each of the four candidates last night, and put them together into four documents. Then I’ve applied this deception model to these four documents, and ranked the candidates by levels of deceptiveness (in this socially acceptable election-campaign meaning of deceptiveness).

This figure shows, in the columns, the intensity of the 35 model words that were actually used, in decreasing frequency order. The rows are the four leaders in alphabetical order: Harper, May, Mulcair, Trudeau; and the colours are the intensity of the use of each word by each leader. The top few words are: I, but, going, go, look, take, my, me, taking, or. But remember, a large positive value means a strong contribution of this word to deception, not necessarily a high frequency — so the brown bar in column 1 of May’s row indicates a strong contribution coming from the word “I”, which actually corresponds to low rates of “I”.

This figure shows a plot of the variation among the four leaders. The line is oriented from most deceptive to least deceptive; so deception increases from the upper right to the lower left.

Individuals appear in different places because of different patterns of word use. Each leader’s point can be projected onto this line to generate a (relative) deception score.

May appears at the most deceptive end of the spectrum. Trudeau and Harper appear at almost the same level, and Mulcair appears significantly lower. The black point represents an artificial document in which each word of the model is used at one standard deviation above neutral, so it represents a document that is quite deceptive.

You might conclude from this that May managed much higher levels of persona deception than the other candidates and so is destined to win. There are two reasons why her levels are high: she said much less than the other candidates, so her results are distorted by the necessary normalizations; and she used “I” far less often than the others. Her interactions were often short as well, reducing the opportunities for some kinds of words to be used at all, notably the exclusive words.

Mulcair’s levels are relatively low because he took a couple of opportunities to talk autobiographically. This seems intuitively to be a good strategy — appeal to voters with a human face — but unfortunately it tends not to work well. To say “I will implement a wonderful plan” invites the hearer to disbelieve that the speaker actually can; saying instead “We will implement a wonderful plan” makes the hearer’s disbelief harder because they have to eliminate more possibilities; and saying “A wonderful plan will be implemented” makes it a bit harder still.

It’s hard to draw strong conclusions in the Canadian setting because elections aren’t as much about personalities. But it looks as if this leaders’ debate might have been a wash, with perhaps a slight downward nudge for Mulcair.

Election winning language patterns

One of the Freakonomics books makes the point that, in football aka soccer, a reasonable strategy when facing a penalty kick is to stay in the middle of the goal rather than diving to one side or the other — but goaltenders hardly ever follow this strategy because they look like such fools if it doesn’t pay off. Better to dive, and look as if you tried, even if it turns out that you dived the wrong way.

We’ve been doing some work on what kind of language wins elections in the U.S., and there are some similarities between the strategy that works and the goaltending strategy.

We looked at the language patterns of all of the candidates in U.S. presidential elections over the past 20 years, and a very clear language pattern for success emerged. Over all campaigns, the candidate who best deployed this language won, and the margin of victory relates quite strongly to how well the language was used (for example, Bush and Gore used this pattern at virtually identical levels in 2000).

What is this secret language pattern that guarantees success? It isn’t very surprising: high levels of positive words, non-existent levels of negative words, abstract words in preference to concrete ones, and complete absence of reference to the opposing candidate(s).
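For concreteness, here is a rough sketch of how a speech might be scored against those four components. The word lists are invented placeholders, not the lexicons used in the study, and the equal weighting of the components is an assumption.

    import re
    from collections import Counter

    # Placeholder word lists for illustration only; not the study's actual lexicons.
    POSITIVE = {"great", "best", "win", "better", "wonderful", "strong"}
    NEGATIVE = {"bad", "worst", "terrible", "fail", "disaster"}
    POLICY_DETAIL = {"bill", "plan", "percent", "tax", "funding"}
    OPPONENTS = {"bush", "gore"}  # would be filled in for each race

    def statesmanlike_score(text):
        """Crude score: positivity helps; negativity, policy detail, and opponent mentions hurt."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        n = max(len(words), 1)

        def rate(vocab):
            return sum(counts[w] for w in vocab) / n

        return rate(POSITIVE) - rate(NEGATIVE) - rate(POLICY_DETAIL) - rate(OPPONENTS)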

What was surprising is how this particular pattern of success was learned and used. Although the pattern itself isn’t especially surprising, no candidate used it from the start; they all began with much more conventional patterns: negativity, content-filled policy statements, and comparisons between themselves and the other candidates. With success came a change in language, but the second part of the surprise is that the change happened, in every case, over a period of little more than a month. For some presidents, it happened around the time of their inaugurations; for others around the time of their second campaign, but it was never a gradual learning curve.

This suggests that what happens is not a conscious or unconscious improved understanding of what language works, but rather a change of their view of themselves that allows them to become more statesmanlike (good interpretation) or entitled (bad interpretation). The reason that presidents are almost always re-elected is that they use this language pattern well in their second campaigns. (It’s not a matter of changing speechwriting teams or changes in the world and so in the topics being talked about — it’s almost independent of content.)

So there’s plenty of evidence that using language like this leads to electoral success but, just as for goaltenders, no candidate can bring himself or herself to use it, because they’d feel so silly if it didn’t work and they lost.

Verbal mimicry isn’t verbal (well, not lexical anyway)

One of my students, Carolyn Lamb, has been looking at deception in interrogation settings.

The Pennebaker model of deception, as devoted readers will know, is robust only for freeform documents. Sadly, the settings in which deception is often most interesting tend to be dialogues (law enforcement, forensic) and it’s known that the model doesn’t extend in any straightforward way to such settings.

We started out with the idea that responses would be mixtures of language elicited by the words in a question and freeform language from the respondent, and developed a clever method to separate them. Sadly, it worked, but it didn’t help. When the effect of question language was removed from answers, the differences between deceptive and truthful responses decreased.
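This isn’t our actual separation method, but as a naive illustration of the general idea (discounting answer words that could have been prompted by the question), something like the following is the obvious starting point:

    import re
    from collections import Counter

    def tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    def freeform_counts(question, answer):
        """Answer word counts with question-prompted occurrences crudely removed."""
        # Counter subtraction keeps only positive counts, so each occurrence of a word
        # that also appears in the question is discounted, once per question occurrence.
        return Counter(tokens(answer)) - Counter(tokens(question))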

Digging a little deeper, we were able to show that words from the question must influence response language at a higher level (i.e. earlier in the answer-construction process than the purely lexical). Those who are being deceptive respond in qualitatively different ways to prompting words than those being truthful. A paper about this has been accepted for the IEEE Intelligence and Security Informatics Conference in Seattle next month.

Part of the explanation seems to be mirror neurons. There’s a considerable body of work on language acquisition, and on responses to single words, that uses mirror neurons as a big part of the explanation; I haven’t seen anything at an intermediate level where these results fit.

There are some interesting practical applications for interrogators. One strategy would be to reduce the presence of prompting words (and do so consistently across all subjects) so that responses become closer to statements, and so closer to freeform. My impression, from my acquaintance with them, is that smarter law enforcement personnel already know this and act on it.

But our results also suggest a new strategy: increase the number of prompting words, because that tends to increase the separation between the deceptive and the truthful. This needs a good understanding of what kinds of response words to look for (and, for the most part, this has to be done offline because we as humans are terrible at estimating rates of words in real-time, especially function words). But it could be very powerful.
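As a small illustration of that offline bookkeeping, here is a sketch that tallies per-answer rates of a chosen set of words; the watch list is a placeholder, not a recommendation.

    import re
    from collections import Counter

    # Placeholder watch list; in practice this would come from the response words of interest.
    WATCH_WORDS = ["i", "but", "or", "going", "take"]

    def watch_word_rates(answers):
        """Per-answer rate (per 100 words) of each watched word, computed after the fact."""
        rows = []
        for answer in answers:
            words = re.findall(r"[a-z']+", answer.lower())
            counts = Counter(words)
            n = max(len(words), 1)
            rows.append({w: 100.0 * counts[w] / n for w in WATCH_WORDS})
        return rows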

You heard it here first

As I predicted on August 8th, Obama has won the U.S. presidential election. The prediction was based on his higher levels of persona deception, that is, the ability to present himself as better and more wonderful than he actually is. Romney developed this a lot during the campaign and the gap was closing, but it wasn’t enough.

On a side note, it’s been interesting to notice the emphasis in the media on factual deception, and the huge amount of fact checking that they love to do. As far as I can tell, factual deception has at best a tiny effect on political success, whether because it’s completely discounted or because the effect of persona is so much stronger. On his record, it seems to me a tough argument that Obama has been a successful president, and indeed I saw numerous interviews with voters who said as much — but then went on to say that they would still be voting for him. So I’m inclined to the latter explanation.

Including the results of the third debate

Just a quick update to the persona deception rankings from yesterday, to include the text of the third debate (assuming that each statement is freeform, which is slightly dubious).

Here’s the figure:

Persona deception scores after the third debate

You can see that they are running neck and neck when it comes to persona deception. Adding in the third debate changes the semantic space because the amount of text is so large compared to a typical campaign speech. The points corresponding to debates lie in the middle of the pack, suggesting that neither is trying too hard to present themselves as better than they are — this is probably typical of a real-time adversarial setting where there aren’t enough cognitive resources to get too fancy.