
Fake News

The concept of 'Fake News' fundamentally relates to a content artifact that makes representations intended to be consumed as #Non-Fiction, when in fact it is either a complete work of #Fiction or incorporates #fictionalReferences, #Opinions or other content that is not in fact #Fact or, as otherwise defined, #NonFiction.

This in turn relates to the functions and processes involved in categorising content using genres and other categories, including category-theory related techniques.

False information does not necessarily imply malice. There are various #modal considerations related to the management of artifacts that have complex systemic factors associated with them. Nonetheless, this area of social attack vectors can have seriously harmful consequences; and the most dangerous future forms of these attacks may well be carried out by #AiAgents in a personalised manner, intended to invalidate FreedomOfThought related principles.

Summary of Considerations

Whilst the term commonly used by media is 'fake news', the underlying issues relate to representations that may be intentionally false and misleading, intended to elicit or engender a particular response, or that act to pervert the ability of persons to gain a comprehension of a situation that is consistent with the actual facts of a matter. This also relates to various forms of TemporalAttacks and other SocialAttackVectors more broadly.

The underlying notion of 'fake news' may be due to various underlying circumstances; and the way in which any records are updated is in some ways as important to address as the underlying issues that may relate to the original statements having been improperly communicated. Fundamentally, the concept relates more broadly to Dishonesty, which is a problem far greater than the effect of what occurs via news media content alone.

EliPariser Google Docs contribution by me

Around November 2016, Eli Pariser produced an open Google Document to seek collaborative support for solutions to address fake news. Around that time, I made some contributions. A news article from Wired talks about the situation and the Google Doc (noting that it is important to review the historical versions of the document, as it is often defaced).

A version of my contributions is provided below (I'm unsure if or how it has been altered); noting that the content was authored to highlight solutions rather than the problem.

Considerations → Principles → The Institution of Socio-Economic Values

by: Timothy Holborn 

A Perspective by Eben Moglen from re:publica 2012

The problem of ‘fake news’ may be solved in many ways.  One way involves mass censorship of articles that do not come from major sources, but that may not result in news that is any more ‘true’.  Another may be to shift the way we use the web, but that may not help us be more connected. Machine-readable documents are changing our world.

It is important that we distill ‘human values’ in assembly with ‘means for commerce’. As we leave the former world of broadcast services, where the considerations of propaganda were far better understood, for more modern services that serve not millions but billions of humans across the planet, the principles we forged as communities seem to need to be re-established.  We have the precedents of Human Rights, but do not know how to apply them in a world where the ‘choice of law’ for the websites we use to communicate may deem us to be alien.  Traditionally these problems were solved via the application of the Liberal Arts; however, with the advent of the web, the more modern context becomes that of Web Science, incorporating the role of ‘philosophical engineering’ (and therein the considerations of liberal arts via computer scientists).

So what are our principles, what are our shared values? And how do we build a ‘web we want’ that makes our world a better place both now and into the future?

It seems many throughout the world have suffered mental health issues as a result of the recent election result in the USA: a moment in time where seemingly billions of people simultaneously highlighted a perceived issue, in which the results of a populace exercising their democratic rights produced global concern because the outcome was a significant surprise.  So perhaps the baseline question becomes: how will our web better provide the means by which to give us (humans) a more accurate understanding of world events and the circumstances felt by humans, via our ‘world wide web’?

Linked-Data, Ontologies and Verifiable Claims

By:  @Ubiquitous 

Linked-Data is a technology that produces machine- and human-readable information that is embedded in webpages.  Linked-Data powers many of the online experiences we use today, with a vast array of the web made available in these machine-readable formats.  The scope of linked-data use, even within the public sphere, is rather enormous.

Right now, most websites are using ‘linked data’ to ensure their news is presented correctly on Facebook and via search, which is primarily supported via Schema.org.
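
As a rough illustration (not part of the original contribution), this is the kind of Schema.org NewsArticle markup a publisher might embed in a page as JSON-LD; the property names come from Schema.org, while all the values are placeholders.

```python
import json

# Illustrative Schema.org NewsArticle markup, serialised as JSON-LD.
# Property names are from schema.org; the values are placeholders.
news_article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2016-11-15",
    "author": {"@type": "Person", "name": "Example Reporter"},
    "publisher": {"@type": "Organization", "name": "Example Masthead"},
}

# Embedded in a page inside a <script type="application/ld+json"> element,
# this is what search engines and social platforms read to present the story.
print(json.dumps(news_article, indent=2))
```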

The first problem is that these ontologies do not support concepts such as genre.  This means, in turn, that rather than ‘news’ being classified as it would be in any ordinary library or newspaper, the way in which ‘news’ is presented in a machine-readable format is particularly narrow and without (machine-readable) context.

This means, in turn, that the ability for content publishers to self-identify whether their article is an ‘advertorial’, ‘factual’, ‘satire’, ‘entertainment’ or another form of creative work is not currently available in a machine-readable context; a hypothetical sketch of what such self-classification might look like follows below.
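
This is a hypothetical sketch only: it shows how a publisher might self-identify the kind of work an article is, if the vocabulary supported it. The ``genre`` field is illustrative and not a claim about what any vocabulary actually provided at the time.

```python
# Hypothetical sketch: publisher self-classification of an article
# (advertorial, factual report, satire, entertainment, ...).
# The "genre" value below is illustrative only.
classified_article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "genre": "satire",  # illustrative self-classification by the publisher
}
```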

This is somewhat similar to the lack of ‘emotions’ provided by ‘social network silos’ for understanding ‘sentiment analysis’ through semantic tooling, which offers the means to profile environments and provides tooling for organisations.  Whilst Facebook offers the means to moderate particular words for its Pages product, this functionality is not currently available to humans (account holders).

The combination of a lack of available markup for classifying posts, alongside technical capabilities available to ‘persona ficta’ in a manner that is not similarly available to humans, contributes to the lack of ‘human centric’ functionality these platforms currently exhibit.

Bad Actors and Fact-Checking

In dealing with the second problem (in association with the use of Linked-Data), the means by which to verify claims is available through the application of ‘credentials’ or Verifiable Claims, which in turn relates to the Open Badges Spec.

These solutions allow an actor to gain verification from third parties, providing their audience greater confidence in the claims represented by their articles.  Whether it is the means to “fact check” words, ensure images have not been ‘photoshopped’, or other ‘verification tasks’, one or more reputable sources could use verifiable claims to support the end-user (reader / human) in gaining confidence in what has been published.  Pragmatically, this can be done either locally or via the web through third parties, through the use of Linked-Data.  For more information, get involved in the W3C; you’ll find almost every significant organisation involved with Web Technology debating how to build the standards that define the web we want.
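
As a hedged sketch (again, not part of the original contribution), a third-party “fact checked” attestation could be expressed along the lines of the W3C Verifiable Credentials data model; the issuer, credential type and claim fields below are placeholders, and a real credential would also carry a cryptographic proof.

```python
# Sketch of a third-party attestation in the style of the W3C Verifiable
# Credentials data model. Issuer, types and claim fields are placeholders;
# a real credential would also include a proof section.
fact_check_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "FactCheckCredential"],  # second type is hypothetical
    "issuer": "https://factchecker.example/",
    "issuanceDate": "2016-11-20T00:00:00Z",
    "credentialSubject": {
        "id": "https://news.example/articles/123",
        "claimReviewed": "Article images are unaltered",  # hypothetical claim fields
        "verdict": "supported",
    },
}
```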

General (re: Linked Data)

If you would like to review the machine-readable markup embedded in the web you enjoy today, one of the means to do so is via the OpenLink Data Sniffer.  An innovative concept for representing information was produced by Ted Nelson via his Xanadu concept.

Advancements in computing technology may make it difficult to trust media sources in an environment that seemingly has difficulty understanding the human-centric foundations of our world; and where the issues highlighted by many, including Eben Moglen, continue to grow.  Regardless of the technical means we have to analyse content, it will always be important that we consider virtues such as kindness; and it is important that those who represent us put these sorts of issues on the agenda, of which “fake news” has become yet another example (or symptom) of a much broader problem (imho).

A simple (additional) example of how a ‘graph database’ works is illustrated by this DbPedia example (a minimal query sketch is provided below).  The production of “web 3.0” is remarkably different from former versions due to the volume of pre-existing web-users.  Whilst studies have shown that humans are not really that different, the challenge becomes how to fund the development costs of works that are not commercially focused (ie: in the interests of ‘persona ficta’) in the short term, and how to challenge issues such as ‘fake news’ or, indeed, even how to find a toilet.

As ‘human centric’ needs continue to be unsupported via the web, and indeed also by the emerging intelligent assistants working upon the same datasets, the problem technologists have broadly produced becomes that of a world built for things that ‘sell’, without support for things we value: whether it be support for how to help vulnerable people, receipts that don’t fade (ie: not thermal, but rather machine-readable), civic services, the means to use data to uphold ‘rule of law’, to vote and participate in civics, or the array of other examples in which we have the technology, but not the accessible application by which to apply that technology to social/human needs.
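
To make the graph-database reference above a little more concrete, here is a minimal sketch of querying the public DBpedia SPARQL endpoint; the endpoint URL and result handling follow the standard SPARQL protocol, and the particular query is only an example chosen for this page.

```python
import requests

# Minimal example query against the public DBpedia SPARQL endpoint:
# fetch the English abstract of the "Fake news" resource from the graph.
query = """
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Fake_news> <http://dbpedia.org/ontology/abstract> ?abstract .
  FILTER (lang(?abstract) = "en")
}
"""

response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["abstract"]["value"])
```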

Indeed, the works we produce and contribute on the web are, for the most part, provided not simply freely, but at our own cost.  The things that are ‘human’ are less important and, indeed, poorly supported.
