
Knowledge & Agent Labelling: Clear Labelling of AI Interactions & some ideas about how to test students, by AI.

Below is an article I published on LinkedIn about the importance of labelling the contributions of AI (software) agents. The concepts illustrated in this document will end up informing an approach that will become part of the safety-protocols-related solutions.

https://www.linkedin.com/pulse/knowledge-agent-labelling-clear-ai-interactions-some-ideas-holborn/

In this quickly crafted article, I'm looking to illustrate some considerations about the importance of labelling and/or provenance of agent relations, to distinguish AI from legal entities (companies, etc.) and, fundamentally therein, human beings. I've then generated a few quick notes about possible ways to determine whether or not a student understands a topic, where they may otherwise simply use AI to generate reports (notwithstanding the historical issue with plagiarism and the apparent lack of desire to more properly employ plagiarism detectors).

The circumstances relating to these notes are different from the context of my notes in my DRAFT Webizen DevDocs, such as those about,

(Noting: these documents are works in progress; some historical docs can be found here, as part of the broader catalogue of documents.)

These alternative ecosystems that I've been working to develop over many years change the broad-ranging nature of how information-management systems are made to work, and therefore also how solutions can be engineered to address problems in ways that cannot otherwise be achieved.


In my opinion (IMO), it is important that there is a clear means to distinguish between the content, information, processes and decisions that are made by software agents (AI) vs. those made by legal entities; and, in turn, the ability to determine which specific human beings were involved in making, or causing, any sort of causality-related action to occur.

This opinion is unlikely to be shared by all persons, particularly those who would seek to pass off the work of an AI agent as their own work, without distinguishing what they did (as a human being) vs. what was done by an AI agent. This problem is in effect an extension of a broader pattern of similar behaviour, where it is often sought to be accepted that an incorporated entity (i.e. company/corporation, government department/agency, etc.) did something, without any meaningful way to determine which human beings specifically were responsible for whatever actions led to negative consequences upon others. Indeed, these sorts of problems do not appear to be so much of a problem when the outcome is positive; unless the outcome is wrongly positive for the target, or beneficiary, of a costly mistake made by any such organisation.

Given the difficulties, and in turn the attacks, that occur as a consequence of raising these sorts of difficult issues/topics, which have enormous consequential implications, I've asked ChatAI to give me its point of view on these sorts of issues...

My Question:

What are the sorts of dangers, threats and social attack vectors that may occur if human beings are falsely led to believe they are consuming the work of other human beings or legal entities; when in-fact they're consuming information generated via an artificial intelligence software program?

ChatGPT Answer:

There are several dangers and threats that may occur if individuals are falsely led to believe that they are consuming information generated by other human beings or legal entities when it is actually generated by an artificial intelligence (AI) program. Some of these include:

-  Misinformation and disinformation: AI-generated content can be used to spread false or misleading information, which can have negative effects on individuals and society at large. 

-  Manipulation and influence: AI-generated content can be used to manipulate individuals and groups by appealing to their emotions, biases, and prejudices.

-  Privacy and security concerns: AI-generated content can be used to target individuals with personalized phishing, scams, and other malicious attacks.

-  Dehumanization and mistrust: Exposure to AI-generated content that is falsely presented as being generated by humans could lead to mistrust in human-generated content and, in turn, a dehumanization of human expression. AI-generated content can also be used to perpetuate stereotypes and reinforce societal biases.

-  Interference with legal processes: AI-generated content can be used to impersonate people or entities for fraudulent or otherwise illegal activities, and could interfere with legal proceedings.

Overall, it is important for individuals to be able to distinguish between AI-generated content and human-generated content in order to make informed decisions and to protect themselves from potential dangers.

Another potential danger of falsely believing that information is generated by human beings when it is actually generated by AI is that it may lead to a lack of critical thinking and evaluation of the information being consumed. If people believe that the information they are consuming is coming from a trusted source, they may be less likely to fact-check or critically evaluate it. This can lead to the spread of misinformation and disinformation, as well as a decrease in people's ability to make well-informed decisions.

Additionally, there may be ethical concerns associated with the use of AI-generated content, particularly when it is used to impersonate human beings or legal entities. The ability to generate content that is indistinguishable from human-generated content raises questions about consent and accountability, as well as potential harms that may be caused by the use of this technology.

Finally, the widespread use of AI-generated content has the potential to disrupt various industries and change the way people think about and interact with information. It could change how professional authors, journalists, and creative professionals work, which could have both positive and negative impacts on the economy and society.

My Question: 

What are some of the threats related to circumstances where government departments and/or employees may use Artificial Intelligence to act in a manner that has human rights implications in relation to the effect of those actions if it is not clearly labelled to have been done via an AI Agent?

ChatGPT Answer:

If government departments or employees use Artificial Intelligence (AI) in a way that has human rights implications and it is not clearly labeled as having been done by an AI agent, there could be several threats. Some of these include:

-  Discrimination and bias: If an AI system is not transparent in its decision-making processes, it may perpetuate discrimination and bias against certain individuals or groups, potentially violating their human rights.

-  Lack of accountability: If an AI system is used to make decisions or take actions that have human rights implications, it can be difficult to hold individuals or organizations accountable if the system is not clearly labeled as an AI agent and its decision-making process is opaque.   

-  Due process concerns: If an AI system is used to make decisions that affect an individual's rights or freedoms, it may not always provide the same level of due process protections as a human decision-maker. This could lead to violations of human rights and the rule of law.

-  Privacy and security: If an AI system is used to process personal data without proper regulation or oversight, it could lead to privacy violations, security breaches, and other harms.

-  Lack of transparency: If the actions of an AI agent are not clearly labeled, it can be difficult for people to understand how decisions are being made, which could lead to mistrust and a lack of transparency.

-  Interference with human autonomy: If an AI agent makes decisions that have a significant impact on human life, people may lose their autonomy in the decision-making process and their right to self-determination.

Overall, it is crucial that government departments and employees are transparent in their use of AI, clearly label any decisions or actions that are made by an AI agent, and ensure that the AI systems they use are fair, accountable, and respect human rights.

Another potential threat is that the AI may have unintended consequences for human rights if it is not properly implemented, tested, and monitored. Even if the AI system is not inherently biased, the data it is trained on may be biased, leading to unjust outcomes.

Additionally, if the use of AI is not transparent and the decision-making processes of the AI system are not explainable, it may be difficult for people to understand how the decisions are being made and to challenge them if they believe that their human rights are being violated.

Moreover, if AI agents are used in law enforcement or security, without proper oversight or regulation, there could be a risk that the AI systems may be used to infringe on individuals’ rights, such as freedom of speech and freedom of assembly, or to target certain groups based on their ethnicity, religion, or political beliefs.

It is important that the use of AI by government departments and employees is subject to proper regulation and oversight, so that human rights are protected and that the actions of the AI systems can be transparently tracked and audited.

Finally, it is crucial that AI systems are continuously monitored and evaluated for their impacts on human rights, and that adjustments or replacements are made if necessary.

My Question:

What are some of the potential risks in circumstances relating to foreign governments?

ChatGPT Answer:

There are several potential risks in circumstances related to foreign governments using AI:

-  Cyber espionage: Foreign governments may use AI to conduct cyber espionage or cyberattacks against other countries, potentially stealing sensitive information or disrupting critical infrastructure.

-  Misinformation and disinformation: Foreign governments may use AI-generated content to spread false or misleading information, with the intention of manipulating public opinion or undermining the credibility of other governments.

-  Interference in domestic politics: Foreign governments may use AI to interfere in the domestic politics of other countries, with the goal of swaying elections or influencing policy decisions.

-  Economic espionage: Foreign governments may use AI to gain an unfair advantage in international trade, by collecting business intelligence and targeting specific industries or companies.

-  Military advantage: Foreign governments may use AI to enhance their military capabilities, potentially threatening the stability of regions and global security.

-  Human Rights abuse: AI technology has been used by some governments to track and control their citizens, violating their privacy and freedom. This technology can also be used to identify and monitor individuals based on their ethnicity, religion, or political beliefs, leading to further human rights violations.

Overall, the use of AI by foreign governments has the potential to destabilize international relations and threaten the security and sovereignty of other countries. It is important for governments to be aware of these risks and to take steps to protect against them, through international cooperation, regulation, and diplomacy.

Another potential risk is that foreign governments may use AI in a way that violates international law or human rights. For example, they may use AI-enabled surveillance to target political opponents, or use autonomous weapons to target civilians in warfare. These actions can have significant negative consequences on human rights and global security.

Moreover, the increasing use of AI in economic and technological fields may lead to a power imbalance between countries that have access to advanced AI technology and those that do not, leading to further economic and societal inequality.

Furthermore, if AI is not controlled and regulated properly, foreign governments may use it to violate international norms, such as economic sanctions or arms control agreements.

Lastly, foreign governments may also use AI to target individuals, companies or countries with cyber-attacks and disinformation campaigns for political, economic, or military gain.

To mitigate these risks, it is crucial for countries to work together through international organizations and agreements to establish norms and regulations for the responsible development and use of AI, as well as to cooperate on matters of cyber security, disinformation, and human rights.

My Question:

What are some of the ways AI content and/or software actions could be clearly labelled and associated with the legal entity involved and responsible for that AI content and/or software action, including both any legal personality (i.e. government department, institution, corporation or business) as well as the specific natural legal entities/individuals (human beings) who were instrumentally involved in enabling or rendering the applied use of AI for the particular artifact or event?

ChatGPT Answer:

There are several ways that AI content and/or software actions can be clearly labeled and associated with the legal entity responsible for them, including both the legal personality (e.g. government department, institution, corporation or business) and the specific natural legal entities/individuals who were involved in enabling or using the AI for the particular artifact or event. Some of these methods include:

-  Digital Signatures: Using digital signatures to verify the authenticity of AI content or software actions, and to link them to the legal entity or individual responsible for them.

-  Attribution Statements: Including an attribution statement in the AI content or software that identifies the legal entity or individual responsible for it.

-  Watermarking: Watermarking AI content or software with unique identifiers that can be used to trace it back to the legal entity or individual responsible for it.

-  Logging: Keeping detailed logs of who had access to and used the AI system, and what actions they took.

-  Transparency Reports: Publishing transparency reports that provide information on the legal entities and individuals involved in the development and use of AI, as well as the system's decision-making processes and any human rights considerations.

-  Certifications: Certifying the AI system by a third party that verifies that the system meets certain standards, such as fairness and transparency.

-  Source code or system architecture: Making the source code or system architecture of AI systems available for public inspection, in order for third parties to verify how the systems make decisions.

-  Human oversight: Ensuring that AI systems are designed with human oversight and decision-making capabilities to make adjustments or corrections if necessary, and keeping records of those adjustments or corrections.

Overall, it is important to ensure that AI systems are clearly labeled and that the legal entity or individual responsible for the system is easily identifiable and can be held accountable for its actions. This can be achieved through a combination of technical and organizational measures, such as the ones listed above, which should be chosen according to the specific context and the potential impacts of the system.
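
To make the first two of those methods a little more concrete, below is a minimal sketch (in Python, using only the standard library) of how an attribution record for an AI-generated artifact might be bound to a responsible legal entity and human operator, and then signed. The record fields, names and key material are hypothetical; a production system would use public-key signatures issued under a verifiable identity scheme rather than a shared-secret HMAC.

import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical attribution record binding an AI-generated artifact to the
# software agent, the responsible legal entity and the human operator involved.
record = {
    "artifact_sha256": hashlib.sha256(b"...AI-generated report text...").hexdigest(),
    "generated_by": "ExampleAI v1.0",     # the software (AI) agent
    "legal_entity": "Acme Inc.",          # responsible legal personality
    "operator": "Jane Smith",             # human being instrumentally involved
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Sign the record with key material held by the legal entity. HMAC is used here
# only to keep the sketch self-contained; a real deployment would use
# public-key signatures so third parties can verify without the secret.
secret_key = b"acme-signing-key"          # hypothetical key material
payload = json.dumps(record, sort_keys=True).encode("utf-8")
record["signature"] = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()

print(json.dumps(record, indent=2))

Any party holding the corresponding key material can recompute the signature over the record and detect whether the attribution statement has been altered.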

Another way of clearly labelling AI content and/or software actions, and associating them with the legal entity responsible for them, is by using blockchain technology, which can enable the creation of an immutable and transparent record of the origin and usage of the AI content or software. This can enable users to verify the authenticity of the content and trace it back to the legal entity or individual that created it, providing a tamper-proof record of the development and usage of the AI system.
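
The tamper-evidence that blockchains provide can be illustrated in miniature, without any distributed ledger, with a simple hash-chained provenance log. The sketch below (Python, standard library only, with hypothetical entries) shows the basic idea: each record commits to the one before it, so any retrospective edit breaks verification.

import hashlib
import json

def append_entry(log, entry):
    """Append an entry whose hash also covers the previous entry's hash,
    so any later tampering with the history breaks the chain."""
    previous_hash = log[-1]["hash"] if log else "0" * 64
    body = {"entry": entry, "previous_hash": previous_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log):
    """Recompute every hash in order; returns False if any record was altered."""
    previous_hash = "0" * 64
    for item in log:
        expected = {"entry": item["entry"], "previous_hash": previous_hash}
        digest = hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if digest != item["hash"] or item["previous_hash"] != previous_hash:
            return False
        previous_hash = item["hash"]
    return True

# Hypothetical provenance events.
log = []
append_entry(log, {"actor": "Jane Smith", "entity": "Acme Inc.",
                   "action": "generated report with ExampleAI"})
append_entry(log, {"actor": "John Doe", "entity": "Acme Inc.",
                   "action": "reviewed and published report"})
print(verify(log))  # True; editing any earlier entry makes this False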

Another approach to clearly identifying the responsible entity is to use standard data labels: like the "metadata" of digital files, AI artifacts should also have standard labels that clearly identify the legal entity or individual responsible for them, such as the developer, creator, owner, or operator.

Additionally, governments could also play a role in promoting transparency and accountability of AI systems by implementing regulations and guidelines on the labeling of AI content and software actions, and enforcing penalties for non-compliance.

Finally, it is important to note that it is not just a technical problem but also a societal one, and that it requires a multi-stakeholder approach, which includes industry, civil society, and government, to develop a solution and to ensure that it is effectively implemented.

Another way of clearly labeling AI content and/or software actions is by using certification systems. These systems involve an independent third party, such as a regulatory agency or industry group, evaluating the AI system and verifying that it meets certain standards, such as transparency, fairness, and accountability. These certifications can be used to clearly label the AI system and to provide assurance to users that the system has been independently reviewed and found to be trustworthy.

Another way of clearly labelling AI content and/or software actions is by using explainable AI (XAI) techniques, which allow the AI system to explain its decision-making process to users in a way that is easy for humans to understand. This can help users to trust the system and to understand how it makes decisions, and also makes it easier for legal entities or individuals to be held accountable for the actions of the system.
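
As a small illustration of the kind of output XAI techniques aim for, the sketch below trains a shallow decision tree (the technique named in the later ontology example) and prints its decision rules in human-readable form. It assumes scikit-learn is available and uses its bundled iris dataset purely as stand-in data.

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, inherently interpretable model on stand-in data and print
# its decision rules so a human can inspect how decisions are reached.
X, y = load_iris(return_X_y=True)
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

feature_names = ["sepal length", "sepal width", "petal length", "petal width"]
print(export_text(model, feature_names=feature_names))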

Finally, it is important to educate the public about the importance of clearly labeling AI content and/or software actions and associating them with the legal entity responsible for them. This can help to increase awareness of the risks associated with AI and to promote responsible use of the technology.

In summary, there are many methods for clearly labelling AI content and/or software actions and associating them with the legal entity responsible for them, such as digital signatures, attribution statements, watermarking, logging, transparency reports, certifications, blockchain, standard data labels, explainable AI (XAI) and education. These methods can be used in combination to provide a comprehensive approach to transparency and accountability of AI systems.

My Question:

How might an ontology using RDF and common vocabularies be defined to provide a means to deploy these sorts of solutions? Can you give me an example of an RDF OWL ontology?

ChatGPT Answer:

An ontology using RDF and common vocabularies can provide a structured way to describe and represent information related to the legal entity responsible for AI content and/or software actions, as well as the actions themselves. RDF (Resource Description Framework) is a standard for modeling information on the web, and it provides a way to express the relationships between resources (e.g. documents, people, and concepts) using a graph-based structure.

A simple example of an RDF OWL (Web Ontology Language) ontology for describing AI content and software actions could include classes such as "AI system", "Legal entity", "Individual", "Action", and "Responsibility". Each class would have a set of properties that describe its characteristics and relationships, such as "name", "developer", "owner", "operator", "decision-making process" and "standards met".

For example, the class "AI system" could have properties such as "name", "developer", "owner", "operator" and "decision-making process". The class "Legal entity" would have properties such as "name", "type" (e.g. government department, institution, corporation or business), "location", and "jurisdiction". The class "Individual" would have properties such as "name" and "role" (e.g. developer, owner, operator). The class "Action" would have properties such as "name", "date", "type" (e.g. decision, action) and "outcome". The class "Responsibility" would have properties such as "level" (e.g. primary, secondary, etc.) and "entity" (e.g. legal entity, individual).

These classes and properties can be interconnected to describe complex relationships between AI systems, legal entities, individuals, and actions, and can be used to trace the origin and usage of AI content and software actions, and to establish accountability and transparency. The use of common vocabularies, such as schema.org and others, would help to standardize the representation of the information and make it more easily understandable and usable by different systems and applications.

It's worth noting that this is just a simple example and that a more comprehensive ontology would be more complex and take into account the specific context and use case.

Additionally, using RDF and OWL to represent an ontology enables the use of formal reasoning to infer new information from the ontology, which is particularly useful for the transparency and accountability of AI systems. For example, by creating inference rules in the ontology, you can automatically infer information about the legal entities, individuals and actions responsible for an AI system, based on the properties of the AI system.

Moreover, using RDF and OWL in this context can also enable the integration of the ontology with other systems, such as databases, document management systems and even other AI systems. This is particularly useful for creating transparency and accountability solutions, where information about AI systems, legal entities and individuals need to be shared and combined across different systems.

Another advantage of using RDF and OWL is that they can leverage the Linked Data ecosystem, which allows for easy data sharing and integration across different organizations and domains. This allows for more collaboration between different stakeholders, like industry, civil society and government, to help ensure the transparency and accountability of AI systems.

Finally, it is important to note that creating an ontology is just a first step; the ontology itself should be continuously updated, maintained, and evaluated to make sure it remains accurate and useful, to address the complexity and dynamic nature of the field of AI.

My Question:

Can you write an example in Notation3 syntax?

ChatGPT Answer:

Sure, here is an example of an RDF ontology written in Notation3 (N3) syntax to describe an AI system and its responsible legal entities and individuals:

@prefix ai: <http://example.com/ontology/ai#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix org: <http://www.w3.org/ns/org#> .
ai:MyAI a ai:AI;
    ai:name "My AI System";
    ai:developer [ a foaf:Person; foaf:name "John Doe" ];
    ai:owner [ a org:Organization; org:name "Acme Inc." ];
    ai:operator [ a foaf:Person; foaf:name "Jane Smith" ];
    ai:decision-makingProcess "Explainable AI using Decision Trees" .

In this example, the ontology defines a class "AI" (ai:AI) and an instance of that class, "MyAI" (ai:MyAI). It also specifies that "MyAI" has a name "My AI System", a developer "John Doe" (foaf:Person), an owner "Acme Inc." (org:Organization), an operator "Jane Smith" (foaf:Person) and a decision-making process "Explainable AI using Decision Trees".

This ontology uses prefixes to simplify the URIs (ai:, foaf:, org:) used in the statements. The prefixes indicate that the URIs in the ontology are from the given namespace, so "ai:AI" refers to the URI "http://example.com/ontology/ai#AI".

Note that this is a simplified example; in real-world scenarios, the ontology would likely have more classes and properties, and would also relate to other ontologies to provide more context and meaning, such as time, location, standards met, etc.
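
To show how such a description could actually be consumed, the sketch below parses a subset of the statements above with Python's rdflib (an assumed tooling choice, not something prescribed here) and asks, via SPARQL, which legal entity owns the labelled AI system.

from rdflib import Graph

# Subset of the statements above; rdflib is an assumption, not a prescription.
n3_data = """
@prefix ai: <http://example.com/ontology/ai#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix org: <http://www.w3.org/ns/org#> .

ai:MyAI a ai:AI ;
    ai:name "My AI System" ;
    ai:developer [ a foaf:Person ; foaf:name "John Doe" ] ;
    ai:owner [ a org:Organization ; org:name "Acme Inc." ] .
"""

g = Graph()
g.parse(data=n3_data, format="turtle")

# Ask which legal entity owns (and is therefore answerable for) each labelled AI system.
query = """
PREFIX ai: <http://example.com/ontology/ai#>
PREFIX org: <http://www.w3.org/ns/org#>
SELECT ?system ?ownerName WHERE {
    ?system a ai:AI ;
            ai:owner ?owner .
    ?owner org:name ?ownerName .
}
"""
for system, owner_name in g.query(query):
    print(f"{system} is owned by {owner_name}")

The same pattern extends to developers, operators and any other property defined in the ontology.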

Additionally, the ontology can include more classes and properties such as "Action", "Responsibility", "Jurisdiction", "Location", "Date", "Standards", "Certification" and "TransparencyReports" that describe the actions taken by the AI system, the level of responsibility of the legal entities and individuals, the jurisdiction, location, date of action, standards met, certifications achieved and transparency reports respectively.

Here's an example of how these classes and properties can be added to the ontology:

@prefix ai: <http://example.com/ontology/ai#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix org: <http://www.w3.org/ns/org#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ai:MyAI a ai:AI;
    ai:name "My AI System";
    ai:developer [ a foaf:Person; foaf:name "John Doe" ];
    ai:owner [ a org:Organization; org:name "Acme Inc." ];
    ai:operator [ a foaf:Person; foaf:name "Jane Smith" ];
    ai:decision-makingProcess "Explainable AI using Decision Trees" ;
    ai:performedAction [ a ai:Action; ai:name "Decision Making";
    ai:outcome "Approve the loan";
    ai:date "2022-01-12T10:30:00Z"^^xsd:dateTime];
    ai:responsibility [ a ai:Responsibility;
    ai:level "Primary";
    ai:entity [a org:Organization; org:name "Acme Inc." ]];
    ai:jurisdiction "US";
    ai:location "New York";
    ai:standardsMet "ISO/IEC 27001:2013";
    ai:certifications "ISO/IEC 27001:2013 certified";
    ai:transparencyReports <http://example.com/reports/myaisystem> .

This enhanced example illustrates how more information can be added to the ontology, such as the actions taken by the AI system, the outcome, the date of the action, the level of responsibility of the legal entities and individuals, the jurisdiction, location, the standards met, certifications achieved, and transparency reports.

It is worth noting that the example provided is not intended to be a complete or perfect ontology, but is meant to give an idea of how RDF and OWL can be used to represent an ontology that would help with transparency and accountability of AI systems.
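
Building on the enhanced description, the following sketch (again assuming rdflib, and embedding only the relevant subset of the triples above) queries for the actions the system performed and the legal entity holding primary responsibility for them, which is the accountability trail the labelling is meant to provide.

from rdflib import Graph

# Only the triples relevant to actions and responsibility are embedded here.
data = """
@prefix ai: <http://example.com/ontology/ai#> .
@prefix org: <http://www.w3.org/ns/org#> .

ai:MyAI a ai:AI ;
    ai:performedAction [ a ai:Action ; ai:name "Decision Making" ;
                         ai:outcome "Approve the loan" ] ;
    ai:responsibility [ a ai:Responsibility ; ai:level "Primary" ;
                        ai:entity [ a org:Organization ; org:name "Acme Inc." ] ] .
"""

g = Graph()
g.parse(data=data, format="turtle")

# Which legal entity carries primary responsibility for each action the system performed?
query = """
PREFIX ai: <http://example.com/ontology/ai#>
PREFIX org: <http://www.w3.org/ns/org#>
SELECT ?action ?outcome ?entityName WHERE {
    ?system ai:performedAction ?act .
    ?act ai:name ?action ; ai:outcome ?outcome .
    ?system ai:responsibility ?resp .
    ?resp ai:level "Primary" ; ai:entity ?entity .
    ?entity org:name ?entityName .
}
"""
for action, outcome, entity_name in g.query(query):
    print(f"{action} ({outcome}): primary responsibility rests with {entity_name}")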

My Question:

Can you provide a more comprehensive example in JSON-LD?

ChatGPT Answer:

Sure, here is a more comprehensive example of an RDF ontology written in JSON-LD (JavaScript Object Notation for Linked Data) syntax to describe an AI system and its responsible legal entities and individuals:

"@context": {
        "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
        "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
        "xsd": "http://www.w3.org/2001/XMLSchema#",
        "foaf": "http://xmlns.com/foaf/0.1/",
        "org": "http://www.w3.org/ns/org#",
        "ai": "http://example.com/ontology/ai#",
        "name": "ai:name",
        "developer": {
            "@id": "ai:developer",
            "@type": "@id"
        },
        "owner": {
            "@id": "ai:owner",
            "@type": "@id"
        },
        "operator": {
            "@id": "ai:operator",
            "@type": "@id"
        },
        "decision-makingProcess": "ai:decision-makingProcess",
        "performedAction": {
            "@id": "ai:performedAction",
            "@type": "@id"
        },
        "outcome": "ai:outcome",
        "date": "ai:date",
        "responsibility": {
            "@id": "ai:responsibility",
            "@type": "@id"
        },
        "level": "ai:level",
        "entity": {
            "@id": "ai:entity",
            "@type": "@id"
        },
        "jurisdiction": "ai:jurisdiction",
        "location": "ai:location",
        "standardsMet": "ai:standardsMet",
        "certifications": "ai:certifications",
        "transparencyReports": {
            "@id": "ai:transparencyReports",
            "@type": "@id"
        }
    },
    "@id": "ai:MyAI",
    "@type": "ai:AI",
    "name": "My AI System",
    "developer": {
        "@id": "foaf:JohnDoe",
        "@type": "foaf:Person",
        "name": "John Doe"
    },
    "owner": {
        "@id": "org:AcmeInc",
        "@type": "org:Organization",
        "name": "Acme Inc."
    },
    "operator": {
        "@id": "foaf:JaneSmith",
        "@type": "foaf:Person",
        "name": "Jane Smith"
    },
    "decision-makingProcess": "Explainable AI using Decision Trees",
    "performedAction": {
        "@id": "ai:Action1",
        "@type": "ai:Action",
        "name": "Decision Making",
        "outcome": "Approve the loan",
        "date": "2022-01-12T10:30:00Z"
    },
    "responsibility": {
        "@id": "ai:responsibility1",
        "@type": "ai:Responsibility",
        "level": "Primary",
        "entity": {
            "@id": "org:AcmeInc",
            "@type": "org:Organization",
            "name": "Acme Inc."
        }
    },
    "jurisdiction": "US",
    "location": "New York",
    "standardsMet": "ISO/IEC 27001:2013",
    "certifications": "ISO/IEC 27001:2013 certified",
    "transparencyReports": "http://example.com/reports/myaisystem"
}

In this example, the ontology defines several classes, such as "AI" (ai:AI), "Action" (ai:Action), "Responsibility" (ai:Responsibility), "Person" (foaf:Person), and "Organization" (org:Organization). It also specifies that "MyAI" has a name, developer, owner, operator, decision-making process, performed actions (with outcome and date), a level of responsibility, jurisdiction, location, standards met, certifications and transparency reports.

The context provided maps the prefixes used in the ontology to the corresponding URIs, and also specifies data types and relationships between the different classes and properties. This is used to make the JSON-LD data more easily readable for both machines and humans. As before, it's important to note that this is just an example and that the ontology can be much more detailed, depending on the complexity and dynamic nature of the field of AI.
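
Because JSON-LD is just another RDF serialisation, the same data can be loaded into a triple store or converted between formats. The sketch below (assuming rdflib 6+, which bundles JSON-LD support) parses a reduced slice of the document above and re-serialises it as Turtle.

import json
from rdflib import Graph

# A reduced slice of the JSON-LD document above.
doc = {
    "@context": {
        "foaf": "http://xmlns.com/foaf/0.1/",
        "ai": "http://example.com/ontology/ai#",
        "name": "ai:name",
        "developer": {"@id": "ai:developer", "@type": "@id"},
    },
    "@id": "ai:MyAI",
    "@type": "ai:AI",
    "name": "My AI System",
    "developer": {"@id": "foaf:JohnDoe", "@type": "foaf:Person", "name": "John Doe"},
}

g = Graph()
g.parse(data=json.dumps(doc), format="json-ld")

# The same statements can now be re-serialised in another RDF syntax
# (or queried with SPARQL), which is what makes the labelling data portable.
print(g.serialize(format="turtle"))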

CONSIDERATIONS

Whilst the specific approach certainly requires a great deal more work and consideration, the merit of the underlying notion appears to be considered important even by ChatGPT.  

Knowledge Qualification Methods

The advent of AI is far more significant than the implications of the time when calculators became easily accessible. So, as is connected to this topic, and an area of consideration by many at present,

I've asked OpenAI / ChatGPT the following:

What are some practical courses of action that could assist educators in seeking to evaluate whether a student understands a topic or subject that they've been required to study, but may have used an AI program to produce the written report that is submitted, without necessarily understanding the topic that the activity was intended to help the student learn about and gain knowledge of.

What sorts of activities could be undertaken to evaluate whether or not a student has gained knowledge about the topic, rather than more simply having used an AI program to provide reports that may pass traditional tests without the student necessarily understanding or gaining knowledge of the topic, subject and/or related concepts?

ChatGPT Answer:

There are several practical courses of action that educators can take to evaluate whether a student understands a topic or subject that they've been required to study, even if they have used an AI program to produce a written report. Here are a few examples:

1. Oral or written questioning: One effective way to evaluate understanding is through oral or written questioning. The educator can ask the student to explain or summarize key concepts or ideas related to the topic, or to provide examples of how they apply the knowledge in a specific scenario.

2. Hands-on activities: Another way to evaluate understanding is through hands-on activities such as problem-solving exercises, lab experiments, or group projects. These activities allow the student to demonstrate their understanding of the topic in a practical setting.

3. In-class participation: Educators can observe and evaluate the student's level of understanding through in-class participation, such as group discussions, debates, or presentations. Through this, the educators can see the level of engagement and understanding of the student.

4. Self-reflection and feedback: Educators can ask the students to reflect on their learning process and provide feedback on the effectiveness of the learning methods they have used. Through this, the educators can evaluate if the student has gained knowledge about the topic or not.

5. Peer evaluation: Educators can also ask students to evaluate each other's understanding of the topic; this can be done through a peer review or through small group discussions, where students can identify areas of misunderstanding and help one another to understand the topic better.

6. Use of Authentic Assessment: Educators can use tasks that are similar to what students may encounter in the real world; this allows students to apply their understanding of the topic in a more meaningful and relevant context, thus evaluating real understanding.

It's worth mentioning that this list is not exhaustive, and that each educator should choose the approach that they think is the best fit for the specific subject, students, and context. Additionally, a combination of several of these methods might yield more accurate results.

Another approach educators can take is to design assessments that are difficult for AI software to complete, for example, assessments that require students to integrate or apply multiple pieces of information, or that require creativity, critical thinking, or complex problem-solving skills. Such assessments are more likely to be difficult for AI software to complete as they are not based on simple pattern recognition, but on sophisticated abilities that are characteristic of human intelligence.

Additionally, incorporating more open-ended questions that require students to provide explanations or to justify their reasoning can also be an effective way to evaluate whether students have a deep understanding of the topic, rather than just recognizing patterns in the text.

In addition to these approaches, educators can also use plagiarism detection software to identify instances where students have used AI to produce written reports without understanding the topic. However, it is important to note that plagiarism detection software is not a foolproof method, as it can only detect text that has been copied and pasted from other sources, not text that has been generated by AI.

Finally, educators should also be aware of their own role in fostering the critical thinking and understanding of students, by providing clear learning objectives, appropriate resources and opportunities for feedback and continuous improvement.

Final Remarks

Over the last few days, I've generated a few LinkedIn articles related to the issues that have seemed to be inexorable, relating to AI, human agency, human rights, digital identity and, in turn, factors that relate to my works to produce an ecosystems solution I've called Webizen (see DevDocs), which in turn provides an opportunity to decentralise AI and address various social issues including, but not limited to, digital slavery and other human-rights-related considerations. In turn, these works require an approach that is not simply technical but also social; and whilst the availability of these global AI platforms is kind of frightening in some ways, it is clearly demonstrated that the benefits far outweigh the jeopardy, at the present time. It is important we decentralise AI and support human agency, which is a necessity in relation to both human rights more broadly and, in turn, the ability to be ruled by law: the underlying foundation required by societies at large, where the people seek a system of government that is defined as a form of liberalised democracy.

Yet, as is (quickly) illustrated in my earlier articles alongside this one,

The apparent fact is that there are various agendas that have existed for a very long time, and some of the actors involved in these sorts of agendas and historical wrongs, notably including RoboDebt and the consequential Royal Commission into the Robodebt Scheme; yet this example pales by comparison to the broader productivity and ESG implications that are able to be addressed, in ways that are far more energy efficient, through the use of AI systems. The ESG implications are indeed enormous; yet the moral jeopardy, which will in turn have meaningful socio-economic implications, is not yet addressed.

Whilst my Human Centric AI works continue, it is difficult to consider how it is reasonable to expect that government is at all equipped to provide appropriate moral leadership in this very important area of humanitarian advancement. The implications of #NobodyAI are amongst the underlying drivers of Existential risk from artificial general intelligence, and the underlying problem isn't actually technical; it's social. It's the ramifications of having to address the issue of being required to produce accountability systems that many don't want.

The situation at the moment appears to be that whilst tools like ChatAI / ChatGPT may not provide the best and most correct answers, they do a far better job of providing some sort of useful feedback than should ever historically be expected to be provided, honestly and with integrity, by our Government. There are opportunities to address all sorts of issues in ways that were not previously possible; but it is important to decentralise, and it is also important to figure out how to define some sensible rules and produce tools to support those sorts of considerations, even if they're only provided as an option, which may in turn lead to a circumstance where those who use tools to support what I call Safety Protocols may be distinguished from others who gainfully seek to do otherwise. Whilst some people may look at the situation and think 'but robots can't be sent to prison' and consider it to be a problem, others will see it as an opportunity. It will be interesting to see how values will align different social groups, as it is certainly the case that solutions can be forged if there's a desire to do so.

I might also note that whilst I've been the recipient of standard template-based correspondence from various government departments, including those relating to law, in which false statements relating to some other case/circumstance were made, in a manner that had nothing to do with me, and were later used by another person for the purpose of engendering an attack (something that the government department in question and the specified persons involved sought not to take on any responsibility for, nor did they originally, which is part of the underlying problem); this asymmetrical method of interaction, which brings with it support for various social attack vectors, is no longer something that is easily done solely by those as sophisticated as public institutions of trust.

It may well be found that there are a lot of people who do not understand the responsibilities associated with the job roles they carry out; it may otherwise become the case that these sorts of ramifications are considered a lower priority than the opportunity to simply employ lower-skilled persons, whose job it is to simply press the buttons to get AI to do the jobs.
