Artificial Intelligence in Today's World

 
March 2024
 
Artificial intelligence (AI)
Overview of concepts and issues
 
 
Agenda

What is AI?
The challenges of AI
Regulatory framework and guidelines
Toolbox and practical guides
AI@Smals
 
What is AI?
 
What is artificial intelligence?

Systems (algorithms) capable of performing tasks that require intelligence similar to human intelligence:
learning
reasoning
planning
dialogue
language comprehension
visual perception
Fields of AI
 
Non-generative AI, e.g.:
problem solving and learning: AI learns patterns from large amounts of data to perform tasks such as fraud detection (a minimal sketch follows below)
virtual assistants: answering questions and performing tasks based on commands expressed in natural language – e.g. Siri, Google Assistant
recommendations and personalization: recommending products, services, films and music based on user behavior – e.g. Amazon purchase recommendations based on buying habits
social network analysis: optimization of content distribution based on individual preferences
health: help with medical diagnosis and drug discovery
autonomous systems: autonomous cars, drones, robots
Generative AI:
creation of new content (text, image, audio, video, code) that appears to be made by humans
the best-known examples are ChatGPT, DALL-E and Bing Copilot
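To make the "learning patterns from data" bullet concrete, here is a minimal sketch of a fraud-detection classifier trained on labeled transactions. The data, feature names and model choice are invented for illustration and are not taken from the presentation.

```python
# Minimal sketch: learning fraud patterns from labeled transaction data.
# The features, data and model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical transactions: [amount_eur, hour_of_day, is_foreign_country]
X = np.column_stack([
    rng.exponential(80, 5000),    # amount in EUR
    rng.integers(0, 24, 5000),    # hour of day
    rng.integers(0, 2, 5000),     # foreign-country flag
])
# Toy labels: large foreign night-time transactions are labeled fraudulent
y = ((X[:, 0] > 400) & (X[:, 2] == 1) & ((X[:, 1] < 6) | (X[:, 1] > 22))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("fraud probability for a 900 EUR foreign transaction at 3 a.m.:",
      model.predict_proba([[900, 3, 1]])[0, 1])
```

The model never receives explicit rules; it infers the fraud pattern from the labeled examples, which is the point the bullet makes.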
 
Examples of AI in our daily lives
 
Example of generative AI - DALL-E 2
 
 
 
The challenges of AI
 
Problems of AI
 
Bias and discrimination: AI can reproduce and amplify the biases present in training data (a quick bias check is sketched below)
Threat to privacy:
often available in the cloud
collection and analysis of personal data
Little transparency: “black box” algorithms => need for explainable AI
Sustainability: energy cost of AI (≈10⁶ times the human brain)
Security: malicious use of AI
Technological dependency: over-reliance on AI can lead to dependency and vulnerability in the event of failure
Specific to generative AI:
deepfakes
hallucinations
disclosure of sensitive data
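As an illustration of the bias point, the short sketch below compares positive-decision rates between two groups produced by a hypothetical screening system. The data and the 4/5 reference threshold are assumptions for illustration, not material from the presentation.

```python
# Minimal sketch of a bias check: compare positive-decision rates across groups.
# The data and the 80% ("four-fifths") reference threshold are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["selected"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)                                   # selection rate per group
print("disparate impact ratio:", round(disparate_impact, 2))
print("possible adverse impact" if disparate_impact < 0.8 else "within the 4/5 rule")
```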
 
Example of bias produced by generative AI
 
Generated image for “Lawyer” (DALL-E 2)

Generated image for “Flight attendant” (DALL-E 2)
 
Source: Sigal Samuel, “A new AI draws delightful and not-so-delightful images”, Vox, 22/04/2022
 
Example of disclosure of sensitive data
“Almost half of the users (42%) share sensitive company data.”
(https://www.rtbf.be/article/chatgpt-pres-de-50-des-employes-belges-partagent-trop-de-donnees-professionnelles-sensibles-11232702)
 
https://www.wired.com/story/chatgpt-poem-forever-security-roundup/
 
Regulatory framework and ethics guidelines
 
European Union: AI Act

Regulation of AI based on the risk it poses, with different rules for different levels of risk:
unacceptable risk (e.g. social scoring) -> EU ban
high risk (e.g. migration management, public services and benefits) -> pre-market and lifecycle assessment, compliance monitoring
limited risk (e.g. image generation) -> transparency requirement

See the EU website on the AI Act

Vote by the European Parliament scheduled for March/April 2024, entry into force 12 months later

Target audience
any organization that develops, maintains or deploys artificial intelligence systems
 
European Union: ethics guidelines

Ethical AI must meet the following requirements:
human agency and oversight
technical robustness and safety
privacy and data governance
transparency
diversity, non-discrimination and fairness
societal and environmental well-being
accountability

See Ethics guidelines for trustworthy AI

Target audience
data scientists and AI system developers
procurement officers
front-end users of AI systems
legal officers, DPOs
 
AI Charter for the public sector
 
 
BOSA initiative: ‘Charter for responsible use of artificial intelligence in the public sector (draft)’

Defines the commitments to be met in order to:
purchase AI systems
develop and deploy AI systems
 
Target audience
civil servants
general public
 
Other initiatives
 
 
@BOSA – Guidelines for the use of generative AI:
for internal use only
in preparation

@Smals – Code of conduct for the use of generative AI:
for internal use only
finalised

Reading recommendation:
Prototyping The EU AI Act’s Transparency Requirements (Kenniscentrum Data en Maatschappij)
 
Toolbox and practical guides
 
European Union guidelines – assessment tool

The Ethics guidelines for trustworthy AI have been transposed into a practical assessment tool.

Available:
in PDF format: Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
in the form of an open-source application by AI4Belgium: AI Assessment Tool

Target audience
data scientists and AI system developers
procurement officers
front-end users of AI systems
legal officers, DPOs
 
Online course - AI and discrimination
 
Organizers: Council of Europe, in partnership with BOSA/AI4Belgium/Unia/SPF Justice

Objectives:
acquire knowledge of current and future Belgian and European regulations and standards
be able to identify cases of potential discrimination linked to the use of AI systems

See Log in to the site | Council of Europe HELP (coe.int)
 
Target audience:
people working for government bodies
civil society organizations in the field of digitization
 
AI@Smals
 
AI@Smals
 
 
The Smals Research section has been investigating AI for many years, e.g.:
chatbots and conversational interfaces (Student@Work (RSZ), RVA)
synthetic data production (KSZ Datawarehouse) – a minimal sketch follows below
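The presentation does not say how the synthetic data are produced; the sketch below shows one simple, assumed approach: fit per-column models on a hypothetical "real" table, then sample artificial rows. The column names, distributions and independence assumption are illustrative only.

```python
# Minimal sketch of synthetic data production: fit simple per-column models on a
# real table, then sample artificial rows. Columns and method are illustrative
# assumptions; real projects must also preserve correlations and check
# re-identification risk.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for a real extract (hypothetical columns)
real = pd.DataFrame({
    "age": rng.integers(18, 67, 1000),
    "quarterly_salary_eur": rng.normal(9000, 2500, 1000).round(2),
    "sector": rng.choice(["health", "construction", "IT"], 1000, p=[0.5, 0.3, 0.2]),
})

n_synthetic = 1000
synthetic = pd.DataFrame({
    # numeric columns: draw from a normal distribution fitted to the real column
    "age": rng.normal(real["age"].mean(), real["age"].std(), n_synthetic)
              .round().clip(18, 66).astype(int),
    "quarterly_salary_eur": rng.normal(real["quarterly_salary_eur"].mean(),
                                       real["quarterly_salary_eur"].std(),
                                       n_synthetic).round(2),
    # categorical columns: resample with the observed frequencies
    "sector": real["sector"].sample(n_synthetic, replace=True,
                                    random_state=0).to_numpy(),
})
print(synthetic.describe(include="all"))
```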
 
 
October 2023 – Smals sets up an AI/LLM skills center dedicated to the analysis and deployment of AI solutions in order to:
enable its employees to use AI-based software to assist them in carrying out their daily tasks
integrate innovative solutions into the projects it develops on behalf of its customers
 
AI@Smals
 
Main activities of the AI/LLM skills center:
defining governance for the use of AI-based tools in the work processes of Smals teams
coordinating the industrialization of a proof of concept (POC) centered on implementing a conversational interface on top of the "Student@Work" documentation (a minimal sketch follows below)
launching POCs for various tools incorporating AI:
GitHub Copilot – code generation – for developers / architects
scope master – IT project sizing / requirements quality analysis – for project managers / analysts
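The slides do not detail how the Student@Work POC is built; the sketch below shows one common way (assumed here) to put a conversational interface on top of a documentation set: retrieve the most relevant passages, then pass them to a language model. The sample passages, the TF-IDF retrieval and the ask_llm() stub are placeholders, not the actual Smals implementation.

```python
# Minimal retrieval-augmented sketch of a Q&A interface over documentation.
# Documents, retrieval method (TF-IDF) and the ask_llm() stub are assumptions;
# a production setup would typically use dense embeddings and a hosted LLM API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Student@Work lets students check how many hours they may still work.",
    "Employers declare student work through the Dimona service.",
    "Exceeding the yearly quota changes the applicable social contributions.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vectors = vectorizer.transform(docs)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documentation passages most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def ask_llm(prompt: str) -> str:
    """Placeholder: call whatever LLM is available and return its answer."""
    return "<answer generated from the prompt>"

question = "How many hours can a student still work this year?"
context = "\n".join(retrieve(question))
answer = ask_llm(f"Answer using only this documentation:\n{context}\n\nQuestion: {question}")
print(answer)
```

Grounding the model's answers in retrieved documentation passages is also the usual way to limit the hallucination problem mentioned earlier.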
 
Contact: AICompetencyCenter@smals.be
 
Final considerations
 
 
AI lacks common sense
 
 
AI lacks concepts and abstraction
 
 
AI easily sees correlations, but fails to establish causality
 
 
 
 
Useful links on https://www.frankrobben.be/artificial-intelligence
 