In this article, I will discuss Important Questions in Artificial Intelligence Class 10. The CBSE board question paper for AI Class 10 consists of 1-mark, 2-mark, and 4-mark questions.
This article – Important Questions Artificial Intelligence Class 10 – provides similar kinds of questions. So here we go!
Topics Covered
Unit 1 Introduction to AI Class 10
Unit 1 Introduction to AI consists of the following topics and sub-topics. I have prepared questions for each topic. Follow the links to access them.
Sub Unit No | Sub Unit | Questions Link |
1 | Foundational Concepts of AI | 25+ Questions Foundational Concepts of AI |
2 | Introduction to AI Domains (Data, CV & NLP) | 20+ Questions AI Domains |
3 | AI ethics | 40+ Questions AI Ethics |
Now in the next section of this article, you will get important questions for Unit 2 AI Project Cycle.
Unit 2 AI Project Cycle Class 10
Unit 2 AI Project Cycle Class 10 has 5 sub-units. Let’s explore the questions from Unit 2 AI Project Cycle Class 10.
Sub Unit No | Sub Unit | Questions Link |
1 | Problem Scoping | 15+ Questions Problem Scoping |
2 | Data Acquisition | 30+ Questions Data Acquisition |
3 | Data Exploration | 15+ Questions Data Exploration |
4 | Data Modelling | 15+ Questions Data Modelling |
5 | Neural Network | 15+ Questions Neural Network |
Follow this link for MCQ Questions based on Introduction to Artificial Intelligence and AI Project Cycle.
The 1-mark questions in Important Questions Artificial Intelligence Class 10 may need a one-word answer, an MCQ, a fill-in-the-blank, or True/False. So let’s start!
Unit 6 Natural Language Processing AI Class 10
Let us start with 1-mark questions, which include MCQs, fill in the blanks, and other objective-type questions. Here we go!
Watch the video for more understanding:
Natural Language Processing AI Class 10 – 1 Mark Questions
Here we go with Natural Language Processing AI Class 10 – 1 Mark Questions.
[1] What are the domains of Artificial Intelligence?
[2] Identify: I work around numbers and tabular data
[3] I am all about visual data like images and video. – Who am I?
[4] What do you mean by natural language processing?
[5] Name the AI game you played which uses NLP.
Follow this link to know more about the game: Mystery Animal
[6] Sagar is collecting data from social media platforms. He collected a large amount of data. but he wants specific information from it. Which NLP application would help him?
[7] What are the features of automatic summarization?
[8] I am used to identify opinions online to help understand what customers think about products and services. Who am I?
[9] Which application of NLP assigns predefined categories to documents and organizes them to help customers find the information they want?
[10] What is an example of text classification?
[11] What is an example of automatic summarization?
[12] I am helping humans to keep notes of their important tasks, make calls for them, send messages and many more. Who am I?
[13] Name popular virtual assistants.
[14] Give the full form of CBT.
[15] How does CBT help human beings?
[16] What do you mean by chatbots?
Note: Write any definition.
[17] What are the types of chatbots?
[18] The customer care section of various companies includes which type of chatbot?
[19] Virtual assistants like Siri, Cortana, Google Assistant, etc. can be considered which type of chatbot?
[20] Give the full form of NLP.
[21] What do you mean by syntax?
[22] How does the computer interpret the syntax of a language?
[23] What do you mean by semantics?
[24] Write an example of different syntax, same semantics.
[25] Write an example of different semantics, same syntax.
[26] What do you mean by perfect syntax, no meaning?
[27] What do you mean by corpus?
[28] What is sentence segmentation in text normalization?
[29] Name the term used for any word, number, or special character occurring in a sentence.
[30] In which process is every word, number, or special character considered separately?
Watch this video for more understanding:
Natural Language Processing AI Class 10 – Short Answer Questions
Let’s see some short answer questions from Natural Language Processing AI Class 10.
[31] Mr. Dimpesh Chavda is an English teacher. He suggested removing the frequently used words which do not add any value to the paragraph. If someone wants to do this using NLP, suggest the term used for these types of words.
[32] Write examples of a few stop words.
[33] Which step comes after stop words removal?
[34] What do you mean by stemming?
[35] The stemmed words are not meaningful quite often. (True/False)
[36] Write stemmed words for these: reading, reader, flies, flying
[37] What do you mean by lemmatization?
[38] Nainesh wants to convert the tokens into numbers. Suggest the algorithm to do the same.
[39] What will be the output produced by a bag of words algorithm?
[40] Write step by step approach to implement the bag of words algorithm.
[41] What do you mean by dictionary?
[42] How to create a document vector?
[43] What is a document vector table?
[44] What is the full form of TFIDF?
[45] What is TFIDF?
[46] Differentiate between stop words and frequent words.
[47] What are rare words?
[48] What is term frequency?
[49] Rajvi is learning NLP. She wants to know what document frequency is. Suggest your answer.
[50] Radhika is learning TFIDF. She is not getting the term inverse document frequency. Explain to her what inverse document frequency is.
[51] Write the formula of TFIDF.
[52] What are the applications of TFIDF?
[53] Does the vocabulary of a corpus remain the same before and after text normalization?
[54] Name the package used for Natural Language Processing in Python.
[55] Arjun wanted to learn the leading platforms for building python programs that can work with human language data. Suggest the package name.
[56] Do the segmentation of the following sentence.
This is a chatbot. A chatbot is an application of NLP.
[57] Tokenize the above sentence.
[58] Gagan is working on an NLP model. Observe the following words and write the step which is being followed for them:
Natural, NaTuRaL, NATural, NATural, NaturaL, NAturAL
[59] Shiv is learning NLP. Write one example of common syntax but no meaning for him.
[60] Which would you prefer for better functionality: a smart bot or a script bot?
[61] “Lemmatization is more complex than stemming.” Justify this statement.
[62] “Chickens feed extravagantly while the moon drinks tea.” This is an example of ________.
[63] Observe the following table and write which algorithm would return the output shown.
Class 10 Artificial Intelligence syllabus – Artificial intelligence is new subject in skill courses. In Class 10 you will learn about NLP. | 10: 2 class: 2 artificial: 2 intelligence: 2 in: 2 syllabus: 1 -: 1 is: 1 new: 1 subject: 1 skill: 1 courses: 1 you: 1 will: 1 learn: 1 about: 1 nlp: 1 |
[64] Rearrange the steps of the Bag of words algorithm in proper order:
Create Dictionary -> Create document vector for all documents -> Create document vector -> Text normalization
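The proper order is: text normalization, create dictionary, create document vector, then create document vectors for all documents. Those four steps can be sketched in a few lines of Python (a minimal illustration with a made-up two-line corpus; normalization here is just lowercasing and splitting):

```python
# Minimal bag-of-words sketch following the four steps in order.
corpus = [
    "AI is a skill subject",
    "AI is fun",
]

# Step 1: Text normalization (simplified to lowercasing and splitting).
docs = [doc.lower().split() for doc in corpus]

# Step 2: Create the dictionary (vocabulary of unique words).
dictionary = sorted({word for doc in docs for word in doc})

# Steps 3 and 4: Create the document vector for each document.
vectors = [[doc.count(word) for word in dictionary] for doc in docs]

print(dictionary)
print(vectors)
```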
[65] What will be the output of “bodies” in stemming and lemmatization?
[66] How many tokens are there in the following sentence?
Artificial intelligence (AI) – is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment.
[67] Identify any 2 stop words, frequent words and rare/valuable words from the following paragraph:
Data preprocessing involves preparing and “cleaning” text data for machines to be able to analyze it. preprocessing puts data in the workable form and highlights features in the text that an algorithm can work with.
[68] What are the subfields of AI?
Unit 6 Natural Language Processing – 2 Marks Questions
In this section of Natural Language Processing AI Class 10, we are going to discuss the questions of 2 marks. Here we go!
[1] What do you mean by Natural Language Processing? What is the main aim of NLP?
[2] Pratik is working on a huge amount of data that is full of information. Now he wants to access a specific, important piece of information from it. Explain which NLP application helps him.
[3] How does sentiment analysis help companies understand what customers think about their products?
[4] Seema is a maths teacher. She has conducted a class test and pre-defined three categories for students based on marks scored by students. Now she wants to use the NLP application to organize students’ data. Explain the NLP application in detail to help her.
[5] Which NLP application accesses our data and can help us keep notes of our tasks, make calls for us, send messages, and a lot more? Explain this NLP application in detail.
[6] List out any four chatbots along with their developer.
[7] What do you mean by syntax? Explain in detail.
[8] Explain with an example:
- Perfect Syntax, no meaning
- Multiple meanings of a word
[9] Rekha is learning NLP. She wants to know about the concept that is used to convert natural language to machine language. Explain the concept in detail.
[10] Explain different syntax, same semantics with example.
[11] Manoj is working in Python 2 whereas Anuj is working in Python 3. They have written the same statement, 2/3, but both get different outputs. Help them understand the concept.
[12] Saurabh is working on an NLP-based project. He wanted to know about the concept where the entire corpus is divided into sentences. Help him and explain the concept.
[13] “Under tokenisation, every word, number and special character is considered separately and each of them is now a separate token.” Is this statement correct? Explain.
[14] In a few cases, special characters are not considered as stop words and they are not removed from the corpus. Explain such a case in short.
[15] Raman has a single word in different cases. Somewhere it’s written in capital, somewhere it is written in small letters. Suggest the step which is being followed after stop word removal in NLP data processing and explain it.
[16] What is stemming? What is the purpose of stemming?
[17] Write affixes and stemmed words for the following words:
controlled, controlling, controller, buddies, hobbies, thoughtful, meaningful, powerful
[18] What do you mean by lemmatization?
[19] Explain the bag of words algorithm implementation as step by step process.
[20] Create a document vector table for the given corpus.
Document 1: We are going to play cricket.
Document 2: We love cricket.
Document 3: We are playing cricket of 20 overs for each inning.
Document 4: We are playing cricket every Sunday.
We | are | going | to | play | cricket | love | playing | of | 20 | overs | for | each | inning | every | Sunday |
1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
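A table like the one above can also be built programmatically. Here is a minimal sketch (punctuation and casing handling are simplified; the dictionary keeps words in order of first occurrence, matching the table's columns):

```python
corpus = [
    "We are going to play cricket",
    "We love cricket",
    "We are playing cricket of 20 overs for each inning",
    "We are playing cricket every Sunday",
]

# Normalize: lowercase and split into tokens.
docs = [doc.lower().split() for doc in corpus]

# Dictionary in order of first occurrence, as in the table above.
dictionary = list(dict.fromkeys(w for doc in docs for w in doc))

# One row of counts per document.
rows = [[doc.count(w) for w in dictionary] for doc in docs]
for row in rows:
    print(row)
```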
[21] Do sentence segmentation and tokenization for the following:
‘Tokenization does not use a mathematical process to transform the sensitive information into the token. There is no key or algorithm that can be used to derive the original data for a token. ‘
[22] Differentiate between scriptbot and smartbot.
Important Questions Artificial Intelligence Class 10 – Natural Language Processing 4 Marks Questions
In this section, I will discuss some competency-based questions for AI class 10.
[1] Calculate TFIDF for the given corpus and mention the word(s) having the highest value.
Document 1: Radha is an intelligent girl.
Document 2: She is studying in class X.
Document 3: She has opted AI in Class X.
Document 4: She is enjoying AI.
Ans.: Term Frequency refers to the frequency of a word in one document. As a step-by-step process, we first calculate the Term Frequency and then the Inverse Document Frequency. So let’s prepare the document vector table for the given corpus.
Radha | is | an | intelligent | girl | she | studying | in | class | X | has | opted | AI | enjoying | |
Document 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Document 2 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
Document 3 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
Document 4 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
Now calculate the document frequency for the exemplar vocabulary. Document frequency refers to the number of documents in which the word occurs irrespective of how many times it has occurred in those documents. The document frequency is calculated as per the following table:
Radha | is | an | intelligent | girl | she | studying | in | class | X | has | opted | AI | Enjoying |
1 | 3 | 1 | 1 | 1 | 3 | 1 | 2 | 2 | 2 | 1 | 1 | 2 | 1 |
Now we need to put the document frequency in the denominator with the total number of documents as the numerator. Here the total number of documents is 4. Therefore the inverse document frequency is as per the following table:
Radha | is | an | intelligent | girl | she | studying | in | class | X | has | opted | AI | enjoying |
4/1 | 4/3 | 4/1 | 4/1 | 4/1 | 4/3 | 4/1 | 4/2 | 4/2 | 4/2 | 4/1 | 4/1 | 4/2 | 4/1 |
Finally, the TFIDF values are derived by multiplying the TF values by the log of the IDF values. The formula of TFIDF for any word W is:
TFIDF(W) = TF(W)*log(IDF(W))
So the table would be:
Radha | is | an | intelligent | girl | she | studying | in | class | X | has | opted | AI | enjoying | |
Document 1 | 1*log(4) | 1*log(4/3) | 1*log(4) | 1*log(4) | 1*log(4) | 0*log(4/3) | 0*log(4) | 0*log(2) | 0*log(2) | 0*log(2) | 0*log(4) | 0*log(4) | 0*log(2) | 0*log(4) |
Document 2 | 0*log(4) | 1*log(4/3) | 0*log(4) | 0*log(4) | 0*log(4) | 1*log(4/3) | 1*log(4) | 1*log(2) | 1*log(2) | 1*log(2) | 0*log(4) | 0*log(4) | 0*log(2) | 0*log(4) |
Document 3 | 0*log(4) | 0*log(4/3) | 0*log(4) | 0*log(4) | 0*log(4) | 1*log(4/3) | 0*log(4) | 1*log(2) | 1*log(2) | 1*log(2) | 1*log(4) | 1*log(4) | 1*log(2) | 0*log(4) |
Document 4 | 0*log(4) | 1*log(4/3) | 0*log(4) | 0*log(4) | 0*log(4) | 1*log(4/3) | 0*log(4) | 0*log(2) | 0*log(2) | 0*log(2) | 0*log(4) | 0*log(4) | 1*log(2) | 1*log(4) |
Now put the approx values for each word as mentioned in the following table:
Radha | is | an | intelligent | girl | she | studying | in | class | X | has | opted | AI | enjoying | |
Document 1 | 0.602 | 0.125 | 0.602 | 0.602 | 0.602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Document 2 | 0 | 0.125 | 0 | 0 | 0 | 0.125 | 0.602 | 0.301 | 0.301 | 0.301 | 0 | 0 | 0 | 0 |
Document 3 | 0 | 0 | 0 | 0 | 0 | 0.125 | 0 | 0.301 | 0.301 | 0.301 | 0.602 | 0.602 | 0.301 | 0 |
Document 4 | 0 | 0.125 | 0 | 0 | 0 | 0.125 | 0 | 0 | 0 | 0 | 0 | 0 | 0.301 | 0.602 |
The words with the highest TFIDF value (approximately 0.602) are Radha, an, intelligent, girl, studying, has, opted, and enjoying.
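The calculation can be cross-checked with a short Python sketch (base-10 logarithms, TFIDF = TF × log(N/DF); variable names are illustrative):

```python
import math

corpus = [
    "Radha is an intelligent girl",
    "She is studying in class X",
    "She has opted AI in Class X",
    "She is enjoying AI",
]
docs = [doc.lower().split() for doc in corpus]
vocab = list(dict.fromkeys(w for d in docs for w in d))

N = len(docs)
# Document frequency: number of documents containing each word.
df = {w: sum(w in d for d in docs) for w in vocab}

# TFIDF(W) = TF(W) * log10(N / DF(W)), one dict per document.
tfidf = [
    {w: d.count(w) * math.log10(N / df[w]) for w in vocab}
    for d in docs
]

print(round(tfidf[0]["radha"], 3))
print(round(tfidf[0]["is"], 3))
```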
[2] Summarize the concept of TFIDF.
[3] The world is competitive nowadays. People face competition in even the tiniest tasks and are expected to give their best at every point in time. When people are unable to meet these expectations, they get stressed and could even go into depression. We get to hear a lot of cases where people are depressed due to reasons like peer pressure, studies, family issues, relationships, etc. and they eventually get into something that is bad for them as well as for others. So, to overcome this, Cognitive Behavioural Therapy (CBT) is considered to be one of the best methods to address stress as it is easy to implement on people and also gives good results. This therapy includes understanding the behaviour and mindset of a person in their normal life. With the help of CBT, therapists help people overcome their stress and live a happy life.
For the situation given above,
- Write the problem statement template
- List any two sources from which data can be collected.
- How do we explore the data?
Ans.:
- The problem statement for the above-given scenario would be:
Our | people facing a stressful situation | Who? |
have a problem that | not able to share their feelings | What? |
while | they need help to let out their emotions | Where? |
An ideal solution would be | to provide a platform to share their thoughts anonymously and suggest help whenever required | Why? |
2. Data can be collected from various sources, a few of them are as follows:
- Surveys
- Interviews of therapists
- Online Databases
- Observations of different people and therapists
3. Once the textual data has been collected, it needs to be processed and cleaned so that an easier version can be sent to the machine. Thus, the text is normalised through various steps and is lowered to minimum vocabulary since the machine does not require grammatically correct statements but the essence of it.
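The normalization steps described above can be sketched as follows (a minimal, pure-Python illustration; the stop-word list is a made-up sample, and a real pipeline would use a package such as NLTK and also apply stemming or lemmatization):

```python
# Hypothetical mini stop-word list for illustration only.
STOP_WORDS = {"the", "is", "a", "an", "and", "to", "of"}

def normalise(text):
    """Lowercase, tokenise, drop stop words, strip punctuation."""
    tokens = text.lower().split()
    return [t.strip(".,!?") for t in tokens if t not in STOP_WORDS]

print(normalise("The text is normalised to a minimum vocabulary."))
```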
[4] Information overload is a real problem when we need to access a specific, important piece of information from a huge knowledge base. Automatic summarization is relevant not only for summarizing the meaning of documents and information, but also to understand the emotional meanings within the information, such as in collecting data from social media. Automatic summarization is especially relevant when used to provide an overview of a news item or blog post, while avoiding redundancy from multiple sources and maximizing the diversity of content obtained.
For the situation given above,
- What do you understand by automatic summarization?
- In which conditions is automatic summarization relevant?
- Prepare a problem statement for the given situation.
Ans.
- Automatic summarization is the process of shortening a text document to create a summary of its major points. It is relevant not only for summarizing the meaning of documents and information but also for understanding the emotional meanings within the information, such as when collecting data from social media.
- Automatic summarization is especially relevant when used to provide an overview of a news item or blog post, while avoiding redundancy from multiple sources and maximizing the diversity of content obtained.
- The problem statement for the given situation is as follows:
Our | Internet users and Social Media Users | Who? |
have a problem that | Overloaded information and accessing specific pieces of information from a knowledge base | What? |
while | a news item or blog post or multiple sources of information | Where? |
An ideal solution would be | to provide a platform to summarize the information needed | Why? |
[5] There are three domains of AI: Data Science works around numbers and tabular data while Computer Vision is all about visual data like images and videos. The third domain, Natural Language Processing (commonly called NLP) takes in the data of Natural Languages which humans use in their daily lives and operates on this.
Natural Language Processing, or NLP, is the sub-field of AI that is focused on enabling computers to understand and process human languages. NLP is a subfield of Linguistics, Computer Science, Information Engineering, and Artificial Intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyse large amounts of natural language data.
Answer these questions based on the above text:
- What are the three domains of AI?
- State various domains of AI for the following:
- Student result data
- Google Map
- Google Assistant
- Spam Filter
- CSV downloaded from Kaggle.com
- Define NLP.
Ans.:
- There are three domains of AI.
- Data Science
- Computer Vision
- Natural Language Processing
- Various Domains of AI
- Student result data: Data Science
- Google Map: Computer Vision
- Google Assistant: Natural Language Processing
- Spam Filter: Natural Language Processing
- CSV downloaded from Kaggle.com: Data Science
- NLP stands for Natural Language Processing. Natural Language Processing, or NLP, is the sub-field of AI that is focused on enabling computers to understand and process human languages.
[6] Nowadays Google Assistant, Cortana, Siri, Alexa, etc have become an integral part of our lives. Not only can we talk to them but they also have the ability to make our lives easier. By accessing our data, they can help us in keeping notes of our tasks, making calls for us, sending messages and a lot more. With the help of speech recognition, these assistants can not only detect our speech but can also make sense of it. According to recent research, a lot more advancements are expected in this field in the near future.
Answer the following questions:
- Name a few virtual assistants.
- What is the role of virtual assistants? How do they impact our life?
- How do virtual assistants work?
Ans.:
- Google Assistant, Cortana, Siri, Alexa
- Virtual Assistants can assist us in various tasks. They can talk to us, keep notes of our tasks, make calls for us, and send messages. They make our lives easier by accessing our data and detecting our speech.
- Virtual Assistants use speech recognition and try to understand the command given through speech. They detect our speech and make sense of it.
[7] Observe the graphs carefully and classify them according to how well the model’s output matches the data samples:
Ans.:
Figure 1: the model’s performance matches well with the true function which states that the model has optimum accuracy and the model is called a perfect fit.
Figure 2: The model’s output does not match the true function at all. Hence the model is said to be underfitting and its accuracy is lower.
Figure 3: model performance is trying to cover all the data samples even if they are out of alignment to the true function. This model is said to be overfitting and this too has lower accuracy.
[8] Suhas is working in a company named Krishna Enterprise as HR Head. He is facing problems from remote workers of his company. Remote workers are not able to communicate with one another seamlessly and easily. Sometimes network issues occur, hence the communication gap remains there. Sometimes it creates misunderstanding.
Answer the following questions based on the above situation:
- Prepare a 4Ws canvas for the problem.
- Write a few methods to visualize the data.
- Why is human communication complex for machines?
Ans.:
Our | remote workers and company professionals | Who? |
have a problem that | a lack of effective communication | What? |
while | remote workers work in mines or remote areas | Where? |
An ideal solution would be | to provide a platform to communicate easily | Why? |
2. To visualize data there are various methods, Suhas can use any of these:
- MS Excel
- Python Matplotlib
- QlikView
- Tableau
- MS Power BI
3. Human language contains various rules based on grammatical structure. Machines need to be prepared in such a manner that they can identify these rules and apply them to the model. Machines have their own rules for understanding language, as a computer understands only 0s and 1s.
In the next section of Important Questions Artificial Intelligence Class 10, we are going to discuss questions from Unit 7 Evaluation.
Unit 7 Evaluation Important Questions Artificial Intelligence Class 10
Let’s see 1-mark questions first. As you know, 1-mark questions include short definitions, one-word answers, fill in the blanks, etc. Here we go! If you are looking for notes for the same unit, follow this link:
Unit 7 Evaluation AI Class 10 – 1 mark Questions
[1] Define: Evaluation
[2] Name two parameters considered for the evaluation of a model.
[3] What is not recommended to evaluate the model?
[4] Define overfitting.
[5] Enlist the data sets used in AI modeling.
[6] What do you mean by prediction?
[7] What is reality?
[8] What are the cases considered for evaluation?
[9] Write the term used for the following cases for heavy rain prediction:
Case | Prediction | Reality |
1 | Yes | Yes |
2 | No | No |
3 | Yes | No |
4 | No | Yes |
[10] What do you mean by True Positive?
[11] What is True Negative?
[12] What is a False Positive?
[13] What is False-negative?
[14] Ritika is learning evaluation. She wants to recognize the concept of evaluation from the below-given facts:
- A comparison between prediction and reality
- Helps users to understand the prediction result
- It is not an evaluation metric
- A record that helps in the evaluation
Help Ritika by giving the name to recognize the concept of evaluation.
[15] What is the need for a confusion matrix?
[16] Devendra is confused about the condition in which a prediction is said to be correct. Help him clear his confusion.
[17] Mention two conditions when prediction matches reality.
[18] Rozin is a student of class 10 AI. She wants to know the methods of evaluation. Support her with your answer.
[19] Mihir is trying to learn the formula of accuracy. What is the formula?
[20] If a model always predicts that there is no fire, while in reality there is a 3% chance of a forest fire breaking out, what is the accuracy of the model?
[21] What do you mean by precision?
[22] Which cases are taken into account by precision?
[23] Which cases are taken into account by the recall method?
[24] Which measures are used to know the performance of the model?
[25] Rohit is working on an AI model. He wanted to know the balance between precision and recall. What is it?
[26] The task is to correctly identify mobile phones, where photos of Oppo and Vivo phones are taken into consideration. Oppo phones are the positive cases and Vivo phones are negative cases. The model is given 10 images of Oppo and 15 images of Vivo phones. It correctly identifies 8 Oppo phones and 12 Vivo phones. Create a confusion matrix for the particular cases.
Ans.: The confusion matrix is as follows:
Prediction | Prediction | ||
Negative | Positive | ||
Reality | Negative | True Negative: 12 | False Positive: 3 |
Reality | Positive | False Negative: 2 | True Positive: 8 |
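Since these questions all state the counts the same way, the four confusion-matrix cells can be derived with a small helper (a sketch; the function name and signature are my own, not from the syllabus):

```python
def confusion_counts(pos_total, pos_correct, neg_total, neg_correct):
    """Build TP/FN/TN/FP from 'correctly identified' counts.

    pos_total / neg_total: actual positive and negative cases;
    pos_correct / neg_correct: how many of each the model got right.
    """
    return {
        "TP": pos_correct,
        "FN": pos_total - pos_correct,   # positives the model missed
        "TN": neg_correct,
        "FP": neg_total - neg_correct,   # negatives wrongly called positive
    }

# Question [26]: 10 Oppo (positive, 8 correct), 15 Vivo (negative, 12 correct)
print(confusion_counts(10, 8, 15, 12))
```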
[27] There are some images of boys and girls. The girls are positive cases and boys are negative cases. The model is given 20 images of girls and 30 images of boys. The machine correctly identifies 12 girls and 23 boys. Create a confusion matrix for the particular cases.
Ans.: The confusion matrix is as follows:
Prediction | Prediction | ||
Negative | Positive | ||
Reality | Negative | True Negative: 23 | False Positive: 7 |
Reality | Positive | False Negative: 8 | True Positive: 12 |
[28] There is data given for Facebook and Instagram users. The model is given data for 200 Facebook users and 250 Instagram users. The machine identified 120 Facebook users correctly and 245 users of Instagram correctly. Create a confusion matrix for the same.
Ans.: The confusion matrix is as follows:
Prediction | Prediction | ||
Negative | Positive | ||
Reality | Negative | True Negative: 120 | False Positive: 80 |
Reality | Positive | False Negative: 5 | True Positive: 245 |
[29] Consider that there are 10 images. Out of these 7 are apples and 3 are bananas. Kirti has run the model on the images and it catches 5 apples correctly and 2 bananas correctly. What is the accuracy of the model?
[30] There are 16 images, 9 are cat images and 7 are dog images. The cat images are positive cases and dog images are negative cases. The model identifies 5 cat images correctly and 3 cat images as dog images. Similarly, it identifies 4 of them correctly as dog images. Find the accuracy of the model.
[31] There are 20 images of aeroplanes and helicopters. The machine identifies 12 aeroplane images correctly and 3 incorrectly. Similarly, it identifies 2 images correctly as helicopters. Find the accuracy of the model.
[32] The precision of the model is 1/4 and the recall of the same is 2/4. What is the F1 score of the model?
[33] Out of 300 images of Lions and Tigers, the model identified 267 images correctly. What is the accuracy of the model?
[34] Out of 400 images of fruits, the AI model’s accuracy is exactly 0.5. How many correct predictions does the machine make?
[35] The recall comes 0.65 and the precision 0.70 for an AI model. What is the F1 score based on these metrics?
[36] The recall comes 0.80 and the precision 0.40 for an AI model. What is the F1 score based on these metrics?
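Questions [32], [35], and [36] all use the same formula, F1 = 2 × precision × recall / (precision + recall). As a quick sketch (the function name is illustrative):

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.70, 0.65), 2))  # question [35]
print(round(f1_score(0.40, 0.80), 2))  # question [36]
```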
Watch this video for more understanding:
I have used this F1 score calculator to compute the F1 score. Follow the below-given link:
Unit 7 Evaluation Class 10 Artificial Intelligence – 2 Marks Questions
[1] Explain the precision formula.
[2] Explain the recall formula.
[3] What is the importance of evaluation?
[4] In which situations is the evaluation metric more important for a case?
[5] Which value for the F1 score is the perfect F1 score? Explain with context.
[6] Explain the evaluation metrics for mail spamming.
[7] How would evaluation metrics be crucial for gold mining?
[8] How can false-negative conditions be hazardous in evaluation? Explain with an example.
[9] State and explain some possible reasons why an AI model is not efficient.
[10] High accuracy is not usable. Justify this with an example.
Watch this video for more understanding:
[11] High precision is not usable. Justify this with an example.
[12] Suppose an AI model has to bifurcate volleyballs and footballs. Volleyballs are positive cases and footballs are negative cases. There are 30 images of volleyballs and 26 images of footballs. The model predicts 28 out of 30 volleyballs correctly and 2 volleyballs as footballs. It predicts 24 footballs correctly and 2 footballs as volleyballs. Compute both accuracy and precision.
[13] There are 14 images of cows and buffalos. There are 8 images of cows and 6 images of buffalos. The model has predicted 5 cows and 4 buffalos. It identifies 1 cow as buffalo and 2 buffalos as cows. Compute the accuracy, precision, recall and F1 score.
[14] For a model, the F1 score is 0.85 and precision is 0.82. Compute the recall for the same case.
[15] Out of 100 pictures of buses and trucks, 80 are actually buses and 20 are trucks. What is the minimum number of buses identified by a model to have a recall of 90% or more?
[16] Out of 40 pictures of cats and rabbits, 25 are rabbits. If the model correctly identifies 15 rabbit images, how many cat images must it identify correctly to achieve more than 75% accuracy?
[17] Draw a confusion matrix for the following:
Positive/Negative: White Pages/ Yellow Pages | No. of images: 150 |
Number of actual white pages images: 90 | True Positives: 85 |
False Positives: 20 | False Negatives: 25 |
Ans.: The confusion matrix is as follows:
True | False | |
Positive (White Pages) | 85 | 20 |
Negative (Yellow Pages) | 5 | 25 |
[18] Find the F1 score from the given confusion matrix:
The given confusion matrix is as follows:
True | False | |
Positive | 44 | 6 |
Negative | 8 | 15 |
Unit 7 Evaluation – Long Answer Questions (4 Marks)
[1] Shweta is learning NLP. She read about the F1 score but did not understand the need for the F1 score formulation. Support her by giving an answer.
[2] Calculate accuracy, precision, recall and F1 score for the following Confusion Matrix. Suggest which metric would not be a good evaluation parameter and why?
Reality:1 | Reality:0 | |
Prediction:1 | 50 | 30 |
Prediction:0 | 15 | 25 |
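As a sketch, the four metrics for the matrix above can be computed in a few lines of Python (cells read off the matrix: TP = 50, FP = 30, FN = 15, TN = 25):

```python
# Metrics for the confusion matrix in question [2] above.
TP, FP, FN, TN = 50, 30, 15, 25

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```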
Watch this video for more understanding:
I hope this article helps you prepare well for the board examinations. If you have any doubts or queries, feel free to share them in the comment section.
Thank you for reading this. Share this article with your friends and classmates to help them.