Performance Measure of Agent
This is the criterion used to measure how well an agent performs while a task is going on. It determines how successful the agent is.
Behavior of Agent
The agent performs actions in response to any given sequence of percepts; these actions are called the behavior of the agent. In other words, it describes how the agent reacts to the situations it encounters while the task is running.
Percept
It refers to the perceptual inputs to the agent at a given point in time.
Percept Sequence
It is the complete history of everything the agent has perceived to date.
Agent Function
It is a mapping from the percept sequence to the corresponding action of the agent.
Simple Reflex Agents
Simple reflex agents cannot act on the basis of past percepts; they act only on the current percept. The agent function is based on the condition-action rule.
Condition-action rule: a rule that maps a state to an action. If the state or condition is true, the action takes place; if the condition is false, the action does not happen.
This rule works well when the environment is fully observable.
If the environment is only partially observable, the agent can fall into infinite loops. It may be able to escape such loops by randomizing its actions while the task is running.
For example, suppose an agent finds a stick in a particular place and needs to collect it, so it picks the stick up. A simple reflex agent that later perceives the same stick in a different place will try to pick it up again: it does not take into account that the stick has already been picked.
This design is useful when an immediate reaction is needed during a task. For example, if a finger touches fire, we pull the hand away immediately: the brain gives the order and the hand reacts at once, so the finger is not harmed.
We can express this as a condition-action rule:
if finger is in fire then pull away hand from the fire
Here the condition is: finger is in fire
and the action is: pull away hand from the fire
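The condition-action mapping above can be sketched in code. This is a minimal illustration under assumed percept and action names ("finger_in_fire", "pull_away_hand", and so on), not any standard library:

```python
def simple_reflex_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    rules = {
        "finger_in_fire": "pull_away_hand",
        "stick_found": "pick_up_stick",
    }
    # Only the current percept is consulted; no history is kept,
    # which is why this design needs a fully observable environment.
    return rules.get(percept, "do_nothing")

print(simple_reflex_agent("finger_in_fire"))  # pull_away_hand
```

Because the lookup depends only on the current percept, calling the function with the same percept always yields the same action, no matter what happened before.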
Problems with Simple Reflex Agents:
These agents have very limited intelligence.
They have no knowledge of the non-perceptual parts of the state.
The set of rules can be too large to generate and store.
The collection of rules must be updated whenever the environment changes.
Model-Based Reflex Agents
This agent searches the condition-action rules for the one whose condition matches the current situation of the task.
It can work in partially observable environments by maintaining a basic model of the world.
The agent continuously tracks the state of the problem it is solving, adjusting that state with each percept from the environment; the state therefore depends on the percept history, which holds the full information available to the agent.
The agent always stores and maintains the current state.
Updating the state requires information about:
1. How the world evolves independently of the agent. Example: if a Mars lander picked up the rock next to the one it was going to collect, the world around it would carry on as normal.
2. How the agent's actions affect the world. Example: if a Mars lander took a sample from under a precarious ledge, it could displace a rock and be crushed.
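The difference from a simple reflex agent can be sketched with the stick example from earlier: the agent keeps an internal state and consults it before acting. The class and state layout below are illustrative assumptions, not a standard implementation:

```python
class ModelBasedReflexAgent:
    def __init__(self):
        # Internal model of the world, maintained across percepts.
        self.state = {"stick_collected": False}

    def update_state(self, percept):
        # Fold the new percept into the stored state, using knowledge of
        # how our own past actions have already changed the world.
        if percept == "stick_found" and self.state["stick_collected"]:
            return "already_collected"
        return "new"

    def act(self, percept):
        status = self.update_state(percept)
        if percept == "stick_found" and status == "new":
            self.state["stick_collected"] = True
            return "pick_up_stick"
        return "do_nothing"

agent = ModelBasedReflexAgent()
print(agent.act("stick_found"))  # pick_up_stick (first sighting)
print(agent.act("stick_found"))  # do_nothing (model says it is already collected)
```

Unlike the simple reflex version, the same percept can produce different actions, because the decision also depends on the stored state.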
Goal-Based Agents
These agents have a goal, and they make decisions based on the distance between where they currently stand and what they must do to achieve that goal.
Every action tries to reduce the distance between the agent's current state and the goal, so the agent should choose, from multiple possibilities, the path that reaches the goal most quickly.
The knowledge that supports the agent's decisions is represented explicitly and can be modified to suit the environment, which makes these agents more flexible.
Reaching the goal may require searching through many possibilities and planning ahead. A goal-based agent's behavior can be changed easily, because the alternatives are compared before the right one is chosen.
The agent program combines the environment model with the goal information to choose actions that achieve the goal.
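The "search through many possibilities" step can be sketched as a breadth-first search over an assumed map, returning the shortest sequence of states to the goal. The map of rooms is made up for the example:

```python
from collections import deque

def plan_to_goal(graph, start, goal):
    """Return the shortest sequence of states from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Illustrative map: rooms connected by doors.
rooms = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(plan_to_goal(rooms, "A", "D"))  # ['A', 'B', 'D']
```

Breadth-first search always finds a shortest path, which matches the idea of reaching the goal in the fewest steps compared with the other candidate paths.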
Utility-Based Agents
Utility-based agents are developed with their end uses as building blocks.
When there are many alternatives to decide between, utility-based agents serve best, as they choose actions based on a preference (utility) for each state.
Some end users are not satisfied merely with reaching the goal; they prefer a way to reach the destination that is quick, safe and cheap.
The preference, or utility, takes the agent's happiness as an important factor: it states how happy the agent would be in a given state.
Because of the uncertainty in the real world, an agent prefers the action that maximizes the expected utility. A utility function maps a state to a real number describing the associated degree of happiness.
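Expected-utility maximization can be sketched as follows; the two routes and all probabilities and utilities are made-up numbers for illustration:

```python
def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two ways to reach the destination; the numbers are invented.
actions = {
    "highway":   [(0.9, 10), (0.1, -50)],  # usually fast, small risk of a bad jam
    "back_road": [(1.0, 6)],               # slower but certain
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # back_road: expected utility 6.0 beats the highway's 4.0
```

This captures the "quick, safe and cheap" idea: even though the highway is faster when things go well, the certain back road has the higher expected utility once the risk is weighed in.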
Learning Agents
Programming agents by hand can be very hard, so agents can instead be built to learn. A learning agent has four conceptual components:
Learning element: responsible for making improvements.
Performance element: responsible for selecting external actions.
Critic: tells how well the agent is doing with respect to a fixed performance standard.
Problem generator: suggests actions that lead to new and informative experiences.
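The four components can be sketched together in one small class. The action names, the value-update rule and the reward signal are all illustrative assumptions, not a standard algorithm:

```python
import random

class LearningAgent:
    def __init__(self):
        # Knowledge used by the performance element: a value per action.
        self.action_values = {"left": 0.0, "right": 0.0}

    def performance_element(self):
        # Select the external action currently believed to be best.
        return max(self.action_values, key=self.action_values.get)

    def critic(self, reward):
        # Score the outcome against a fixed performance standard;
        # here the standard is simply the reward signal itself.
        return reward

    def learning_element(self, action, feedback, rate=0.5):
        # Improve the performance element using the critic's feedback.
        self.action_values[action] += rate * (feedback - self.action_values[action])

    def problem_generator(self):
        # Suggest an exploratory action to gain new experience.
        return random.choice(list(self.action_values))

agent = LearningAgent()
agent.learning_element("right", agent.critic(1.0))
print(agent.performance_element())  # right
```

The feedback loop is the point: the critic evaluates, the learning element adjusts the performance element's knowledge, and the problem generator keeps the agent trying new things.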
Reactive machines AI
Reactive machines are purely reactive: they act on the current situation without forming memories or drawing on past experience. Example: Deep Blue, IBM's chess-playing supercomputer, which beat the reigning chess grandmaster.
Deep Blue is programmed to identify all the pieces on a chessboard and knows how each of them moves. It can also predict its opponent's next move, and so provide an optimal response. It ignores all past moves and concentrates only on the pieces currently on the board; that is how it chooses its next move.
Google's AlphaGo is another reactive machine; it has beaten top Go experts. Its analysis method is more complicated than Deep Blue's, as it uses a neural network to evaluate game positions.
Limited Memory AI
Limited memory AI is used in self-driving cars, which observe the movements of other vehicles over time. Static data such as traffic lights, curves in the road and even lane markings are added as well, so the car can avoid being hit by a nearby vehicle. These cars are programmed to know when to change lanes and where to stop, and they keep such observations only briefly, on the order of 100 seconds, while making a driving decision.
Theory of Mind AI
This AI is an advanced technology: in psychological terms, it would genuinely understand people and other beings that have emotions. Researchers say that, to build it, they must first create robots that can detect eye movements and facial expressions and react to them accordingly.
It would be a revolution in history if it were created: a step towards understanding human intelligence itself.
When it happens, such an AI would tune itself to cues from people, such as attention-seeking and emotional behavior, and would display self-driven reactions. Designing this AI would be very challenging, but it would be exceptional at classifying what it sees in front of it.
Artificial Narrow Intelligence (ANI)
This is the technology we use most often in our daily lives; we find it in smartphone assistants such as Siri and Cortana, which respond immediately to our requests. It is also called weak AI, as it is not as strong as it needs to be.
Artificial General Intelligence (AGI)
These work like humans and are also called strong AI. Some robots are presented as AGI; the Pillo robot is an example, as it can answer questions regarding the health of the family. It is like a full-time live-in doctor: it provides guidance on health issues and also dispenses pills.
Artificial Superhuman Intelligence (ASI)
This could do everything a human does, and more. Alpha 2, the first humanoid robot created mainly for the family, is capable of managing and operating everything at home. It can make you feel that the robot is a member of the family: it can tell interesting stories and even predict the weather.
Game Playing
Game playing uses a lot of heuristic knowledge, as the machine needs to think through an enormous number of possibilities; this plays a crucial role in games such as chess and poker.
Natural Language Processing
Interaction with such a system is much easier, as it understands the natural languages that humans speak.
Speech recognition lets an intelligent system hear and comprehend language while it talks to a human. It can handle changes in the human voice due to a cold, the slang used with some words, and different accents.
Vision systems scan, understand and interpret visual input on the user's system. Some examples of vision systems are:
A spy plane that takes photographs and produces content such as a map of the area.
Doctors diagnosing patients using an expert clinical system.
Expert systems are applications that integrate machines, software and special information to impart reasoning and advising; they provide explanations, advice and solutions to their users.
Humans give robots commands to perform tasks, and the robots carry those tasks out. Robots have sensors to detect real-world properties such as temperature, movement, light, heat, sound, bumps and pressure, and they can record this physical data.
They have efficient processors, multiple sensors and large memories to store information and recall it when it is needed for a task. They can adapt to new environments and learn from their mistakes.
Handwriting & Speech Recognition
Handwriting-recognition software can read text written on paper with a pen, or on a screen with a stylus. Such intelligent software recognizes the shapes of the letters and converts them into an editable text format.
Some intelligent systems, when humans talk to them, can hear and comprehend the language in terms of sentences and their meanings. They can recognize different accents, slang, background noise, and changes in the human voice due to a cold.
2. Fraud Detection
This application finds unusual activity on an account to block potential fraud; it deals with fraud detection on credit or bank cards. If it finds unusual activity during a transaction, it sends a text, an email or a phone call, and you are asked to confirm that it was you and authorize the transaction.
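The "unusual activity" check can be sketched as a simple rule; real fraud systems are far more sophisticated, and the threshold and fields below are made-up assumptions:

```python
def is_suspicious(amount, usual_average, country, home_country, threshold=5.0):
    """Flag a transaction that is unusually large or from an unusual country."""
    # Either trigger alone is enough to ask the user to confirm.
    return amount > threshold * usual_average or country != home_country

# Typical purchase: no alert. Large foreign purchase: alert sent to the user.
print(is_suspicious(40.0, 50.0, "US", "US"))   # False
print(is_suspicious(900.0, 50.0, "RU", "US"))  # True
```

A flagged transaction would then be held until the account holder confirms it, matching the text/email/phone-call flow described above.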
Smart homes with AI would ease our living, as they can take over much of the work a human does. For instance, they can automate heating and cooling, predicting when to turn on the boiler for optimal comfort, and future ovens could have our food ready when we get home. Lighting is another example: it works with sensors that automatically turn lights on or off as we move around the house.
Imagine a car without any driver. Self-driving cars have all their routes registered using Google Maps.
Self-driving cars may become commonplace in the future. Examples: Google's self-driving car project and Tesla's Autopilot feature. Some high-end vehicles already come with AI parking systems.
The idea of the self-driving car is that the car can look at the road and make decisions, such as turning left or right, based on the destination and the route it has chosen, using what it sees through its cameras.
The car has a multi-domain controller that manages the inputs from camera, radar and lidar. The camera takes pictures of the road, and those images are interpreted by the software inside the car. The radar sends out radio waves to detect objects; it works in all weather conditions, but it cannot differentiate between objects: to it, all objects look the same. The lidar sends out light waves, and the reflected light defines the edge lines of objects on the road; lidar also works in the dark.
Siri and Cortana are widely known as virtual assistants, as they help us find the data we need. We speak a request to them and they respond; it could be a location, general information, scheduling our day, and so on. These clever assistants provide the requested information.
Security surveillance is about training computers to monitor the cameras, which is a great help, since asking a single person to monitor all the cameras would be hard to sustain. With enough training, security algorithms take input from the security cameras and determine whether there is a threat; when they see warning signs, they alert the human security officers.
Microsoft Azure Machine Learning
This cloud-based advanced-analytics platform simplifies machine learning for business. Models can be built with best-in-class algorithms drawn from Microsoft products such as Xbox and Bing, with support for languages such as Python.
This platform simplifies core business processes by capturing data and insight from people across fragmented and complex systems. It also gives users a delightful experience, building on state-of-the-art technology.
Rainbird
Rainbird enables us to combine our data with existing data, as well as business and human knowledge, to automate knowledge work and deliver results that transform the way staff and customers interact with each other.
A fast and accurate way to track time for teams: you will never have to chase your team for their timesheets again.