
Bridging The Gap

Explore AI World

By John R. Williams

[Images: TeslaBot, Bridging AI]

Most people assume AI cannot be creative and that artistic and creative jobs will therefore be safer in the future. But creativity is largely a matter of discarding bad ideas and being able to identify good ones.


On one side, we see examples of AI replacing humans in many professions, from publishing novels to brewing and serving coffee at San Francisco coffee bars; on the other, there are examples where AI has proved why a human-in-the-loop is still needed.


Artificial Intelligence, Automation, and Machine Learning are new-age technologies designed to make human lives better, and they are supposed to work effortlessly, without any human in the loop.

 

Many are already planning to hand over the keys of control to our AI overlords. But before we do so, we need to pause and consider the consequences and the cost of increasing our AI dependency.

[Image: Dojo System]

LINKS

The following links are provided for additional information on "Bridging The Gap":

X (Twitter)

Patreon

YouTube

Please contact me at jrwilliams5mv@gmail.com if you require any additional information.

Thank you!

AI FACTS

[Image: AI Timeline]


History: Milestones and Innovations

  • 1923 - Karel Čapek's play "Rossum's Universal Robots" marks the first use of the word "robot" in English.

  • 1943 - Foundations for neural networks laid.

  • 1945 - Isaac Asimov, a Columbia University alumnus, used the term "robotics".

  • 1956 - John McCarthy first used the term "Artificial Intelligence". The first running AI program was demonstrated at Carnegie Mellon University.

  • 1964 - Danny Bobrow’s dissertation at MIT showed how computers could understand natural language.

  • 1969 - Scientists at the Stanford Research Institute developed Shakey, a robot capable of locomotion and problem-solving.

  • 1979 - The world’s first computer-controlled autonomous vehicle, Stanford Cart, was built.

  • 1990 - Significant demonstrations in machine learning.

  • 1997 - The Deep Blue Chess Program beat the then world chess champion, Garry Kasparov.

  • 2000 - Interactive robot pets became commercially available. MIT displayed Kismet, a robot with a face that expresses emotions.

  • 2006 - AI entered the business world; companies like Facebook, Netflix, and Twitter started using AI.

  • 2012 - Google launched an Android feature called "Google Now", which provided users with predictions.

  • 2018 - The “Project Debater” from IBM debated complex topics with two master debaters and performed exceptionally well.

Goals of Artificial Intelligence

Here are the main goals of AI:


  • Reducing the time needed to perform specific tasks.

  • Making it easier for humans to interact with machines.

  • Facilitating more natural and efficient human-computer interaction.

  • Improving the accuracy and speed of medical diagnoses.

  • Helping people learn new information more quickly.

  • Enhancing communication between humans and machines.

Subfields of Artificial Intelligence

Here are some important subfields of Artificial Intelligence:


  1. Machine Learning: Machine learning is the study of algorithms that learn from examples and experience. It is based on the idea that patterns identified in existing data can be used to make predictions on new data. The difference from hardcoding rules is that the machine learns to find such rules itself.

  2. Deep Learning: Deep learning is a subfield of machine learning. "Deep" does not mean the machine learns more in-depth knowledge; it refers to the use of multiple layers to learn from the data. The depth of a model is the number of layers it has. For instance, Google's GoogLeNet model for image recognition has 22 layers.

  3. Neural Networks: A neural network is a group of connected I/O units where each connection has a weight assigned by the program. Neural networks help you build predictive models from large databases. The model is inspired by the human nervous system and can be used for image understanding, human-like learning, speech processing, and more.

  4. Expert Systems: An expert system is an interactive, reliable computer-based decision-making system that uses facts and heuristics to solve complex decision-making problems. It is designed to perform at the level of a human expert, and its main goal is to solve the most complex issues in a specific domain.

  5. Fuzzy Logic: Fuzzy logic is a many-valued logic in which the truth value of a variable may be any real number between 0 and 1. It handles the concept of partial truth: in real life, we may encounter situations where we cannot decide whether a statement is simply true or false.
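The distinction drawn in item 1, learning a rule from examples rather than hardcoding it, can be sketched in a few lines of Python. The data and the recovered rule below are invented for illustration, and a simple least-squares fit stands in for the learning algorithm:

```python
# Learn the rule behind (x, y) example pairs instead of hardcoding it.
# Uses only the standard library: a one-variable least-squares fit.

def fit_line(examples):
    """Estimate slope and intercept from (x, y) example pairs."""
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in examples) / \
            sum((x - mean_x) ** 2 for x, _ in examples)
    intercept = mean_y - slope * mean_x
    return slope, intercept

examples = [(1, 3), (2, 5), (3, 7), (4, 9)]  # generated by y = 2x + 1
slope, intercept = fit_line(examples)
predict = lambda x: slope * x + intercept
```

Given pairs generated by y = 2x + 1, the fit recovers slope 2 and intercept 1, so predict(10) returns 21 without that rule ever being written by hand.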
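Item 2's point that depth simply means the number of layers can be illustrated with a minimal forward pass in plain Python. Each layer is a set of weighted sums followed by a nonlinearity; the weights below are made up for illustration, not trained:

```python
def relu(values):
    """Rectified linear unit: a common layer nonlinearity."""
    return [max(0.0, v) for v in values]

def dense_layer(inputs, weights, biases):
    """One layer: a weighted sum of the inputs plus a bias, per unit."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

def forward(inputs, layers):
    """The 'depth' of the model is simply len(layers)."""
    for weights, biases in layers:
        inputs = dense_layer(inputs, weights, biases)
    return inputs

# A depth-2 network: 2 inputs -> 3 hidden units -> 1 output.
net = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, 0.0]),
    ([[1.0, -1.0, 0.5]], [0.2]),
]
output = forward([1.0, 2.0], net)
```

Adding another (weights, biases) pair to net would make it a depth-3 model; nothing else in the code changes.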
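A toy version of item 4's facts-plus-heuristics idea can be written as forward chaining over if-then rules. The medical-style facts and rules here are invented for illustration, not a real diagnostic system:

```python
def infer(initial_facts, rules):
    """Forward chaining: keep applying rules until no new fact is derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and conditions <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]
derived = infer({"fever", "cough", "short_of_breath"}, rules)
```

Note that the second rule only fires after the first has added flu_suspected, which is why the loop repeats until nothing changes.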
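Item 5's partial truth values can be sketched with the common min/max fuzzy operators. The "warm" membership function below is an invented example of a truth value between 0 and 1:

```python
def warm(temp_c):
    """Degree to which a temperature counts as 'warm', in [0, 1]."""
    if temp_c <= 10:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 10) / 15  # linear ramp between 10 and 25 degrees C

# Standard fuzzy connectives: AND is min, OR is max, NOT is complement.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a
```

So a statement like "it is warm" at 17.5 degrees C is 0.5 true, rather than simply true or false.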


Neural networks have been around since the nineties, following the seminal papers of Yann LeCun, but they only became famous around 2012. Three critical factors explain this rise in popularity:

 

1. Hardware

2. Data

3. Algorithm

 

Machine learning is an experimental field, meaning it needs data to test new ideas and approaches. With the boom of the internet, data became more easily accessible. In addition, giant companies like NVIDIA and AMD developed high-performance graphics chips, originally for the gaming market.

[Image: Neural Networks]

Hardware

 

In the last twenty years, CPU power has exploded, allowing users to train a small deep-learning model on any laptop. However, processing a deep-learning model for computer vision requires a far more powerful machine. Thanks to the investment of NVIDIA and AMD, a new generation of GPUs (graphics processing units) is available. These chips allow parallel computation, and a machine can spread the work over several GPUs to speed up the calculations. For instance, with an NVIDIA TITAN X, it takes two days to train a model on ImageNet, against weeks for a traditional CPU. Big companies also use clusters of GPUs such as the NVIDIA Tesla K80 to train deep-learning models, because doing so reduces data-center cost and provides better performance.

Data

 

Deep learning provides the structure of the model, and data is the fluid that brings it to life. Data powers artificial intelligence; without data, nothing can be done. The latest technologies have pushed the boundaries of data storage, making it easier than ever to keep huge amounts of data in a data center.

The internet revolution has made data collection and distribution available to feed machine-learning algorithms. If you are familiar with Flickr, Instagram, or any other image-based app, you can guess their AI potential: these sites hold millions of tagged pictures that can train a neural network to recognize objects without the need to collect and label the data manually.

Artificial intelligence combined with data is the new gold. Data is a unique competitive advantage that no firm should neglect, and AI provides the best answers from your data. When every firm has access to the same technologies, the one with the data has the edge. To give an idea of the scale, the world creates about 2.2 exabytes, or 2.2 billion gigabytes, of data every day. A company needs exceptionally diverse data sources to find patterns and learn from such substantial volumes.

Algorithm

 

Hardware is more powerful than ever and data is easily accessible, but one thing that has made neural networks more reliable is the development of more accurate algorithms. Early neural networks were simple matrix multiplications without deep statistical properties; since 2010, remarkable discoveries have improved them. Artificial intelligence uses progressive learning algorithms to let the data do the programming: the computer can teach itself how to perform different tasks, such as finding anomalies or acting as a chatbot.
