An artificial neuron is a connection point in an artificial neural network. Artificial neural networks, like the human body's biological neural networks, have a layered architecture, and each network node (connection point) can process input and forward output to other nodes in the network. In both artificial and biological architectures, the nodes are called neurons and the connections are characterized by synaptic weights, which represent the significance of the connection. Artificial neurons are modelled on the hierarchical arrangement of neurons in biological sensory systems. In the visual system, for example, light input passes through neurons in successive layers of the retina before being passed to neurons in the thalamus of the brain and then on to neurons in the brain's visual cortex.
In both artificial and biological networks, when neurons process the input they receive, they decide whether the output should be passed on to the next layer as input. This decision is governed by an activation function built into the system, together with a bias term that shifts the neuron's firing threshold. For example, an artificial neuron may only pass an output signal on to the next layer if its inputs sum to a value above a particular threshold. Activation functions can be either linear or non-linear, and neurons often exhibit a wide range of convergence and divergence: divergence is the ability of one neuron to communicate with many other neurons in the network, and convergence is the ability of one neuron to receive input from many other neurons in the network.
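The behaviour described above can be sketched in a few lines. This is a minimal, illustrative model of a single artificial neuron; the weights, bias and inputs are invented for the example, not taken from the source.

```python
# Minimal sketch of one artificial neuron: weighted sum of inputs, shifted
# by a bias, passed through a step (threshold) activation function.

def neuron(inputs, weights, bias):
    """Fire (return 1) only when the weighted input sum plus bias is positive."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # step activation: pass signal on or not

# Convergence: this neuron receives input from three upstream neurons.
out = neuron([0.5, 0.9, 0.2], weights=[0.4, 0.7, -0.3], bias=-0.5)
# Divergence would mean `out` is forwarded to many downstream neurons.
```

With these illustrative values the weighted sum (0.77) exceeds the bias-shifted threshold, so the neuron passes its signal on.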
Neuromorphic computing is inspired by biology: it implements models of the human brain in terms of neurons and synapses, and neuromorphic systems offer unique and novel solutions for AI. Machine learning is the study of algorithms that improve their performance at some task with experience, optimizing a performance criterion using example data or past experience. Statistics contributes inference from a sample; computer science contributes efficient algorithms for solving the optimization problem and for representing and evaluating the model used for inference.
The session bridged the gap between reinforcement learning and neuromorphic computing, and discussed autonomous vehicles, their features and performance, and various classifications of artificial intelligence. It also covered aspects of the control-system approach and real-time applications, touching on emerging fields that influence human life, the computing processes of the brain, and attempts to replicate them in modern electronics.
A combination, or connection, of several neurons results in an artificial neural network (ANN). ANNs are classified by architecture and by training procedure (algorithm). Based on architecture, ANNs are classified as single-layer or multilayer. One significant difference between an ANN and a biological neural network (BNN) is that the neurons of a BNN are arranged in a seemingly random fashion, whereas all the neurons in an ANN are arranged in an orderly fashion.
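The single-layer versus multilayer distinction can be made concrete with a small sketch. The layer shapes, weights and ReLU activation below are illustrative choices of ours, not from the session.

```python
# Sketch: a single-layer ANN applies one (weights, bias) layer to the input;
# a multilayer ANN stacks layers, each feeding the next in orderly sequence.

def forward(x, layers):
    """Pass input x through each (weights, bias) layer with a ReLU unit."""
    for weights, bias in layers:
        x = [max(0.0, sum(xi * w for xi, w in zip(x, row)) + b)
             for row, b in zip(weights, bias)]
    return x

single_layer = [(([1.0, -1.0],), (0.0,))]          # one layer: 2 inputs -> 1 output
multi_layer = single_layer + [(([2.0],), (0.1,))]  # stack a second layer on top

y1 = forward([0.6, 0.2], single_layer)  # single-layer output
y2 = forward([0.6, 0.2], multi_layer)   # multilayer output
```

The same forward routine handles both cases; the only architectural difference is how many layers the list contains.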
The various algorithms, with their merits and demerits, were discussed with a few examples. Possible applications were also discussed: biological (learning more about the brain and other systems; modelling the retina and cochlea), environmental (analysing trends and patterns; forecasting weather) and business (mining corporate databases; optimizing airline seating and fee schedules), along with face recognition, gesture recognition, gait recognition, video surveillance, speaker identification, satellite image processing and biometric person identification, together with their features, merits and limitations.
Evolutionary algorithms for optimization: optimization is a procedure of finding and comparing feasible solutions until no better solution can be found. Single-objective optimization arises when an optimization problem involves only one objective function; multi-objective optimization arises when it involves more than one. Optimization problems are also classified by constraints (constrained versus unconstrained problems) and by the nature of the design variables (static versus dynamic optimization problems).
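The definition above — compare feasible solutions until no better one exists — can be sketched directly. This is a toy single-objective search of our own devising (not an evolutionary algorithm from the session); the constraint shows the constrained versus unconstrained distinction.

```python
# Sketch of single-objective optimization: keep comparing feasible candidate
# solutions, accepting improvements, until no better solution can be found.

def minimize(objective, x, step=0.1, feasible=lambda x: True):
    """Greedy 1-D search; `feasible` encodes an optional constraint."""
    while True:
        candidates = [x + step, x - step]
        better = [c for c in candidates
                  if feasible(c) and objective(c) < objective(x)]
        if not better:
            return x  # no better feasible solution can be found
        x = min(better, key=objective)

# Unconstrained: minimize (x - 3)^2 starting from x = 0.
x_star = minimize(lambda x: (x - 3) ** 2, 0.0)

# Constrained: same objective, but solutions must satisfy x <= 2.
x_con = minimize(lambda x: (x - 3) ** 2, 0.0, feasible=lambda x: x <= 2)
```

The unconstrained search settles near the true optimum x = 3, while the constrained search stops at the boundary of the feasible region near x = 2. A multi-objective problem would instead compare candidates on several objectives at once, yielding a set of trade-off solutions rather than a single best point.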
During the session the resource person explored why neuromorphic computing is set to become a major trend and compared the various architectures involved in the design of AI applications. The session then moved on to neuromorphic computing features such as rapid response and higher adaptability.
The session introduced machine learning, deep learning and artificial intelligence from a general perspective with real-time examples, then focused on the classification of machine learning and its algorithms. Supervised and unsupervised algorithms were listed, and a real-time example was discussed.
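The supervised/unsupervised split can be illustrated with a toy contrast. The data, labels and both miniature algorithms below are our own illustrative choices, not examples from the session.

```python
# Supervised learning fits labelled examples; unsupervised learning finds
# structure in unlabelled data on its own.

def nearest_neighbour(train, query):
    """Supervised 1-NN: predict the label of the closest labelled point."""
    point, label = min(train, key=lambda pl: abs(pl[0] - query))
    return label

def two_means(points, iters=10):
    """Unsupervised 1-D 2-means: split unlabelled points into two clusters."""
    lo, hi = min(points), max(points)
    for _ in range(iters):
        a = [p for p in points if abs(p - lo) <= abs(p - hi)]
        b = [p for p in points if abs(p - lo) > abs(p - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)  # recompute centroids
    return sorted(a), sorted(b)

labelled = [(1.0, "cat"), (1.2, "cat"), (8.0, "dog"), (8.5, "dog")]
pred = nearest_neighbour(labelled, 1.1)     # supervised: uses the labels
clusters = two_means([1.0, 1.2, 8.0, 8.5])  # unsupervised: no labels given
```

The classifier needs the "cat"/"dog" labels to make its prediction; the clustering recovers the same two groups without ever seeing a label.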
An introduction to the National Education Policy 2020 was given, continuing with its features and a brief on Early Childhood Care and Education; Foundational Literacy and Numeracy; universal access to education at all levels; curriculum and pedagogy in schools; testing and assessments; teachers and teacher education; equitable and inclusive education; school complexes; and standard setting and school accreditation.
The session covered the basics of neural network mapping and continued with neuromorphic hardware, which implements biological neurons and synapses to execute spiking neural network (SNN)-based machine learning. SpiNeMap is a design methodology for mapping SNNs to crossbar-based neuromorphic hardware while minimizing spike latency and energy consumption. It operates in two steps: SpiNeCluster and SpiNePlacer. SpiNeCluster is a heuristic-based clustering technique that partitions an SNN into clusters of synapses, where intra-cluster local synapses are mapped within crossbars of the hardware and inter-cluster global synapses are mapped to the shared interconnect; it minimizes the number of spikes on global synapses. SpiNePlacer then finds the best placement of local and global synapses on the hardware using a metaheuristic approach to minimize energy consumption and spike latency.
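The clustering objective — keep spike traffic off the shared interconnect — can be sketched on a toy SNN. This is not the published SpiNeMap algorithm, only an illustration of the quantity its first step minimizes; the six-neuron graph and spike counts are invented.

```python
# Toy SNN: each synapse is (pre neuron, post neuron, spikes per unit time).
synapses = [(0, 1, 90), (1, 2, 80), (2, 3, 5), (3, 4, 70), (4, 5, 60)]

def partition(neurons, size):
    """Traffic-blind clustering: cut the neuron list into chunks of `size`."""
    return [neurons[i:i + size] for i in range(0, len(neurons), size)]

def global_spikes(clusters, syns):
    """Count spikes on inter-cluster (global) synapses -- the quantity a
    SpiNeCluster-style heuristic tries to minimize."""
    where = {n: i for i, c in enumerate(clusters) for n in c}
    return sum(s for pre, post, s in syns if where[pre] != where[post])

naive = partition(list(range(6)), 2)        # ignores traffic when cutting
cut_at_light_edge = [[0, 1, 2], [3, 4, 5]]  # cuts only the 5-spike synapse
```

Here the traffic-blind partition puts 150 spikes per unit time on the shared interconnect, while cutting at the lightest synapse leaves only 5 — the kind of saving the clustering step targets before the placement step assigns clusters to physical crossbars.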
Neuromorphic computing refers to a form of processing that mirrors the structure and functionality of the human brain [2]. Professionals have been developing effective neuromorphic systems for years; however, the field is still relatively new, and it does not have as many present-day applications as its more traditional counterparts. Even at this early stage, though, neuromorphic computing shows great promise, and its unique characteristics will make it a beneficial tool for innovators in a number of fields.
The session covered the basics of deep learning. Deep learning is a type of machine learning and artificial intelligence (AI) that imitates the way humans gain certain kinds of knowledge. It is an important element of data science, which includes statistics and predictive modeling, and it is extremely beneficial to data scientists who are tasked with collecting, analyzing and interpreting large amounts of data, making that process faster and easier. At its simplest, deep learning can be thought of as a way to automate predictive analytics. While traditional machine learning algorithms are linear, deep learning algorithms are stacked in a hierarchy of increasing complexity and abstraction.
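The value of that stacking can be shown with the classic XOR example: a single linear unit cannot compute XOR, but two stacked non-linear layers can. The hand-picked weights below are illustrative, not learned.

```python
# Sketch of why depth matters: layer 1 computes two simple features,
# layer 2 combines them into the more abstract XOR concept.

def step(z):
    """Non-linear step activation."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or  = step(x1 + x2 - 0.5)      # fires if at least one input is on
    h_and = step(x1 + x2 - 1.5)      # fires only if both inputs are on
    return step(h_or - h_and - 0.5)  # "or but not and" = XOR

outputs = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

No single linear threshold on the raw inputs can separate these four cases, yet the two-layer stack reproduces XOR exactly, which is the hierarchy-of-abstraction point in miniature.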
Stress is a normal psychological and physical reaction to the demands of life. A small amount of stress can be good, motivating you to perform well. But many challenges daily, such as sitting in traffic, meeting deadlines and paying bills, can push you beyond your ability to cope. Your brain comes hard-wired with an alarm system for your protection. When your brain perceives a threat, it signals your body to release a burst of hormones that increase your heart rate and raise your blood pressure. This "fight-or-flight" response fuels you to deal with the threat.
Once the threat is gone, your body is meant to return to a normal, relaxed state. Unfortunately, the nonstop complications of modern life and its demands and expectations mean that some people's alarm systems rarely shut off. Stress management gives you a range of tools to reset and to recalibrate your alarm system. It can help your mind and body adapt (resilience). Without it, your body might always be on high alert. Over time, chronic stress can lead to serious health problems.
Neural networks have proved to be powerful tools for real-world tasks such as pattern recognition, classification, regression and prediction. However, their high computational demands are not well suited to modern computer architectures, a constraint that has so far often prohibited their use in applications needing real-time control, such as interactive robotic systems. Scientists have, on the other hand, been developing hardware platforms optimised for neural networks over the past two decades, but these systems cannot synthesise large-scale neural networks for such real-world tasks from subnetworks and are therefore not very suitable. The session presented a generic hardware architecture that uses the Neural Engineering Framework (NEF) to implement large-scale neural networks on FPGAs, capable of processing up to millions of pattern recognitions in real time.
During this session, AI accelerators were introduced: an AI accelerator is a category of specialized hardware accelerator or automatic data-processing system designed to accelerate artificial intelligence applications, particularly artificial neural networks, machine vision and machine learning. Typical applications include algorithms for AI, the Internet of Things and other data-intensive or sensor-driven tasks. Machine learning is widely employed in many modern artificial intelligence applications, and various hardware platforms have been implemented to support them. Among these, the graphics processing unit (GPU) is the most widely used because of its fast computation speed and compatibility with various algorithms. Field-programmable gate arrays (FPGAs) show higher energy efficiency than GPUs when computing machine learning algorithms, at the cost of lower speed. Various application-specific integrated circuit (ASIC) designs have been proposed to achieve the best energy efficiency at the cost of less reconfigurability, which makes them suitable for special varieties of machine learning algorithms such as deep convolutional neural networks.