How combining organisational learning and Artificial Intelligence can benefit your organisation

Research from the MIT Sloan Management Review (2020) shows that ‘Only 10% of companies obtain significant financial benefits from artificial intelligence technologies.’ This is due to a lack of continual organisational learning while deploying AI. Organisational learning is about the ‘detection and correction of error’ as well as ‘the process of improving actions through better knowledge and understanding.’

‘Most companies developing AI capabilities have yet to gain significant financial benefits from their efforts. Only when organizations add the ability to learn with AI do significant benefits become likely.’ (MIT Sloan, 2020)

An organisation may be described as an open, dynamic and purposeful system. The nature, purpose and structure of an organisation are shaped by outside influences: who are its customers? Where are its resources to be found? What legal system applies? What is the state of the economy?

Since an organisation is not a living being, it has no real ‘brain’ and cannot itself learn. However, the individuals and groups within it make up its ‘brain’, and they can learn. They can be led to engage in specific kinds of learning for the benefit of the entire organisation.

When implementing AI in an organisation, the entire organisation needs to be on board: people must learn from the AI as well as work alongside it. Over time, this enables more efficient and effective output from both the AI and the humans, as each comes to better understand the other’s needs and wants.

Beyond using AI to automate the organisation and its industry, the goal should be to make the organisation more proficient. Merely learning to work with AI is one thing; organisations that stop there are usually the ones that view AI as little more than a cost-cutting and automation tool.

Organisations that see the real benefit of AI, and therefore achieve more successful outcomes both financially and internally, usually treat AI as part of their organisational core. Implementing AI not only saves costs in the longer term; it also adds to the precision, speed and learning of your organisation. If your organisation keeps up with this, it will see benefits far wider than the purely financial.

At Cognino, we believe that the real power of AI rests in its ability to significantly drive transformation across today’s enterprises. AI can be used to improve customer engagement, drive employee empowerment and power product innovation. To achieve meaningful success in pursuit of these transformations, organisations will need to establish a long-term strategy and work with a technology provider that can act as a strategic thought partner. This will enable holistic thinking across the organisation.

Learn more about how AI can be implemented at your organisation’s core with Cognino.


The next wave of AI-led transformation

Over the last eight months, enterprises have seen digital transformation at an unprecedented scale. This has resulted in exponential growth of organisational data across individual systems: conversations, documents, emails, online meetings, videos, chats and more. Billions of internal and external events are generated daily that could impact your decision-making, yet they cannot be mined and reasoned over using existing machine learning approaches.

In a re-imagined world, contextual knowledge mining will provide the competitive edge, and ‘collective intelligence’ is the new gold. While the first wave of AI involved many narrow applications, such as training a single model over one data source of a certain type for a single problem, contextual knowledge mining is the next wave of AI, generating a dynamic understanding of relationships and patterns in a corpus of information. It needs to be a key part of enterprise digital transformation initiatives that fundamentally change how organizations make sense of real-world information.

Organizations clearly need a means to discover and gain intelligence from all their data reliably and consistently. Contextual knowledge mining can transform the way businesses understand and leverage their content, exploiting the value in vast troves of previously untapped knowledge to create more insights and opportunities.

The next-generation AI engine is a knowledge constellation that dynamically and autonomously connects events through various contexts and scenarios. It continuously learns and expands from events to create your own ‘collective intelligence’: an ‘always on’ intelligence that enables auditable, trustworthy decision-making. It transforms your organisation into an intelligent enterprise.

Contextual knowledge mining can help solve three of the key challenges of handling unstructured information: time, scale, and insights. It drives better customer engagement, empowers employees, surfaces risks, and predicts outcomes while transforming products and services.

Here are some example use cases where I see a collective intelligence hub created by a neural engine, preferably one using a knowledge graph married with deep learning to create a relational neural network. Building and implementing such an engine is not easy. Learning from real-world data and creating dynamic relationships and insights is a recent progression in AI research; done well, it can provide the next generation of knowledge mining.
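As a toy illustration of the idea, the sketch below builds a tiny knowledge graph in plain Python and connects events through shared entities. Every entity, relation, and event name here is invented for illustration; a real relational neural engine would learn such links from data rather than have them hard-coded.

```python
from collections import defaultdict

class KnowledgeGraph:
    """A minimal triple store: subject -(relation)-> object."""
    def __init__(self):
        self.edges = defaultdict(set)   # entity -> {(relation, other entity)}

    def add(self, subject, relation, obj):
        # Store both directions so events can be reached from shared entities.
        self.edges[subject].add((relation, obj))
        self.edges[obj].add(("inverse:" + relation, subject))

    def neighbours(self, entity):
        return {other for _, other in self.edges[entity]}

    def connected_events(self, event, hops=2):
        """Entities reachable within `hops` steps: a crude stand-in for
        'dynamically connecting events through context'."""
        frontier, seen = {event}, {event}
        for _ in range(hops):
            frontier = {n for e in frontier for n in self.neighbours(e)} - seen
            seen |= frontier
        return seen - {event}

# Hypothetical events linked only by a shared entity (supplier_X).
kg = KnowledgeGraph()
kg.add("earnings_call_Q3", "mentions", "supplier_X")
kg.add("news_item_17", "mentions", "supplier_X")
kg.add("news_item_17", "mentions", "port_strike")
```

A query such as `kg.connected_events("earnings_call_Q3")` then surfaces `news_item_17` through the shared supplier, even though the two events never reference each other directly: the graph structure, not any single data source, provides the context.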


At Cognino, after years of research we have developed an Artificial Intelligence engine that addresses the volume, variety, velocity and veracity of data. The platform has the capability to continuously learn, comprehend changing world events and provide a cognitive outcome: an ‘intelligent core’ for the organization.

The platform benefits from a next-generation design that delivers an ‘always on’ capability, addressing concerns around the currency of data and intelligence. Additionally, the solution provides an in-depth view of all decisioning steps, bringing much-needed transparency through explainable AI.

Written by Priti Padhy


Reimagining building blocks of an intelligent AI

In my previous piece, “How intelligent is your AI?”, the premise was that an intelligent AI would ‘learn, understand, and reason.’ That has not been the case with existing AI. Yet these are very productive tools that are helping us create and capture tremendous value. PwC estimates the incremental value-add, or TFP (Total Factor Productivity), from AI to the global GDP could be worth US$15.7 trillion by 2030. Massive indeed. If some real intelligence were fuelled in, the numbers could multiply. Here is a narrative for possible building blocks to get there.

On a cold winter day of November 2019, Imperial War Museum (IWM) London was assessing various AI engines to choose one to help its archives become vibrant. A picture of an exploding atom bomb was ingested into the engines. One of the most renowned Deep Learning-based engines identified it as a mushroom! (Turns out, this picture sample had yet to find its way into their library of encyclopedias.) Others did not get it either. One company got it. But it had no library or encyclopedias. It did have some data set, as all AI engines need to ‘learn’ (ingest data.) So, what could have been different from this company’s approach? Turns out that the engine was able to create its own ‘context’ when it was asked to identify the picture. Simply put, when the question was asked it ‘also’ considered the fact that the picture was sourced from the war museum enabling it to ignore mushroom as an option. Learning is critical to context creation, which in turn is critical to understanding, which in turn is critical to reasoning. All three elements need to be addressed.  To do that, I will refer to the human decision-making process and paint a series of ‘what-if’ scenarios with parallels from another empirical subject that gained prominence about the same time as AI, though not as popular, called Neuroeconomics.  

Neuroeconomics is a specialization in the human decision-making process that converges concepts of Neuroscience (biological sciences of neural systems), Psychology (the study of human behavior), and Economics (the idea of rational choices and decisions). This discipline aims to provide a single general theory of human behavior of decision making. It focuses on the processes that connect sensation and action by revealing the neurobiological mechanisms by which decisions are made and integrating elements of rational aspects from Economics as well as behavioral aspects from Psychology. I will not dwell on emotions, societal framework, or other factors that form part of our decisions. The idea is to limit our thoughts to human brain functions of learning, understanding, and reasoning and draw parallels with AI.


To begin with, our brain is bombarded with an estimated 11 million bits of information every second via our sensory organs, but it can ‘consciously’ process only about 40 bits per second[1]. Therefore, even if we wanted to, we could not ingest (learn) a library (forget a library of encyclopedias). So, we ingest differently: first the basics, next the intermediate, and then the advanced. Take, for example, the learning structure that, perhaps, went into writing this article: I first learned the English alphabet with its 26 letters in pre-primary or primary school. Then I learned words, composed sentences, picked up grammar, wrote paragraphs, and so on. Similarly, I learned basic sciences and mathematics, which led me to advanced levels as well as an introduction to computer science, which led me to AI. The sciences also led me to Neuroeconomics. So, when I present my views today, my learning over all these years acts as context and constructs meaning out of everything I have learned so far. Not only that: I am using only the learning (or ingested data) that I need for this piece.

At the other end, the learning in AI is flawed. You must have heard that ‘AI is not magic; to even begin, it requires volumes of data.’ Why? So, what if, instead of creating libraries of encyclopedias, our AI learned (ingested data) in a structured manner, with basic bearings, to form a context?
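To make the Imperial War Museum anecdote above concrete, here is a minimal sketch of context-aware labelling: a classifier’s raw label scores are reweighted by a prior derived from where the input came from. All labels, scores, and priors here are invented numbers, not the output of any real engine.

```python
def contextual_label(label_scores, context_priors):
    """Rescale each label's visual score by how plausible that label is
    given the input's source, then pick the best combined candidate."""
    combined = {label: score * context_priors.get(label, 1.0)
                for label, score in label_scores.items()}
    return max(combined, key=combined.get)

# A vision model alone slightly prefers "mushroom" for a mushroom cloud.
visual_scores = {"mushroom": 0.55, "atomic_explosion": 0.45}

# But the image was sourced from a war museum's archive, which makes
# some labels far more plausible than others (hypothetical prior).
war_archive_priors = {"mushroom": 0.05, "atomic_explosion": 0.9}

print(contextual_label(visual_scores, {}))                  # context-free
print(contextual_label(visual_scores, war_archive_priors))  # with context
```

Without context, the raw scores win and the engine says “mushroom”; with the source-derived prior folded in, the same scores yield “atomic_explosion”. The point is not the arithmetic but that the context is consulted at decision time rather than baked into a bigger library.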


Human beings have a limit to how much data they can consciously process, despite 100 billion neurons communicating with each other. A computer does not. Therefore, it may seem obvious that we can feed any amount of data into an AI engine. That is indeed happening, but it is not helping the engine understand the data. It is just neat tagging and classification of data in a library, from which you can pull data off the shelf when you want it. AI scientists have created various forms of Artificial Neural Networks (ANNs) to replicate human neural networks, but the net effect of them all is to store data in a certain manner so it can be retrieved as needed. Fair. We need those.

But these neural networks do not communicate with each other the way human neural networks do, using neurotransmitters to communicate dynamically. So, what if we had the structured learning data stored in neural networks with a fluid or dynamic structure rather than a pre-fixed one?


If learning and understanding are achieved, reasoning would be easier, because computing power is far ahead in the game. Moreover, if we loosen our input-output algorithms and enable the neural networks to self-create connectivity, we are likely to get better outcomes, since we as humans may have ignored certain aspects due to biases we cannot fully steer away from. As one reader noted in his comments on the previous article, the AI could possibly connect the data set in an unknown way to find something that has never existed before.

This means we go beyond the correlation that Machine Learning-based systems can find and get to causality in the data. That would be a leap, not just another innovation, in solving huge problems. So, what if we left the AI to connect the dots rather than doing it through our algorithms?

Perceptual decisions

Finally, even before we begin assessing AI’s capability to ‘learn and understand’, we must consider that its data intake is limited by Natural Language Processing (NLP) of raw data. NLP acts as the sensory organ that converts real-world data into 0s, 1s, and their combinations, allowing AI to understand and differentiate every real-world composition. The human core brain uses a diffusion model to process sensory information. The model has three pre-conditions: the brain decides in a non-random manner, the decision is goal-oriented, and there is a choice. The model stresses that the choice should be made as soon as the difference between the evidence supporting the competing alternatives exceeds a threshold, a sort of reward (and risk or fear) system. Advanced research is underway to use these insights to help people with disabilities get sensory information to and from their brain cells and have robotic limbs perform basic motor functions. But where we could find the most value for AI is in making perceptual decisions, where the aim is to categorize ambiguous sensory information (in computer science terminology, noisy data).
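The diffusion model described above can be simulated in a few lines: noisy evidence accumulates step by step until it crosses the threshold for one of the two alternatives. The drift rate, noise level, and threshold below are illustrative values, not parameters fitted to any experiment.

```python
import random

def diffusion_decision(drift=0.1, noise=1.0, threshold=5.0,
                       max_steps=10_000, rng=random):
    """Accumulate noisy evidence until a threshold is crossed.
    Returns (choice, steps); positive drift favours choice 'A'."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # Each time step adds a goal-directed drift plus sensory noise.
        evidence += drift + rng.gauss(0.0, noise)
        if evidence >= threshold:
            return "A", step
        if evidence <= -threshold:
            return "B", step
    return "undecided", max_steps

random.seed(42)
choice, steps = diffusion_decision()
print(choice, steps)
```

Raising the threshold makes decisions slower but more reliable; raising the noise (ambiguous, “crude” sensory input) makes them slower and more error-prone, which is exactly the trade-off a perceptual decision-maker, human or artificial, has to manage.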

If our AI is to be intelligent, it needs to be capable of making perceptual decisions from abstract information as well. That is one reason why digitization (even before digitalization) of legacy systems remains a challenge for organizations. They are convinced that their ‘data is the new oil’, but when they get into the act they realize ‘data is the new “crude” oil.’ It still needs to be explored, extracted, cleaned, and refined before you can even begin to understand its value. And human bias could have crept into the automated filters used in that process as well. So, what if we focused our energies on making AI sensors (NLP) able to read the abstract (the crude) without going through the entire refining value chain?

If these four what-ifs can be answered as a whole, an AI with real intelligence can be deployed much faster and deliver higher productivity. In my mind, it can be done, and it is already being done.




How intelligent is your AI?



Though the concept of Artificial Intelligence came up in the 1950s, with scientists, mathematicians, philosophers, and thinkers discussing its possibility, it was not until the 1990s and 2000s that the idea began taking shape. We did not have the relevant computing power, processing speed, storage capacity, hardware technology, large-scale device connectivity, or variety of databases. Today we have the internet with over 4.5 billion users, 4.2 billion unique mobile phone users, and 3.7 billion active social media users[1], and we create more data every passing year in various sizes, shapes, and forms. Systems can now decipher human languages (Natural Language Processing), interact back (bots), recognize multimedia files like pictures, audio, and video (Computer Vision), collect data from various devices (Internet of Things), create data blocks with distributed-ledger capability (blockchain), store data in flexible infrastructure with huge capacity (data lakes, cloud), and much more, all testimony to advancing data science. The role of AI is to decipher value from all that data. To achieve this objective, we began with algorithm-led analytics that went from simpler descriptive Management Information Systems to advanced predictive and prescriptive output systems. Fast forward, and we have Machine Learning and Deep Learning, which can do much more when it comes to utilizing the data science skillset and providing a seemingly intelligent output. These two data-to-decision methods are the closest we have come to anything that could be declared AI. But are they intelligent?

To answer the question, let us travel back to the foundation of AI as an idea: to replicate some level of human-like intelligence in computers. The Cambridge Dictionary defines ‘intelligence’ as “the ability to learn, understand, and make judgments or have opinions that are based on reason.” (Point to be noted: the focus here is on reason alone, not on other aspects of human decision-making behaviour such as emotions, society, or creativity. That is another topic for another time, perhaps for understanding the apocalyptic question of whether AI will take over us or remain benign. We are still contesting whether AI is intelligent enough at logical reasoning alone.)

Dissecting this definition, we have four major capabilities that an AI should have: the ability to ‘learn’, to ‘understand’, to ‘make judgments or opinions’, and to make them based on ‘reason.’

Learn? Yes, Machine Learning and Deep Learning are about that. But they do not learn the way we humans do. We go from basic to intermediate to advanced learning. We neither ingest the dictionary or an encyclopedia on day one, nor collect pictures on a specific subject and classify them under one category. Our learning is structured and sequential, leading us to ‘understand’ and create a context. We begin with the basics. Then we absorb further learning on top of them, apply new learning incrementally, or both. In that sense, our learning is continuous and expands over time. Machine Learning and Deep Learning, by contrast, are largely library-creation activities. There is no scope for context creation; it is just data storage according to a pre-decided model, retrieved as and when required. That is one reason why the Machine Learning process is highly iterative: each new set of information needs to be included (coded) in the library classifications repeatedly. And it is not just unintelligent; it is also expensive, because it requires incremental time from data scientists and developers. Further, judgment gets distorted if the processes of learning and understanding have been distorted (as they say, garbage in, garbage out). The same goes for the presumption that judgment is based on reason, because the reason has also been coded into the system.

In summary, the classic human approach to most forms of intelligence begins with learning that creates a context, which allows us to ‘understand’ the backdrop before we can logically process any new information on the subject. This context is developed from basic learning and then becomes autonomous in processing new information, accepting, rejecting, and editing existing learning along the way. In contrast, the so-called AI engines are based on input-output algorithms: they identify, classify, and arrange data sets that the algorithms can crawl, but deliver a pre-determined (or, at some level, regression- or linear-algebra-backed probabilistic) ‘intelligent-looking’ output. This is not intelligence; it is an advanced form of robotics. While Deep Learning is a better approach than Machine Learning, it too lacks the basic building block of ‘understanding’, as it requires ‘end-to-end’ learning or some form of encyclopedia ingestion. Both approaches do help boost productivity and have their uses. That is good, but not yet intelligent. Experts call these narrow AI or weak AI.
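The contrast between library-creation and continuous learning can be sketched in code. Below, a hypothetical “library” learner stores every example and re-scans its whole archive on each query, while an incremental learner folds each new example into running class centroids, closer in spirit to the continuous, expanding learning described above. Both classes are invented toys, not real ML systems.

```python
def _sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class LibraryLearner:
    """Stores every example; 'learning' is a full re-scan on each query."""
    def __init__(self):
        self.examples = []                    # the ever-growing library

    def add(self, features, label):
        self.examples.append((features, label))

    def predict(self, features):
        # Nearest neighbour over the whole library: every query pays
        # for the full archive, and nothing is ever consolidated.
        return min(self.examples, key=lambda e: _sq_dist(e[0], features))[1]

class IncrementalLearner:
    """Keeps one running centroid per class and updates it in place."""
    def __init__(self):
        self.centroids = {}                   # label -> (mean vector, count)

    def add(self, features, label):
        mean, n = self.centroids.get(label, ([0.0] * len(features), 0))
        # Fold the new example into the running mean: no retraining pass.
        new_mean = [(m * n + x) / (n + 1) for m, x in zip(mean, features)]
        self.centroids[label] = (new_mean, n + 1)

    def predict(self, features):
        return min(self.centroids.items(),
                   key=lambda kv: _sq_dist(kv[1][0], features))[0]
```

The incremental learner’s storage stays constant per class no matter how many examples arrive, while the library learner’s cost grows with every ingestion, a crude analogue of the “incremental time of data scientists” problem above.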

We will attempt to reimagine an intelligent AI or strong AI in our next blog. Keep watching this space.





[1] Statista, Numbers as of April 2020


What is Explainable Artificial Intelligence (XAI)?

As time and learning progressed, the Fourth Industrial Revolution changed the way we live, work, and relate to one another. Artificial Intelligence (AI), Blockchain, Deep Learning, Machine Learning, and IoT emerged, spreading their wings and letting enterprises and companies fly towards infinite horizons.

AI systems built on Machine Learning and Deep Learning take inputs and produce outputs with no decipherable explanation or context. The system makes a decision, and we do not know how or why it arrived at that decision. This is the black-box model of AI, and it is deeply mysterious.

The meaning of Explainable Artificial Intelligence (XAI)

Explainable AI, as the name suggests, can be explained and understood by human beings. The outcome of Explainable AI is more reliable and trustworthy.

Simply speaking, Explainable AI is transparent in its operations, so that human users can understand and trust its decisions. It makes decisions cheaper, better, and faster. The goal of Explainable AI is to help people be more productive and arrive at decisions that are reasonable and understandable. In other words, people should have an idea of why and how those decisions are being made.
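A minimal sketch of what such transparency can look like: a linear scorer whose output decomposes exactly into per-feature contributions, so the explanation is the computation itself. The feature names and weights below are invented for illustration, not drawn from any real scoring model.

```python
def explain_score(weights, bias, features):
    """Return (score, contributions), where the contributions plus the
    bias sum exactly to the score: a 'glass box' by construction."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-style features and hand-picked weights.
weights = {"income": 0.4, "missed_payments": -1.5, "years_at_address": 0.2}
applicant = {"income": 3.0, "missed_payments": 2.0, "years_at_address": 5.0}

score, why = explain_score(weights, bias=0.5, features=applicant)
for name, contrib in sorted(why.items(), key=lambda kv: kv[1]):
    print(f"{name:>18}: {contrib:+.2f}")
print(f"{'total score':>18}: {score:+.2f}")
```

Each line of the printout answers the “why” question directly: the applicant’s missed payments pulled the score down by 3.00 while income and address stability pushed it up. Deep models are not this transparent by default, which is exactly the gap XAI techniques try to close.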

Making the black box of Artificial Intelligence (AI) transparent

Most owners, operators, users, companies, enterprises, and organizations need answers to questions like: Why did the AI system make a specific prediction? Why did the AI system not do something else? Why should we follow the decision of the AI? Is the output of the AI error-free?

So far, there is only early, seed-stage research and work in the area of making Deep Learning approaches to Machine Learning explainable. However, it is hoped that sufficient progress can be made to arrive at the required transparency and explainability.

According to Priti Padhy, CEO and Founder of Cognino, “I will call for a ‘third wave’ of AI, which can adapt to novel situations and, importantly, explain its decisions. It's no longer acceptable to just say ‘we don't know how it works.’ If the future is going to run on neural networks, we need to understand them now!” He emphasizes making existing AI systems transparent and explainable.

AI is an inspiring gift to mankind, bringing new outcomes. But with the surprises comes the concern of trust. Explainable AI deals with this trust issue by making the output explainable, with insights. The dark black box of conventional AI systems is thus unboxed, and decisions, outcomes, and predictions are made with reasoning.

Explainable AI models convert the black box of AI systems into a glass box. A lack of understanding of the predictions made can create large risks for companies. No matter how brilliant you are, if you cannot explain to a third party how and why you are getting those wonderful predictions, you might lose your reputation and be left with nothing at all.

Explainable AI in Medicine

In fields such as Healthcare and Medicine, mistakes can have catastrophic effects, so the black-box nature of AI makes it difficult for medical practitioners to trust the outcomes. If an AI algorithm is not trained properly, we cannot be sure whether patients will be diagnosed correctly. Thus, the XAI algorithms being developed for Healthcare and Medicine can provide an explanation and justification for their results.

Using advanced Natural Language Processing, Explainable AI can extract and relate important, critical information from patients’ previous records, giving doctors a faster route into the patient’s Electronic Health Record.

One of the most important reasons for bringing Explainable AI to medicine is that medical practitioners work with complex, heterogeneous, and noisy data. Explainable AI is therefore a boon to the medical fraternity: it not only organizes the unstructured data but also comes out with predictions that can be explained and understood.

Explainable AI in Finance

Finance is another field burdened with a lot of paperwork in the form of unstructured and complex documents, making the work full of risk and uncertainty. Explainable AI comes into the picture in Financial Services and Compliance by providing a directional approach for consultants and clientele.

Explainable AI can spread its wings in Banking and Finance across the following areas:

  • Know Your Customer (KYC)
  • AML/Fraud Management
  • Rogue Employee Detection
  • Internal Audit Compliance
  • Tax and Accounting Compliance

AI detects and correlates anomalies and forecasts compliance, threat, and performance in near real-time.

Explainable AI in the Public Sector

Artificial Intelligence has transformed several areas of the private sector; now it has started blanketing the public sector as well. Governments of all countries are under constant pressure to create an efficient and effective system of citizen services.

There are various categories in this sector, which we can summarise as:

  • Smart cities and spaces
  • Citizen services
  • Employee engagement

Explainable AI in Education

Educationalists need modern AI methods and techniques to connect students all over the world and help them realize their potential. Challenges in the education field can be addressed through solutions provided by Explainable AI.

Students, teachers, and administrators can utilize Explainable AI to enhance their learning curve with improved outcomes and predictions.

AI tools like real-time language translation can overcome language barriers and help students learn and grow. Narration of surroundings can enable blind students to gain insight into any field.

Explainable AI in Retail Industry

The retail industry is one of the fastest-growing sectors, with ever-increasing expectations and demands for personalisation. This sector is deeply intent-driven and requires an understanding of customer needs, wants, influences, and relationships.

Retailers should think more about structuring their data through the glass box of Explainable AI. It provides a window into the ‘why and how’ of an outcome, along with performance metrics that can trace a particular output. Think of AI as another worker. The risks involved in the business are minimized, thereby increasing revenue and customer retention.


Explainable Artificial Intelligence (XAI) is an intelligent core platform that makes predictions, outcomes, and decisions understandable and explainable to humans. The actions of the AI are traceable to a certain level, depending on the complexity and sensitivity of the consequences that may arise from the AI system. Thus, an explainable and transparent AI system should “know everything when anything goes wrong”.

As AI has become an important and substantial part of our lives, Explainable AI has become even more important still.