The Ongoing Revolution in Technology

Throughout its history, AI has moved through distinct periods of progress and public acceptance. Over the last five decades it has seen stretches of research stagnation as well as waves of waning interest. Breakthrough highlights include the surge of interest in expert systems during the 1980s, IBM's chess-winning supercomputer Deep Blue in 1997, and the intelligent agents and knowledge-based systems of the last decade.

The scope of AI keeps stretching, absorbing ever more subfields. It produces algorithms that end up as parts of larger systems no longer associated directly with AI. The applications gaining the most mainstream media traction today include data mining, robotics, CRM, bank risk assessment and smart diagnostics, yet, as we mentioned earlier, this is far from AI's full potential. Nevertheless, society struggles to label AI as an established paradigm. As soon as an AI feature is programmed, explained and delivered to the customer, it becomes commonplace and the magical aura of the new technology fades: a well-understood technique is no longer called AI, and research moves on in search of the next inconceivable thing. At the same time, AI is becoming extremely widespread, and increasingly mandatory. Just as a century ago we wired the world with electricity, today we see objects and services being granted "cognicity". Leading Internet companies are rapidly expanding their AI teams, filling them with young engineers and acquiring AI companies and research centers.

The global economy is being profoundly reshaped by the benefits of AI-as-a-service (AIaaS): remarkably low CAPEX, attractive ROI and efficient use of investment. The biggest bets are being placed on deep learning and on machine learning algorithms capable of digesting massive volumes of data to provide remote decision making, system supervision and efficiency improvements.
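As a concrete, if deliberately toy, illustration of the "digest data, then decide" loop this paragraph alludes to, the sketch below trains a logistic-regression classifier with plain gradient descent. The data, the "sensor reading" framing and all hyperparameters are hypothetical, chosen only to show the shape of such a loop, not any particular vendor's implementation.

```python
import numpy as np

# Hypothetical synthetic data: 1,000 "sensor readings" with three features each,
# labeled 1 when the underlying system needs intervention and 0 otherwise.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on the average logistic loss.
w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    p = sigmoid(X @ w)              # predicted probability of "intervene"
    grad = X.T @ (p - y) / len(y)   # gradient of the mean log-loss
    w -= learning_rate * grad

# "Remote decision making": score a fresh reading and act on a threshold.
new_reading = np.array([0.8, -1.1, 0.3])
print("intervene" if sigmoid(new_reading @ w) > 0.5 else "leave running")
```

Scaled up by many orders of magnitude in data volume and model size, this is the pattern behind the supervision and efficiency use cases mentioned above.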

 

The Shift in the Core of AI

Deep learning concepts stand slightly apart from purely algorithmic AI implementations, being rooted in theories of brain development and cognition. Processes within the brain have long inspired computational models; neural networks, likewise, trace their conceptual origins to the human central nervous system. Today, the complexity of the real world that software systems must represent digitally grows exponentially with the number of data sources. A reinvention of databases able to accommodate this shift is happening now, and it is about time to turn to neuroscience and ask ourselves:

“How is a human being able to comprehend and analyze all kinds of information from heterogeneous sources?”

The answer to this question would bring us one step closer to creating an AI-powered software environment with the ambition of supporting limitless application scenarios.
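To make the brain-inspired framing concrete, here is a minimal sketch of the standard neural-network answer to that question: map each heterogeneous source into a shared vector space, then let one small network of "neurons" reason over the combined representation. Every name, dimension and value below is hypothetical and chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three hypothetical heterogeneous sources describing one observation:
numeric = np.array([0.7, 1.2])              # e.g. two sensor values
category = 2                                # e.g. a device type, one of 4
text_tokens = [5, 17, 3]                    # e.g. token ids from a log line

# Each source gets its own encoder into a shared 8-dimensional space.
W_num = rng.normal(scale=0.1, size=(2, 8))  # linear map for numeric data
E_cat = rng.normal(scale=0.1, size=(4, 8))  # embedding table for categories
E_tok = rng.normal(scale=0.1, size=(32, 8)) # embedding table for tokens

h_num = numeric @ W_num
h_cat = E_cat[category]
h_tok = E_tok[text_tokens].mean(axis=0)     # average the token embeddings

# Fuse the sources and pass them through one hidden layer of "neurons".
fused = np.concatenate([h_num, h_cat, h_tok])   # 24-dim joint representation
W1 = rng.normal(scale=0.1, size=(24, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))
hidden = np.maximum(0.0, fused @ W1)            # ReLU activation
score = hidden @ W2                             # a single decision signal

print(f"decision score: {score[0]:+.3f}")
```

The design point the sketch is meant to surface: once every source lands in the same vector space, the downstream layers no longer care where the data came from, which is one plausible reading of how a single system can cope with heterogeneous inputs.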

 

A software vendor's primary goal is to sustain limitless applicability of its systems without additional cost. Adding more applications to a platform is a way to attract new customers, promote new features and expand into complementary markets. However, ISVs are cautious about implementing new applications, because integration, both between applications and between an application and the platform, is a notorious bottleneck. This in turn discourages business development.

 

 
