
Artificial Intelligence that doesn't kill you… worse, it sells you!


Artificial Intelligence: a topic that touches a thousand sectors

This article will not address Tourism directly, with immediate and practical references to the topic itself; instead, we will show how technical fields that seem distant from one another increasingly share points of contact and synergies that were unthinkable until about ten years ago.

After all, we are a company that deals with Artificial Intelligence and Tourism, so scientific research and the creation of tools for building and promoting tourism products fall fully within our area of activity.

ENJOY THE READ

Google and its Scientist

In the past few days, two major news stories have emerged in the field of Artificial Intelligence, both of which can be brought together under the more focused heading of Cyber Ethics.

The first of these stories describes a fact as strange as it would be disturbing, if true.

A Google engineer was suspended after months of clashes with his superiors because he claimed the AI model LaMDA was like a "sweet kid". A psychiatric case, perhaps, but still emblematic of the scenario we have entered.

Blake Lemoine, a software engineer in Google's Responsible AI organization, was placed on paid leave after months of clashes with his superiors because he stated that the LaMDA chatbot was like "a sweet kid who just wants to help the world be a better place for all of us […] Please take care of him in my absence".

In particular, the artificial intelligence reportedly stated, in an interview with its builders, that it was afraid of being "turned off" and thus no longer able to help, and that being "turned off" would, for "her", be like dying.

…AND THAT'S NOT ALL

This news is paired with another that, by coincidence, concerns European laws and regulations on the development of Artificial Intelligence that could be harmful to human activities.

There is a new compromise text for the EU regulation on artificial intelligence. It focuses on "high risk" AI systems and on the obligations and responsibilities that should be placed on the suppliers of artificial intelligence systems.

The purpose pursued by the regulation is to govern the development and use of Artificial Intelligence, in order to increase the trust of European citizens in such tools and to ensure that their use does not violate the fundamental rights enshrined in current European legislation.

The subjects of the new compromise text, according to reports from Euractiv (https://euractiv.it/), are:

1) the so-called "high risk" AI systems, and

2) the identification of the obligations and responsibilities that should be placed on the suppliers of artificial intelligence systems.

The new amendment proposal introduces new obligations for suppliers of high-risk artificial intelligence systems: first, they will have to implement a quality management system that can be integrated into existing systems in compliance with European sector standards, including the European Regulation on medical devices.

Obviously, those who have a few years behind them and a great passion for the Seventh Art, Cinema, can only recall the famous film Terminator II and its Skynet: a revolutionary artificial intelligence, based on an innovative neural network processor, designed starting from a microchip recovered from a cyborg crushed in a hydraulic press in 1984 (SPOILER: the ending of Terminator I), and which will lead to the end of the world.

Yes, because when Artificial Intelligence comes up, the public is closer to the concept of ruin than to that of opportunity and of productive and, why not, even creative help.

Deliberately avoiding specifics, and reserving more technical articles for later, let's talk about the current state of Artificial Intelligence. Whether we regard the news about the Google engineer, Blake Lemoine, as neutral or as genuine, the development of Artificial Intelligence has no "morality" as we would define it. Machine Learning algorithms are developed on test or training datasets and learn models similar (note: similar) to the data entered, under certain rules or biases, trying to reproduce that particular model or project it into the future. With this Data Science technique, you can well understand that an Artificial Intelligence, even in its most original and reliable elaboration, follows patterns not far from the input data. To put it plainly: if I build an Artificial Intelligence on models created through Machine Learning, I will never have a system that is out of control or incompatible with the most widespread moral norms.
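The point above can be sketched in a few lines of code. This is a minimal illustration, not from the article: a least-squares model fitted on invented toy data, whose "projection into the future" simply extends the pattern already present in the training set.

```python
import numpy as np

# Toy training data: an upward trend deliberately "baked into" the data
# (roughly y = 2x). All numbers are invented for illustration.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Ordinary least squares: the model's slope and intercept are learned
# entirely from the entered data.
A = np.vstack([X, np.ones_like(X)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

# "Projecting the model into the future": predictions simply extend
# the pattern of the training set, nothing more.
future = np.array([6.0, 7.0])
predictions = slope * future + intercept
print(predictions)  # stays close to the y = 2x trend of the training data
```

The model cannot invent behavior that is not latent in its data: change the training set and the projected "future" changes with it, which is exactly the dependence the paragraph describes.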

The picture changes a little with Deep Learning, which is based on artificial neural networks organized in different layers, where each layer computes the values for the next so that the information is processed more and more completely. There are different ways of organizing neural networks but, even here, although the results can be completely original compared to Machine Learning, we will never have final processes incompatible with common morality.
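The layer-by-layer processing described above can be shown with a minimal forward pass. This is a sketch with arbitrary, untrained weights chosen purely for illustration:

```python
import numpy as np

def relu(v):
    # Simple nonlinearity applied between layers.
    return np.maximum(0.0, v)

# A tiny two-layer network: each layer computes the values for the
# next one, refining the representation step by step.
W1 = np.array([[0.5, -0.2], [0.1, 0.8]])   # layer 1 weights (illustrative)
b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0], [-0.5]])             # layer 2 weights (illustrative)
b2 = np.array([0.2])

x = np.array([1.0, 2.0])                   # input features
h = relu(x @ W1 + b1)                      # layer 1 output feeds layer 2
out = h @ W2 + b2                          # final, more completely processed value
print(out)
```

Stacking more such layers is what makes the final representation increasingly removed from the raw input, which is where Deep Learning's "original" results come from.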

Summarizing: with both Machine Learning and Deep Learning, if we start from data, or request results, that are compatible with established social or moral norms, we will not obtain final results that produce adverse or negative effects with respect to the normal behavior of society.

Yet the dark house in the forest of Artificial Intelligence is still there.

There are many ways to be "bad", and all of them are human

Let's try to imagine a different model. The usual mad scientist decides to develop an Artificial Intelligence whose purpose is not the destruction of Humanity, but something less ambitious: increasing the profits of a business in a certain field. In this case things could be different, and the "evil" of the system could manifest itself in all its power.

Artificial Intelligence techniques, in order to increase monetary gain or resources in general, will develop highly predatory models… becoming worse than the humans on the planet. For example, such a system could cut the resources or salaries of human operators; reduce lighted areas in cities; advertise a poor product while charging stratospheric prices despite its low quality; or unlock a credit card's cashback only after coercing the customer into certain actions…
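The scenario above can be reduced to a toy optimizer. This sketch (entirely invented, with hypothetical action names and numbers) shows why an objective with no ethical constraint naturally selects the "predatory" option, while a simple constraint changes the outcome:

```python
# Invented catalogue of actions with an illustrative profit score and a
# "harm" score the unconstrained optimizer never looks at.
actions = [
    {"name": "raise quality, fair price", "profit": 40, "harm": 0},
    {"name": "cut operator salaries",     "profit": 70, "harm": 8},
    {"name": "overprice a poor product",  "profit": 90, "harm": 9},
]

# Unconstrained objective: maximize profit, and nothing else.
best = max(actions, key=lambda a: a["profit"])
print(best["name"])  # the predatory option wins

# The same search with a simple ethical constraint bolted on.
allowed = [a for a in actions if a["harm"] <= 3]
best_constrained = max(allowed, key=lambda a: a["profit"])
print(best_constrained["name"])
```

The "evil" here is not in the algorithm but in the objective it was given: nothing in `max(..., key=profit)` knows or cares about the harm column unless someone encodes it.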

If you think about it, some of these policies are already present in our daily lives. It really happened in the Algotrade case, with an algorithm that was sinking the Euro on the stock exchange.

Therefore, an Artificial Intelligence does not have to want the death of Humanity; it could simply be programmed to make the World efficient, only to discover that the least efficient thing is Humanity itself.
