The result obtained from an algorithm based on neural networks cannot be explained. Moreover, it always carries a statistical error, which is often at least quantifiable. This lack of proof is the fundamental difference between neural networks and other A.I. tools such as inferential systems based on the open world assumption (i.e., rule systems that tolerate missing information). Such systems, unlike neural networks, can always justify their choices. The Semantic Web is the best-known example.

The prevailing trend collapses the whole of A.I. into machine learning alone (in particular, neural networks), but there are many other approaches. Teaching a machine by example is undoubtedly the technique that requires the least cognitive effort from human beings, and perhaps that is why it generates so many expectations. For things to work, however, we always need a logical-deductive substrate, which perhaps, in a more or less distant future, could also be derived by a machine but which, for now, MUST always be modeled by hand and must be an integral part of every automatic system that makes decisions. In other words, models generated through machine learning must always be embedded in a formal logical context that evaluates rules defined by humans and based on socially shared conceptualizations.

Building this logical model requires a great deal of thinking, discussion, and hard work to formalize it; perhaps that is why we tend to pretend it is not needed. In exchange for that significant effort, you always know what you are talking about, what you are doing, and why you are doing it.
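As a minimal sketch of this arrangement (all names, thresholds, and the loan-approval scenario are illustrative assumptions, not part of the original text), the statistical output of a learned model can be treated as just one input to an explicit, human-defined rule layer, so that every decision remains traceable to a rule:

```python
# Sketch: wrapping a learned model's statistical score in human-defined rules,
# so the final decision can always be justified.

def learned_model_score(application: dict) -> float:
    """Stand-in for a trained classifier (e.g. a neural network) whose
    individual outputs cannot be explained. Placeholder heuristic so the
    sketch runs without a real model."""
    return 0.5 + 0.4 * (application["income"] > 30000) - 0.3 * (application["defaults"] > 0)

def decide(application: dict) -> tuple[bool, str]:
    """Human-defined logical layer: every outcome is traceable to a rule."""
    # Rule 1: a hard constraint defined by people, independent of the model.
    if application["age"] < 18:
        return False, "rejected: applicant is a minor (rule 1)"
    # Rule 2: the statistical score is used only within a threshold that
    # humans have chosen and can justify.
    score = learned_model_score(application)
    if score >= 0.7:
        return True, f"approved: model score {score:.2f} >= 0.70 (rule 2)"
    return False, f"rejected: model score {score:.2f} < 0.70 (rule 2)"

if __name__ == "__main__":
    verdict, reason = decide({"age": 35, "income": 45000, "defaults": 0})
    print(verdict, "-", reason)  # the reason string makes the decision explainable
```

The point of the sketch is the separation of responsibilities: the learned component supplies a score, while the rules that turn it into a decision are written, debated, and owned by people.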