Nowadays, one of the most important applications of autonomous AI systems is the conclusion of contracts. Nonetheless, concerns are raised about the validity of contracts concluded by autonomous machines and, subsequently, about contractual responsibility in case of non-performance. Various theories have already been expounded in legal doctrine with a view to tackling these questions. Some suggest that autonomous AI systems are mere communication tools or agents that render their users liable, whilst other legal scholars suggest that autonomous AI systems themselves, not their users, should be held liable. After presenting the arguments of these theories, the chapter concludes that the legal community should accept the validity of contracts concluded by intelligent agents, considering their users legally bound to their performance. Users' liability could be based on the theory of de facto contracts (faktische Verträge) or, alternatively, on the doctrine of reliance liability (Vertrauenshaftung). In both cases, users' right to invalidate the contract in case of mistake must be guaranteed, and the mistake shall be attributed to the intelligent agent.