Which kind of artificial intelligence do we want to live with? Should machines explain themselves to us? Machine learning techniques are developing at a rapid pace and find applications not only in mundane everyday uses but also in high-stakes situations, including science, medicine, banking, law, and business. Yet it is impossible to reconstruct how these systems reach their results or to judge whether they do so in the intended way: the mechanism is entirely opaque. This has prompted much justified skepticism and criticism of these computer programs. By closely investigating the foundations of opacity and explanation from a philosophy-of-science and epistemological perspective, Buchholz comes to more optimistic conclusions. The book derives practical consequences from a rigorous conceptual analysis of opacity, paving the way to an effective regulation of machine learning, and will advance the debate about the nature of explanation in the philosophy of science.