The evidentiary authority of deepfakes, technically and jurisprudentially, in artificial intelligence systems
Abstract
The research aims to establish the evidentiary authority of deepfakes, both technically and jurisprudentially, in artificial intelligence systems, since many people still fear the harm of this development in artificial intelligence, which can mimic voices, images, and the like. The research covers the concept of artificial intelligence, the technical methods for detecting deepfakes, and the authority of deepfakes in Islamic jurisprudence. It concludes that presumptive (circumstantial) evidence is among the methods of proof recognized in the Islamic judiciary, and that the means of proof are not limited to a fixed number. Accordingly, videos or audio clips that have been examined with deepfake-detection tools and whose falsification was not established constitute evidence that varies in strength and weakness, which the judge and his specialist experts assess. On the basis of such evidence, rights are either established or dismissed.
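The abstract refers to detecting deepfakes "technically" with software tools. As a purely illustrative aid, and not the method described in this paper, the sketch below shows the frame-level approach such tools commonly take: sample video frames, pass each through a binary real/fake classifier, and aggregate the per-frame scores into a single value for an expert to weigh. The ResNet-18 backbone, the function name score_video, and the sampling interval are all hypothetical assumptions here; a real forensic detector would use a model trained on manipulated-media datasets.

```python
# Minimal, illustrative frame-level deepfake screening sketch.
# NOTE: the ResNet-18 backbone is an untrained stand-in, not the
# detector discussed in the paper; a real forensic tool would use a
# model trained on manipulated-media datasets (e.g., face-swap corpora).
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

def score_video(path: str, sample_every: int = 30) -> float:
    """Return the mean per-frame 'fake' probability for a video file."""
    model = models.resnet18(weights=None)          # hypothetical backbone
    model.fc = nn.Linear(model.fc.in_features, 2)  # [real, fake] logits
    model.eval()

    prep = transforms.Compose([
        transforms.ToTensor(),          # HWC uint8 -> CHW float tensor
        transforms.Resize((224, 224)),  # match the backbone's input size
    ])

    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:  # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            x = prep(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(x), dim=1)
            scores.append(probs[0, 1].item())  # probability of 'fake'
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Note that the output of such a tool is a probability, not a verdict, which matches the abstract's framing: the detection result is presumptive evidence whose strength the judge and specialist experts must weigh alongside other proof.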
Keywords: Artificial Intelligence, Deepfakes, Proof, Presumption
This work is licensed under a Creative Commons Attribution 4.0 International License.
JSS publishes Open Access articles under the Creative Commons Attribution (CC BY) license. If authors submit their article for consideration by JSS, they agree to have the CC BY license applied to their work, which means it may be reused in any form provided that the author(s) and the journal are properly cited. Under this license, authors also retain the right to reuse the content of their article, provided that they cite JSS.