How to build trust in AI: safeguarding privacy with appropriate measures

  • WordTech

    2025-09-10 15:07:53


  • As AI technology advances, new risks naturally emerge. Yet terms such as “algorithm,” “automated decision-making,” and “machine learning” (ML) are already familiar from everyday applications like email spam filters and automated bank loan decisions. This is why many of the challenges raised by AI are not entirely new; rather, they are amplified as AI spreads into new areas of life. The good news is that because AI did not develop overnight, some of its risks have been recognized for quite some time, and methods for reducing privacy-related risks already exist.

    Below are some thoughts, along with available online resources, on protecting personal data in the context of AI.

    Large language models (LLMs) and image generation technologies alike have been trained on vast amounts of content available online, stirring debate over copyright law. When the data involved is personal, privacy can also be violated. In one of the best-known cases, a data protection authority fined a facial recognition company that had scraped photos of individuals from the web, including from social media platforms, and sold access to the resulting database, allowing a person to be identified from a photo. Although the scraped content had been made public by the data subjects themselves, people did not expect their photos to be used in this way. The authority determined that there was no legal basis for the processing and required the deletion of photos of its residents. When the company offered no evidence of compliance, a substantial fine was issued. With data scraping becoming a growing concern, one authority has recently issued guidelines on scraping personal data online.

    It is possible to obtain personal data for the specific purpose of developing an AI model by requesting the data subject's consent. Where consent has not been requested, secondary processing may still be permitted for research, development, and innovation when the applicable rules allow it. Absent both consent and such rules, secondary processing is allowed only if the secondary purpose is compatible with the original purpose of processing and certain safeguards are in place. One office has issued guidance on the most common privacy issues surrounding AI and data protection, including how to identify a lawful basis under the applicable rules.

    But wouldn't it be more convenient to use synthetic data to train the model? Although synthetic data is artificial, it is generated from a real dataset and preserves many of its statistical properties, so it can be used to train and test an AI model without exposing real personal data. One analyst firm estimated that, in the near future, 60% of all data used to train and test AI models would be synthetic. However, recent research shows that when an AI model is trained on large amounts of synthetic data, the result can be model collapse. In light of these findings, it will be interesting to see whether synthetic data becomes as widely used as once predicted.
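    The core idea, generating artificial records that preserve aggregate statistics without reproducing any real individual's record, can be illustrated with a minimal sketch. The data and field names below are hypothetical, and real synthetic-data generators (GANs, copulas, and similar) also model correlations between columns, which this toy version deliberately ignores:

```python
import random
import statistics

def fit_marginals(rows):
    """Estimate per-column mean and standard deviation from real records."""
    cols = list(zip(*rows))
    return [(statistics.mean(c), statistics.stdev(c)) for c in cols]

def sample_synthetic(marginals, n, seed=0):
    """Draw synthetic rows from independent Gaussians fitted to real data.

    No real record is reproduced, but the per-column statistics are
    approximately preserved (correlations are ignored in this toy version).
    """
    rng = random.Random(seed)
    return [[rng.gauss(mu, sd) for mu, sd in marginals] for _ in range(n)]

# Hypothetical "real" data: (age, annual spend in kEUR)
real = [[34, 52.0], [41, 61.5], [29, 48.2], [55, 80.1], [38, 57.3]]
synthetic = sample_synthetic(fit_marginals(real), n=1000)
```

A downstream model could then be trained and tested on `synthetic` rows, with the real records never leaving the controlled environment.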

    Beyond the legal basis for collecting and processing personal data, other general data protection principles, such as accuracy, data minimization, purpose limitation, integrity, and confidentiality, apply to AI technology as well. The organization has also published a separate report on privacy controls across the entire AI lifecycle, covering these topics in detail.

    Malicious attacks can also be mounted to alter AI decisions, undermining their fairness. An outcome integrity attack intercepts the genuine output and replaces it with a false one, much like a traditional “man-in-the-middle” attack. In a data poisoning attack, the raw data, training data, or test data of an AI model is tampered with so that the model behaves in a way that benefits the attacker. For instance, data poisoning can be carried out through model skewing, where the attacker pollutes the training data so that the model ends up mislabeling bad data as good. AI models typically include a feedback loop, which can also be abused to corrupt the data underlying the model.
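    Model skewing can be made concrete with a deliberately tiny sketch. All data and labels below are invented for illustration, and the classifier is a toy nearest-centroid model, not any particular production system: by injecting mislabeled samples, the attacker drags the "good" class centroid toward the attacker's region of feature space.

```python
def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_centroid_predict(train, x):
    """Classify x by the closest class centroid (toy model)."""
    by_label = {}
    for features, label in train:
        by_label.setdefault(label, []).append(features)
    cents = {lbl: centroid(pts) for lbl, pts in by_label.items()}
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda lbl: dist(cents[lbl], x))

# Clean training data: "good" samples near 0, "bad" samples near 10
clean = [([0.0], "good"), ([1.0], "good"), ([9.0], "bad"), ([10.0], "bad")]
print(nearest_centroid_predict(clean, [6.0]))     # -> "bad"

# Model skewing: the attacker injects malicious samples mislabeled "good",
# pulling the "good" centroid toward the attacker's region.
poisoned = clean + [([8.0], "good"), ([9.5], "good"), ([8.5], "good")]
print(nearest_centroid_predict(poisoned, [6.0]))  # -> "good"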

    To reduce privacy risk and build customer trust, companies should ensure there is a valid legal basis for processing personal data when gathering data to train and test AI models. Beyond that, they can use synthetic data wherever possible. Where personal data is required, its use should be minimized and the ways it is used should be made transparent.
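    Data minimization can be sketched as a preprocessing step that drops fields the model does not need and pseudonymizes identifiers before records reach a training pipeline. The field names and salting scheme here are illustrative assumptions; a production system would use a keyed HMAC with proper key management rather than a hard-coded salt:

```python
import hashlib

TRAINING_FIELDS = {"age", "purchase_amount"}  # only what the model needs

def minimize(record, salt="assumed-secret-salt"):
    """Keep only the fields needed for training; pseudonymize the ID.

    The salted hash lets related records be linked without storing the
    raw customer ID. NOTE: hard-coding a salt is only for illustration.
    """
    out = {k: v for k, v in record.items() if k in TRAINING_FIELDS}
    out["pseudo_id"] = hashlib.sha256(
        (salt + record["customer_id"]).encode()
    ).hexdigest()[:12]
    return out

raw = {"customer_id": "C-1001", "name": "Alice", "email": "a@example.com",
       "age": 34, "purchase_amount": 52.0}
print(minimize(raw))  # name and email never reach the training set
```

Direct identifiers (name, email) are discarded at the boundary, so the training pipeline only ever sees the minimized records.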

     


    To support human oversight of decision-making, the regulation requires human oversight for high-risk AI systems. AI systems are classified as “high-risk” when they are used in areas such as immigration, law enforcement, employment, education, and the medical sector.

    Artificial intelligence brings privacy risks that are, to some extent, specific to the technology. However, means already exist to reduce privacy risks related to, for example, the acquisition and anonymization of personal data, as well as fairness and security. As AI becomes increasingly regulated, authorities have both the mandate and the means to support ethical AI and to take punitive action against companies that fail to comply.

     
