FBI Reports Increase in Online Shopping Scams

Original release date: August 5, 2020

The Federal Bureau of Investigation (FBI) Internet Crime Complaint Center (IC3) has released an alert on a recent increase in online shopping scams. The scams direct victims to fraudulent websites via ads on social media platforms and popular online search engines’ shopping pages.

The Cybersecurity and Infrastructure Security Agency (CISA) encourages users and consumers to review the IC3 Alert for indicators of fraud and tips to avoid being victimized, as well as CISA’s tip on Shopping Safely Online.


More on Schrems II: No grace period for cross-border data flows – So moving on to next steps

When the Court of Justice of the European Union (CJEU) invalidated the EU-US Privacy Shield as a vehicle for transferring personal data from the EU to the US on July 16, 2020, the obvious question was: “What is the transition period?” The answer now coming from EU Data Protection Authorities is: there is none. Here is what companies that used to rely on the EU-US Privacy Shield should do now to bring their cross-border personal data transfers in line with European law:

  • Reassess all transfers currently occurring under the EU-US Privacy Shield to determine the appropriate legal basis for further transfers by performing “data export impact assessments” – that is, in accordance with the decision of the CJEU, assessing the specific risks of a transfer to a specific country of destination and/or through a specific data importer. The test, stated in Article 44 of the GDPR, is that “the level of protection of natural persons guaranteed by the Regulation is not undermined.”
  • Negotiate Standard Contractual Clauses (SCCs) to govern the transfer of personal data between organizations or develop Binding Corporate Rules (BCRs) for the transfer of data among affiliates of one organization, or use individual consent where it is applicable. For example, in e-commerce, while it is not ideal, some companies may want to consider the practicality of subjecting a transaction to express consent to cross-border data transfer.
  • Obtain warranties from the organizations receiving EU data (the data importers) under SCCs, or verify in relation to their own BCRs, that the importers are not precluded by local law from complying with the SCCs or BCRs – for example, through State interference with personal data that is permitted by the law of the country of destination.
  • Adopt
    • internal guidelines for their contract staff to limit cross-border data transfers to countries where the SCCs or BCRs are not undermined by local law on State access to personal data; and
    • technological safeguards, as well as guidelines for their implementation, to allow only legitimate State access to personal data for public safety reasons.

The European Data Protection Board (EDPB), the body created by the GDPR to “ensure the consistent application of the Regulation”, is currently examining what supplementary measures – whether legal, technical or organizational – could be applied to transfer data to third countries where SCCs or BCRs would not, on their own, provide a sufficient level of guarantees in view of the law of the country of destination.

While this guidance is being developed, organizations are still expected to address, immediately, the legal basis for transfers of personal data formerly made under the EU-US Privacy Shield.

Dentons is preparing material to assist its clients in this regard. We encourage you to seek advice from your privacy counsel to ensure compliance in cross-border personal data flows.


ICO Guidance on Artificial Intelligence

The ICO has now finalised the key component of its “AI Auditing Framework” following consultation. The Guidance covers what the ICO considers “best practice” in the development and deployment of AI technologies and is available here.

It is not a statutory code and there is no penalty for failing to follow the Guidance. However, there are two good reasons to comply with the Guidance in any event:

  • Firstly, the ICO makes clear that it will be relying on the Guidance to provide a methodology for its internal investigation and audit teams.
  • Secondly, in most cases where an organisation utilises AI, it will be mandatory to conduct a DPIA – and the ICO suggests that your DPIA process should not only comply with data privacy laws generally but also conform to the specific standards set out in the Guidance.

Therefore, it would be advisable for your DPO, compliance and technical teams to pay careful attention to the contents of the Guidance, as the ICO will take the Guidance into account when taking enforcement action.

The Guidance is divided into four sections. We set out below a brief summary of the key takeaways from each section:

Accountability and Governance

Accountability issues for AI are not unlike governance issues for other technologies. For example, the ICO suggests that your organisation should set its risk appetite, ensure there is senior buy-in, and ensure that compliance is handled by diverse, well-resourced teams rather than left to the technologists.

The ICO recommends that a DPIA is carried out. A DPIA must be meaningful and not a box-ticking exercise. It should be carried out at an early stage of product development and show evidence that less risky alternatives to an AI system were considered. The Guidance includes all of the standard elements of a DPIA (as set out in the GDPR) but also some interesting specifics. The DPIA should include:

  • An explanation of any relevant margins of error in the performance which may affect fairness;
  • An explanation of the degree of human involvement in the decision-making process and at what stage this takes place;
  • Assessment of necessity (i.e. evidence you could not accomplish the purposes in a less intrusive way) and proportionality (i.e. weighing the interests of using AI against the risks to data subjects, including whether individuals would reasonably expect an AI system to conduct the processing);
  • Documentation of trade-offs (e.g. between data minimisation and statistical accuracy) “to an auditable standard”;
  • Consideration of potential mitigating measures to identified risks.

As best practice, there should be both a “technical” and a “non-technical” version of the DPIA, the latter of which is used to explain AI decisions to individual data subjects.

The ICO flags that Controller and Processor relationships are a complicated area in the context of AI. However, the final version of the Guidance stops short of giving specific advice on the characteristics of Controllers, Processors and Joint Controllers. Instead, the ICO will consult with stakeholders on this point, with a view to publishing more detail in updated Cloud Computing Guidance in 2021.

Lawfulness, Fairness and Transparency

On lawfulness, a different legal basis will likely be appropriate for different “phases” of AI technology (i.e. development vs deployment).

The ICO flags key issues which relate to each different type of basis, in particular:

  • Consent – if Article 6(1)(a) of GDPR is relied upon, the consent must meet all of the GDPR’s requirements for valid consent. It may be a challenge to ensure that the consent is specific and informed given the nature of AI technology. Consent must also be capable of being easily withdrawn.
  • Contract – if Article 6(1)(b) of GDPR is relied upon, the processing must be objectively necessary for the performance of the contract – which also means there must be no less intrusive way of processing the data to provide the same service. The ICO adds that this basis may not be appropriate for the purposes of developing the AI.
  • Legitimate Interests – if Article 6(1)(f) of GDPR is relied upon, the “three-part test” should be worked through in the context of a legitimate interests assessment (LIA). Where this basis is used for the development of the AI, the purposes may initially be quite broad, but as more specific purposes are identified, the LIA will have to be reviewed.

On fairness, the Guidance highlights the need to ensure that statistical accuracy (i.e. how often the AI gets the right answer) and risks of bias (i.e. the extent to which the outputs of AI lead to direct or indirect discrimination) are addressed both in development and procurement of AI systems.

On transparency, the ICO refers to its more detailed guidance on transparency, developed alongside the Alan Turing Institute (“Explaining decisions made with AI”), which is available here.

Data Security and Data Minimisation

AI poses new security challenges due to the complexity of the development process and reliance on third parties in the AI ecosystem. In addition to good practice cybersecurity measures (such as ensuring that your organisation tracks vulnerability updates in security advisories), the ICO addresses specific security challenges:

  • Development phase: Technical teams should record all data flows and consider applying de-identification techniques to training data before it is shared internally or externally (a minimal illustrative sketch follows this list); alternative privacy-enhancing technologies (PETs) can also be considered. There are particular challenges because most AI systems are not built entirely in-house but are based on externally maintained software, which itself may contain vulnerabilities (e.g. the “NumPy” Python vulnerability discovered in 2019).
  • Deployment phase: AI is vulnerable to specific types of attack, e.g. “model inversion” attacks, where attackers who already hold some personal data about an individual infer other personal data from how the model operates, and “adversarial” attacks, which involve feeding the system false data that compromises its operation. To minimise the likelihood of attack, pertinent questions should be asked about how the AI is deployed, e.g. what information should the end-user get to access – or even (if your organisation developed the AI) should your external third-party client get to access the model directly, or only through an API?
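
By way of illustration only, the following is a minimal sketch of pseudonymising a training extract before it is shared, in the Python data stack that the ICO’s NumPy example alludes to. The column names, salt and toy dataset are hypothetical and are not taken from the Guidance.

```python
# Minimal sketch: pseudonymise a training extract before sharing it.
# Column names ("customer_id", "email", "spend") and the salt are hypothetical.
import hashlib

import pandas as pd


def pseudonymise(df: pd.DataFrame, salt: str) -> pd.DataFrame:
    """Replace the direct identifier with a salted hash and drop fields not needed for training."""
    out = df.copy()
    out["customer_id"] = out["customer_id"].map(
        lambda value: hashlib.sha256((salt + str(value)).encode()).hexdigest()
    )
    return out.drop(columns=["email"])  # not needed for the model at all


raw = pd.DataFrame(
    {
        "customer_id": [101, 102],
        "email": ["a@example.com", "b@example.com"],
        "spend": [250.0, 40.0],
    }
)
print(pseudonymise(raw, salt="rotate-this-salt-regularly"))
```

A salted hash keeps records linkable within the extract for training purposes while making it harder to trace them back to an individual; whether that is sufficient will depend on the risk assessment for the specific dataset.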

Data minimisation is also a challenge because AI systems generally require large amounts of data. Nevertheless, the principle still needs to be complied with in:

  • Development phase: in the training phase, your organisation needs to consider whether all of the data used is necessary (e.g. not all demographic data about data subjects will be relevant to a particular purpose, such as calculating credit risk) and whether the use of personal data is necessary at all for the purposes of training the model; a simple sketch of this kind of necessity check follows this list. Statistical accuracy needs to be balanced against the principle of data minimisation, and privacy-enhancing methods, such as the use of “synthetic” data, should be considered.
  • Deployment phase: in the inference phase, it may be possible to minimise the data processed, e.g. by converting personal data into less “human readable” formats (such as facial recognition using “faceprints” instead of digital images of faces), or by only processing data locally on the individual’s device.
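
As a rough, purely illustrative sketch of the necessity point above, one way to evidence a data-minimisation decision is to test whether a candidate field (here, a hypothetical “age” column on a synthetic dataset) materially improves the model at all. The features, dataset and scikit-learn model are assumptions, not drawn from the Guidance.

```python
# Minimal sketch: does the model actually need the demographic field?
# The dataset is synthetic and the feature names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
data = pd.DataFrame(
    {
        "income": rng.normal(30_000, 8_000, 500),
        "existing_debt": rng.normal(5_000, 2_000, 500),
        "age": rng.integers(18, 80, 500),  # candidate for removal
    }
)
target = (data["existing_debt"] / data["income"] > 0.2).astype(int)

for features in (["income", "existing_debt", "age"], ["income", "existing_debt"]):
    score = cross_val_score(
        LogisticRegression(max_iter=1000), data[features], target, cv=5
    ).mean()
    print(features, round(score, 3))
# If accuracy is materially unchanged without "age", retaining it is hard to justify
# under the data minimisation principle – and the comparison can be recorded in the DPIA.
```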

Anonymisation may also play an important role in data minimisation in the context of AI technologies. The ICO states that it is currently developing new guidance in this field.

Individual Rights

During the AI lifecycle, organisations will have to consider how to operationalise the ability for individuals to exercise their rights:

  • Development phase: it may be challenging to identify the personal data of a data subject in training data, due to the “pre-processing” that is applied to that data (e.g. stripping out identifiers). However, if it remains personal data, your organisation will still have to respond (a simple sketch of handling an erasure request against pseudonymised training data follows this list). Where the request relates to data incorporated in the model itself, in certain cases (e.g. where the individual exercises their right to erasure) it may be necessary to erase the existing model and/or re-train the model.
  • Deployment phase: typically, once deployed, the outputs of an AI system are stored in the profile of an individual (e.g. targeted advertising driven by a predictive model based on a customer’s profile) – which, of course, may be easier to access for compliance purposes. The ICO suggests that requests for rectification of model outputs are more likely than requests relating to training data. Data portability does not apply to inferred data, so it is unlikely to apply to the outputs of AI models.
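
To make the development-phase point above concrete, here is a minimal, hypothetical sketch of locating and removing a data subject’s rows from a pseudonymised training set and flagging the model for re-training. The salted-hash key, record layout and retraining flag are assumptions rather than anything prescribed by the Guidance.

```python
# Minimal sketch: handle an erasure request against pseudonymised training data.
# Assumes pre-processing kept a salted-hash subject key, so the data subject can
# still be located; the key scheme and record layout are hypothetical.
import hashlib


def subject_key(customer_id: str, salt: str) -> str:
    return hashlib.sha256((salt + customer_id).encode()).hexdigest()


def erase_subject(rows: list[dict], customer_id: str, salt: str) -> tuple[list[dict], bool]:
    key = subject_key(customer_id, salt)
    remaining = [row for row in rows if row["subject_key"] != key]
    # If anything was removed, flag the model for re-training so the erased
    # records no longer influence its parameters.
    needs_retraining = len(remaining) != len(rows)
    return remaining, needs_retraining


training_rows = [
    {"subject_key": subject_key("101", "salt"), "spend": 250.0},
    {"subject_key": subject_key("102", "salt"), "spend": 40.0},
]
training_rows, retrain = erase_subject(training_rows, "101", "salt")
print(len(training_rows), retrain)  # 1 True
```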

Automated decision-making requires careful consideration. Article 22 of GDPR will apply, unless there is human input – which must be meaningful and not a “rubber-stamp”. Where AI is used to assist human decision-making (but human input is involved, so it is not solely automated decision-making), the ICO states that your organisation should train the decision-makers to tackle:

  • Automation bias (i.e. humans routinely treating a machine’s output as inherently trustworthy rather than applying their own judgement).
  • Lack of interpretability (i.e. outputs that are difficult for humans to interpret, so reviewers simply agree with the system’s recommendations rather than using their own judgement).

Human reviewers should have the authority to override the output generated by the AI system and should be monitored to check whether they are routinely agreeing with the AI system’s outputs.
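
Purely as an illustration of how such monitoring might be operationalised, the following sketch computes the reviewers’ agreement rate with the model’s recommendations. The record structure and the alert threshold are hypothetical and not taken from the Guidance.

```python
# Minimal sketch: monitor how often human reviewers simply accept the model's
# recommendation, as one indicator of automation bias. Fields and the threshold
# are hypothetical.
from dataclasses import dataclass


@dataclass
class ReviewedDecision:
    model_recommendation: str
    final_human_decision: str


def agreement_rate(decisions: list[ReviewedDecision]) -> float:
    agreed = sum(d.model_recommendation == d.final_human_decision for d in decisions)
    return agreed / len(decisions)


decision_log = [
    ReviewedDecision("reject", "reject"),
    ReviewedDecision("approve", "reject"),
    ReviewedDecision("reject", "reject"),
]
rate = agreement_rate(decision_log)
print(f"Agreement rate: {rate:.0%}")
if rate > 0.95:  # illustrative threshold - persistently high agreement may warrant review
    print("Reviewers may be rubber-stamping the model's outputs; investigate.")
```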

Conclusion

The Guidance is concise, focused and pragmatic.

A forthcoming ICO “toolkit” for organisations, linked to the Guidance, is also planned. Whether it will include a suggested framework for an “Enhanced DPIA” remains to be seen, but that would be a welcome addition for DPOs in a fast-moving industry where compliance needs to be proactive rather than reactive.

More articles on AI, including our piece on Artificial Intelligence in Smart Cities, are available on our Business Going Digital microsite.
