What Is The GDPR?

The GDPR is new legislation that will replace the EU’s Data Protection Directive, which has been in place since 1995. The arrival of the digital age means that the way people understand and interact with data is changing rapidly. The GDPR helps to clarify individual rights in the digital age and creates a “digital single market” within the EU. With the GDPR in place, it will be easier for citizens to have control over their personal data, representing a shift in power.

The underlying principle of the GDPR is that the protection of personal data is a fundamental right, and organizations that handle personal data are responsible for upholding those rights. “Processing” data means collecting, sharing, distributing, structuring, storing, or otherwise using an individual’s data. In this relationship, there are controllers and processors. A controller determines the purpose and means of processing personal data and is usually the collector of the data. Processors are engaged to process data on behalf of the controller, but controllers are responsible for monitoring processors’ compliance.

The GDPR affects the North American market because any organization that offers goods or services to the EU or that monitors the behaviour of people within the EU is responsible for complying with the GDPR.

There are three key principles of the regulation:

  1. Limitation of processing means that: data must be processed only for specified, explicit and legitimate purposes; data must not be further processed in ways inconsistent with the initial purposes; data should be adequate, relevant, and necessary; data should be accurate and kept up-to-date; data should be kept only as long as necessary.
  2. Informed consent refers to freely given and clearly affirmative consent that must be intelligible, easily accessible, and written in plain language. Participants have the right to withdraw consent, and services cannot be withheld on condition of consent.
  3. Lawful processing requires that at least one of the following conditions be met:
    1. Consent from the data subject
    2. Processing is necessary for a contract
    3. Processing is necessary for compliance with EU laws
    4. Processing is necessary to protect a person’s vital interests
    5. Processing in the public interest or exercise of official authority
    6. Legitimate interests of the controller or a third party that are not overridden by the data subject’s rights and freedoms

Key Terms

  • Right to be Forgotten

This concept refers to personal data being deleted when the data subject no longer wants it to be processed. The exception to this is when there is legitimate reason to retain the data, for instance, in the case of completing a contract or complying with legal obligations.

  • Informed Consent

Information is made readily available and is communicated in clear, plain language. Informed consent will especially be enforced regarding services for children.

  • Right to Data Portability

Data subjects have a right to a copy of their personal data in an appropriate format and, where possible, they can transfer that data directly from one service provider to another. For example, individuals should be able to transfer photos from one social network to another.

  • Data Protection by Design and Default

This aspect helps protect users’ data by design, for instance by implementing technical safeguards like anonymization, pseudonymization, and encryption, as well as organizational safeguards.
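As an illustration of one such technical safeguard, pseudonymization can be approximated by replacing a direct identifier with a keyed hash. This is a minimal sketch only, not a complete GDPR-compliant implementation; the key and record fields below are hypothetical:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Unlike plain hashing, a keyed HMAC prevents re-identification by
    anyone who does not hold the secret key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative use: in practice the key would be stored separately
# from the pseudonymized dataset.
key = b"example-secret-key"  # hypothetical key, for demonstration only
record = {"patient_id": "A-12345", "diagnosis": "J45"}
record["patient_id"] = pseudonymize(record["patient_id"], key)
```

The same identifier always maps to the same pseudonym under a given key, so records can still be linked for analysis, while anyone without the key cannot reverse the mapping.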

  • Mandatory Data Protection Officer

A DPO fills the need for an organization to help monitor privacy and data protection. A DPO is an expert in their field, and is required if an organization’s core activities consist of regular and systematic monitoring of personal data on a large scale. This position helps ensure compliance and awareness of privacy legislation. The DPO may also monitor internal data protection activities, train staff, and conduct internal audits. If data subjects have inquiries, these will go through the DPO as well.

 

Company Response

Companies are responding to the GDPR in several ways:

  1. Stop buying and selling personal data
  2. Know where your clients live, or implement EU requirements regardless of location
  3. Prepare to respond to requests from data subjects
  4. Audit sub-contractors for compliance
  5. Reconsider cloud services

eDiscovery and Audits: The Solution to Unauthorized Access

Electronic medical records (EMRs) contain sensitive personal information that is strongly protected in many jurisdictions. In Ontario, EMRs are protected under the Personal Health Information Protection Act (PHIPA), which limits authorized access to professionals who are currently providing healthcare services to the patient or have otherwise been given consent to access the patient’s records. Despite PHIPA’s definition of and limits to authorized access, many of Ontario’s healthcare organizations (such as hospitals and clinics) operate open-access EMR database systems, meaning all healthcare staff have access to all records. Although healthcare organizations are responsible for protecting patients’ information from unauthorized access and distribution, the risk of unprofessional conduct is not well mitigated.

 

Unauthorized access to EMRs, colloquially termed snooping, constitutes the access and/or disclosure of private medical records without consent from the patient. Not all snooping is committed with malicious intent, and several cases in recent years have admitted curiosity or genuine concern as the reason behind the unauthorized access.

 

Regardless of the intention behind the act, snooping can have severe consequences for both the healthcare professional and the patient, even if the security breach is small in scale.

An offending healthcare professional could face loss of employment, relinquishment of their medical license, a civil lawsuit, and a maximum fine of $50,000 for violating PHIPA regulations. As for the patient, the cases with the most severe consequences are usually smaller, isolated incidents involving social media and the malicious dispersal of private medical records. A 2014 case in Indiana offers such an example: a young woman’s positive STD test results were posted to Facebook by a former high school peer who was working as a nurse at the young woman’s local hospital. Needless to say, the reputational and emotional damage was irreversible.

 

While these cases are extremely serious, they are still relatively rare amongst healthcare-related lawsuits such as malpractice cases. However, the gradual increase in EMR privacy-related lawsuits and the understanding that electronic records can be manipulated have created a demand for reliable software tools that will efficiently identify, collect, and make accurate EMR data available to attorneys. This includes metadata such as the identities of the healthcare providers who accessed the records and the corresponding timestamps, both of which are very important for litigation. The practice of using these tools to find and freeze relevant data (meaning the data cannot be modified or deleted) is called eDiscovery.

 

Essentially, eDiscovery produces an “audit trail” from EMRs that is more accurate and reliable than simply accepting whatever records the healthcare provider’s attorney produces, as done in the past.

Using technology to sort and extract relevant data provides attorneys and healthcare organizations the advantage of being able to sift through more data faster. This is extremely useful given that the average Ontario hospital will generate approximately one million EMR access logs per week, or approximately 52 million in a given year.

 

However, challenges remain in assessing access logs and determining the purpose and validity of each. Some healthcare organizations use random audits to examine EMRs that have been accessed two or three times (or more) in a week; this method works similarly to a spot check by identifying anomalies and catching offenders off-guard. Another approach is running regular condition-based audits: if a given patient’s EMR access logs meet pre-determined criteria that the healthcare organization has set, the access logs are flagged for further examination. While this approach may seem more methodical and likely to identify questionable activity, it also tends to generate many false alarms. Both random audits and regular condition-based searches make good use of available data analytics technology; however, both also require a level of human judgement and oversight that is not easily replaced by software.
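A condition-based audit rule of this kind can be sketched in a few lines. The log fields, names, and the three-accesses-per-week threshold below are illustrative assumptions, not a real hospital schema:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical access-log entries: (staff_id, patient_id, timestamp).
logs = [
    ("nurse01", "P1", datetime(2017, 3, 6, 9, 0)),
    ("nurse01", "P1", datetime(2017, 3, 7, 14, 30)),
    ("nurse01", "P1", datetime(2017, 3, 9, 11, 15)),
    ("doc02",   "P2", datetime(2017, 3, 6, 10, 0)),
]

def flag_repeated_access(logs, threshold=3):
    """Flag (staff, patient) pairs whose accesses within a single ISO week
    meet or exceed the threshold -- a simple stand-in for a
    condition-based audit rule."""
    counts = defaultdict(int)
    for staff, patient, ts in logs:
        week = ts.isocalendar()[:2]  # (ISO year, ISO week number)
        counts[(staff, patient, week)] += 1
    return [key for key, n in counts.items() if n >= threshold]
```

Flagged pairs would then go to a privacy officer for the human review that, as noted above, software cannot fully replace.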

 

So, what can be done to reduce, prevent, and prosecute unauthorized access to EMRs? Ontario’s Information and Privacy Commissioner has identified several methods of minimizing the risk of snooping, including the development of a comprehensive training program on privacy responsibilities for healthcare providers, and mandatory confidentiality agreements that are required prior to healthcare providers gaining access to medical record databases. These are feasible options, but they are only as strong as each healthcare organization’s resolve to implement them. Implementing advanced analytic technology could offer a positive way forward in better understanding this behaviour and improving prosecution against offenders.

 

As software developers create new methods of working with big data and continue to push analytic software and artificial intelligence to the next level, new solutions to complex issues like EMR privacy violations will emerge. The human judgement and oversight still required by the auditing approaches described above can be reduced, if not eliminated, provided the audit solution has sophisticated data mining algorithms and well-integrated process triggers. KI Design’s Audit Solutions, powered by Maize, have superior data mining and filtering abilities that can not only identify suspicious behaviour but also connect employee data access with potential clinical or operational reasons for accessing the data. If an access record is flagged as particularly suspicious, the audit software triggers the risk assessment and incident response processes and provides relevant information to the healthcare organization’s privacy officer for manual review.

Healthcare organizations and attorneys no longer need to depend on manually sifting through EMRs and their access logs. Audit technology is more advanced and reliable than ever, and will likely play an important role in improving the eDiscovery process, leading to better outcomes in snooping lawsuits.

Social Media Analytics Drivers

By Aydin Farrokhi and Dr. Wael Hassan

Today, the public has remarkable power and reach to share news and express opinions about any product or service, or to react to an existing state of affairs, especially regarding social or political issues. For example, in marketing, consumer voices can have an enormous influence in shaping the opinions of other consumers. Similarly, in politics, public opinion can influence loyalties, decisions, and advocacy.

While organizations are increasingly adopting and embracing social media, the motive for each establishment varies. Some of the key drivers for adopting social media include:

Economic drivers:

  • Market research and new product development
  • Need for better consumer insight
  • Need to gain competitive advantage
  • Need to improve customer service
  • Need to develop new products and services
  • Need to increase Return on Marketing Investment (ROMI)

Top strategic actions to maximize social media spend:

  • Improve ability to respond to customers’ wants and needs
  • Build social media measurement into marketing campaigns and brand promotions
  • Maximize marketing campaign reach and effectiveness
  • Align social media monitoring capabilities to overall business objectives

 

Political drivers:

  • Public opinion research and new messaging
  • Need for better public insight
  • Need to gain competitive advantage
  • Need to improve public relations
  • Need to develop new policies
  • Need to increase Return on Campaigning Investment (ROCI)

Top strategic actions to maximize social media spend:

  • Improve ability to respond to the public’s wants and needs
  • Build social media measurement into political campaigns and publicity promotions
  • Maximize political campaign reach and effectiveness
  • Align social media monitoring capabilities to overall political agenda

 

In general, there are three major categories of methods for analyzing social media data. These analytical tools can be grouped into Content Analysis tools, Group and Network Analysis tools, and Prediction tools.


Overcoming the Challenges of Privacy of Social Media in Canada

By Aydin Farrokhi and Dr. Wael Hassan

In Canada, data protection is regulated by both federal and provincial legislation. Organizations that capture and store personal information are subject to several laws. The federal Personal Information Protection and Electronic Documents Act (PIPEDA), which governs personal information handled in the course of commercial activities, came fully into force in 2004. PIPEDA requires organizations to obtain consent from individuals whose data is being collected, used, or disclosed to third parties. By definition, personal data includes any information that can be used to identify an individual, other than information that is publicly available. Personal information can only be used for the purpose for which it was collected, and individuals have the right to access their personal information held by an organization.

Amendments to PIPEDA 

PIPEDA’s compliance and enforcement mechanisms may not be strong enough to address the privacy implications of big data. The Digital Privacy Act (also known as Bill S-4) received Royal Assent and is now law. Once fully in force, it will allow the Privacy Commissioner to bring a motion against a violating company, with fines of up to $100,000.

The Digital Privacy Act amends and expands PIPEDA in several respects:

 

  1. The definition of “consent” is updated: the DPA adds to PIPEDA’s consent and knowledge requirement a reasonable expectation that the individual understands the nature, purpose, and consequences of the collection, use, or disclosure of their personal data. Children and vulnerable individuals are given specific consideration.

There are some exceptions to this rule, such as managing employees, fraud investigations, and certain business transactions.

  2. Breach reporting to the Commissioner is mandatory (not yet in force)
  3. Timely breach notifications must be sent to the impacted individuals: the mandatory notification must explain the significance of the breach and what can be done, or has been done, to lessen the risk of harm
  4. Breach record keeping is mandated: records must be kept of all breaches affecting personal information, whether or not there was a real risk of significant harm. These records may be requested by the Commissioner, be required in discovery by a litigant, or be requested by an insurance company to assess premiums for cyber insurance
  5. Failure to report a breach to the Commissioner or the impacted individuals may result in significant fines

Cross-Border Transfer of Big Data

The federal Privacy Commissioner’s position on personal information transferred to a foreign third party is that the transferred information is subject to the laws and regulations of the foreign country, and no contract can override those laws. Consent is not required to transfer personal data to a foreign third party; however, depending on the sensitivity of the data, individuals may need to be notified that their information may be stored or accessed outside of Canada, along with the potential impact this may have on their privacy rights.

Personal Information: Ontario Privacy Legislation

The Freedom of Information and Protection of Privacy Act, the Municipal Freedom of Information and Protection of Privacy Act, and the Personal Health Information Protection Act are the three major pieces of legislation that organizations such as government ministries, municipalities, police services, health care providers, and school boards must comply with when collecting, using, and disclosing personal information. The Office of the Information and Privacy Commissioner of Ontario (IPC) is responsible for monitoring and enforcing these acts.

In big data projects, the IPC works closely with government institutions to ensure compliance with the laws. In such projects, information collected for one reason may be used together with information acquired for another reason. If not properly managed, big data projects may be contrary to Ontario’s privacy laws.

 

Political Cyber Security

The daily life and economy of the global citizen depend increasingly on a stable, secure, and resilient cyberspace. Even before he was elected president, Donald Trump promised to make cybersecurity “an immediate and top priority for [his] administration.” Yet, months into his presidency, Trump and global leaders worldwide have struggled with how policymakers should use their personal technology.

Cybersecurity has gotten sucked into the inevitable vortex of politicization.

Perhaps things first came to media attention when it was discovered that Hillary Clinton had used a private email server while she was Secretary of State. In response, Clinton said that her use of personal email was in compliance with federal laws and State Department regulations, and that former secretaries of state had also maintained personal email accounts, though not their own private email servers. In a summary of its investigation into Clinton’s use of private email, the FBI concluded that a username and password for an email account on the server had been compromised by an unknown entity, which logged into the compromised account, read messages, and browsed attachments using a service called Tor. Unique to Clinton’s case is that the FBI repeatedly noted that if a breach did occur, its agents might not be able to tell, but that there was no evidence to indicate that her personal email account was hacked.

More recently, on May 5th, 2017, the campaign of French presidential candidate Emmanuel Macron was hit with leaked emails and other documents posted on a file-sharing website. Security analysts believe that the huge leak of emails from Macron’s campaign team may have been coordinated by the same group of individuals behind the Democratic National Committee leak that affected Clinton. In fact, the Macron campaign directly compared the hack to the targeting of the Clinton campaign, in a statement that read: “Intervening in the last hour of an official campaign, this operation clearly seeks to destabilize democracy, as already seen in the United States’ last president campaign. We cannot tolerate that the vital interests of democracy are thus endangered.”

The ‘Macron hack’ emerged when an anonymous poster provided links to documents on Pastebin with the message: “This was passed on to me today so now I am giving it to you, the people.” This serves as an example of how authentic documents can easily be mixed with fakes on social media to perpetuate false messages that can harm political campaigns. While France’s electoral commission aimed to prevent the hack from influencing the election by warning local media that sanctions could be imposed on them if they spread the information, the overall effect the leak will have on Macron is unknown.

While we acknowledge that it is difficult to assess the impact of breaches of a single account on a server, these incidents raise fresh questions about the security of politicians’ other electronic accounts.

Politicians are particularly vulnerable to cybersecurity threats for the following reasons:

  • Politicians use different or even multiple platforms (Windows, mobile, apps, etc.), different email systems (Gmail, Hotmail, corporate Exchange, Yahoo), and different file-sharing systems (Dropbox, Box, iCloud), which makes it harder to apply the strictest security standards to each one.
  • Politicians work with many individuals for short periods of time, such as volunteers. As such, it is sometimes hard to know who you are working with.
  • There is also a lack of centralized administration. Cybersecurity tends to cut across traditional political fault lines, making it at best confusing territory for politicians.

Regardless of which side of the political aisle your ideas land on, there is little debate that cybersecurity continues to be a hot issue. Nowadays, ignoring cyber issues could derail a politician’s career. Whether for governments, individuals, or campaigns, the political cybersecurity world has experienced a resurgence of threats.

Fortunately, the Blockchain’s alternative approach to storing and sharing information provides a way out of this security mess for four very important reasons:

  1. The decentralized consensus nature of Blockchains makes them almost impossible to break into.
  2. They are platform-agnostic, running on any combination of operating system and underlying processor architecture.
  3. Once configured, they do not need an administrator.
  4. Malware cannot break into them.

A Blockchain is a register of records prepared in data batches called blocks that use cryptographic validation to link themselves together. Publishing keys on a Blockchain instead would eliminate the risk of false key propagation and enable applications to verify the identity of the people you are communicating with. Similarly, using a public Blockchain like Bitcoin would mean your entire system is decentralized with no single point of failure for attackers to target. As of right now, Estonia is one of the first countries to use Blockchain this way, although other governments are slowly warming up to Blockchain technology.
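The hash-chaining idea described above can be illustrated in miniature. This is a toy sketch of cryptographic linking only, with hypothetical record contents; it deliberately omits consensus, mining, and networking:

```python
import hashlib
import json

def make_block(records, prev_hash):
    """Create a block whose hash commits to its records and to the hash
    of the previous block, chaining the two together."""
    body = json.dumps({"records": records, "prev": prev_hash},
                      sort_keys=True)
    return {"records": records, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    """Recomputing each hash detects tampering with any earlier block."""
    for i, block in enumerate(chain):
        body = json.dumps({"records": block["records"],
                           "prev": block["prev"]}, sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical use: publishing public keys, as described in the text.
genesis = make_block(["key: alice -> 0xA1"], prev_hash="0" * 64)
second = make_block(["key: bob -> 0xB2"], prev_hash=genesis["hash"])
chain = [genesis, second]
```

Because each block’s hash covers the previous block’s hash, altering any published record breaks validation for every block that follows it, which is what makes false key propagation detectable.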

Moreover, there’s a rising tide of big data analytics to help combat cyber-threats and attackers. Social analytics tools can be the first line of defense for politicians by combining machine learning and text-mining models to provide an all-inclusive, amalgamated approach to security threat prediction, detection, and deterrence.

Cyberspace is the underlying infrastructure that holds the key to technological modernity. The types of threats that have impacted politicians in the USA and Europe are real and actively happening. Blockchains and analytic tools will not be the golden ticket that fixes everything wrong with cybersecurity for politicians, but they can be a place to start. The Blockchain provides innovations that current systems and politicians could embrace.

For more information on how to protect yourself as a politician, please contact Waël Hassan, PhD.

Evaluating Anonymization Methods

Article 29 Data Protection Working Party

The European Commission’s Article 29 Data Protection Working Party provides a useful set of criteria for evaluating anonymization methods in its “Opinion on Anonymization Techniques” (2014):

  • Is it still possible to single out an individual?
  • Is it still possible to link records relating to an individual?
  • Can information be inferred concerning an individual?

 

The first criterion means that it should not be possible to discover information about a specific individual or small group of individuals. For example, if only three individuals in an anonymized hospital dataset share a diagnosis, the dataset fails the test of singling out. The second means that it should not be possible to link different records pertaining to an individual or group. For example, a dataset that includes individuals’ occupations as well as demographic information could potentially be linked to publicly available profiles on LinkedIn, social media, or registers of professionals or government employees. Third, it should not be possible to infer potentially identifying attributes based on other attributes in a dataset. For example, location data collected through smartphones, which has sometimes been released as part of open datasets, usually makes it possible to infer the location of an individual’s home and office.

To evaluate re-identification risk, the Article 29 Working Party also suggests understanding identity as multidimensional, with each clear attribute as a coordinate. Whenever it is possible to analyze a region of this multi-dimensional space that contains only a few points, there is a risk of individuals being re-identified. In other words, any combination of properties that is unique to a particular individual or a very small group of individuals poses a risk of re-identification. Anonymity is protected when it is only possible to analyze sizeable “clusters” of individuals who cannot be distinguished from one another based on their attributes.

Here’s an example of the application of anonymization techniques to prevent the singling out of individuals or small subgroups:

A hospital database is being anonymized so that it can be shared with a medical research institute. Patient names and health card numbers have been deleted from the dataset, and dates of birth and death have been generalized to years of birth and death only. Dates of diagnosis and treatment have been generalized to monthly intervals. Data fields that remain unchanged are diagnosis and treatment procedures. If, say, only three individuals born in 1982 received a particular diagnosis in March 2014, the risk of re-identification is too high. One option is to delete these records. The other is to apply additional anonymization, perhaps by generalizing years of birth to ten-year intervals (e.g., 1980-1989, or alternatively age 30-39).
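The singling-out check in this hospital example can be sketched as a short script. The records, field choices, and the cluster-size threshold of five are illustrative assumptions:

```python
from collections import Counter

# Hypothetical anonymized records: (year_of_birth, diagnosis_month, diagnosis).
records = [
    (1982, "2014-03", "C50"),
    (1982, "2014-03", "C50"),
    (1982, "2014-03", "C50"),
    (1985, "2014-03", "C50"),
    (1985, "2014-03", "C50"),
]

def small_subsets(records, k=5):
    """Return attribute combinations shared by fewer than k individuals --
    the subsets at risk of being singled out."""
    counts = Counter(records)
    return {combo: n for combo, n in counts.items() if n < k}

def generalize_birth_decade(record):
    """Additional anonymization step: generalize year of birth to a decade."""
    year, month, diagnosis = record
    return (f"{year // 10 * 10}s", month, diagnosis)
```

Here `small_subsets(records)` flags both birth-year groups as too small; after applying `generalize_birth_decade`, the five records collapse into a single “1980s” cluster of size five and the risky subsets disappear.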

The key to anonymization lies not in deleting particular types of data, but in preventing the occurrence of subsets of one or a few individuals with a specific set of characteristics. The concept of dimensions of identity provides a starting point towards this goal by helping to break down a dataset and suggest possibilities for anonymization. Dimensions not relevant to a particular purpose can be eliminated from the dataset. Within each of the remaining dimensions, the most specific fields can be deleted, randomized, or generalized. Finally, any very small subsets remaining can be identified and deleted. When this is accomplished, the risk of re-identification approaches zero, as any unique or distinct attributes of individuals have been concealed.

Reference

Article 29 Data Protection Working Party, Opinion 05/2014 on Anonymisation Techniques. 

Inappropriate Access detection using Machine Learning

Detecting Inappropriate Access to Personal Health Information

“While PHIPA has served Ontarians well over the last decade, rapid changes in technology and communications are demanding that we keep pace. With the growing use of electronic health records, the province needs a legislative framework that addresses the rights of individuals and the duties and obligations of health care providers in an electronic environment. Modernizing PHIPA will pave the way for a smooth and seamless transition toward 21st century health care while protecting our privacy.” – Brian Beamish, Information and Privacy Commissioner of Ontario

 

Event:  2016 PHIPA Connections Summit www.phipasummit.ca

Using Machine Learning to Detect Healthcare Snoopers

Talk By Dr. Wael Hassan and Dr. Daniel Fabbri

Open Electronic Medical Record (EMR) access environments trade clinician efficiency for patient privacy. Monitoring EMR accesses for inappropriate use is challenging due to access volumes and hospital dynamics. This talk presents the Explanation-Based Auditing System, which uses machine learning to quickly identify suspicious accesses, improving compliance officer efficiency and patient privacy.
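The core filtering principle behind explanation-based auditing can be sketched roughly as follows. This is not the actual Explanation-Based Auditing System, just an illustration of the idea, with hypothetical data: accesses that can be explained by a nearby appointment between the same staff member and patient are set aside, and only the remainder is queued for compliance review:

```python
from datetime import datetime, timedelta

# Hypothetical data: accesses and appointments as (staff_id, patient_id, time).
accesses = [
    ("nurse01", "P1", datetime(2016, 5, 2, 9, 30)),  # during a scheduled visit
    ("nurse01", "P9", datetime(2016, 5, 2, 22, 0)),  # no clinical reason on file
]
appointments = [
    ("nurse01", "P1", datetime(2016, 5, 2, 9, 0)),
]

def unexplained_accesses(accesses, appointments, window=timedelta(hours=24)):
    """Filter out accesses that occur near an appointment between the same
    staff member and patient; what remains is queued for manual review."""
    flagged = []
    for staff, patient, t in accesses:
        explained = any(
            s == staff and p == patient and abs(t - at) <= window
            for s, p, at in appointments
        )
        if not explained:
            flagged.append((staff, patient, t))
    return flagged
```

By discarding the bulk of accesses that have routine explanations, a compliance officer’s attention is concentrated on the small set of accesses that lack one.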

 

Featuring:

Daniel Fabbri
PhD. Assistant Professor of Biomedical Informatics and Computer Science, Vanderbilt University,
Maize Analytics
Daniel Fabbri, Ph.D., is an Assistant Professor of Biomedical Informatics in the School of Medicine at Vanderbilt University. He is also an Assistant Professor of Computer Science in the School of Engineering. His research focuses on database systems and machine learning applied to electronic medical records and clinical data. He developed the Explanation-Based Auditing System, which uses data mining techniques to help hospital compliance officers monitor accesses to electronic medical records in order to identify inappropriate use. He received a National Science Foundation Innovation Corps award to commercialize this auditing technology at Maize Analytics.

Beyond research, he has participated in the A World In Motion program, which teaches elementary and middle school children physics through weekly interactive experiments such as building toy cars powered by balloons. He received his doctorate in computer science from the University of Michigan, Ann Arbor, and a bachelor of science in computer science and engineering from the University of California, Los Angeles. Prior to joining Vanderbilt, he interned at Google, Microsoft Research, Goldman Sachs, Lockheed Martin, and Yahoo.

Students interested in research topics on machine learning, data management, and the security of electronic medical records and clinical data should consider applying to the Vanderbilt Biomedical Informatics or Computer Science graduate programs.

Selected invited talks:

  • The Open Web Application Security Project, Chicago, 2014.
  • Safeguarding Health Information: Building Assurance through HIPAA Security, U.S. Health and Human Services Department, Washington D.C., 2013.
  • Archimedes Workshop on Medical Device Security, University of Michigan, Ann Arbor, 2013.
Wael Hassan
Founder: Big Data, Privacy and Risk,
Ki Design Magazine
Dr. Waël Hassan is one of North America’s leading advisors on privacy and cyber security innovation. He advises both political and industry organizations to help them better understand privacy and cyber security technology and adoption. He has in-depth knowledge of privacy laws across Canada, the EU, and the US, and holds the first Canadian PhD in Validation of Legal Compliance. In his role, Waël advances his clients’ interests on a range of issues, including internet freedom, cyber security, surveillance, disaster response, product certification, and risk metrics. Dr. Hassan founded KI DESIGN Magazine, http://magazine.kidesign.net, where he writes a regular column. Waël’s highly anticipated book, Privacy in Design: A Practical Guide for Corporate Compliance, will be released in Spring 2017.

Data Protection in Design

Time for a New Vision

Up until now, we have viewed privacy and security on the same sliding scale, through which it appears to be impossible to have one without hurting the other. Envisioning a country where privacy is prioritized over security and surveillance seems absurd. However, it is time that we disrupt this traditional way of thinking.

How? Through Data Protection in Design. By developing and building data protection into the design of private, public, and political systems, citizens would have the ability to express their desires, change the system, and influence government, all the while minimizing the risk to national or public safety. Instead of pitting the forces for privacy and the forces for security against one another, the two forces should be integrated in order to reap the benefits of both.

It is no longer about striking a balance between privacy freedoms and security, but about achieving both outcomes in an effective way.

IAM Maturity Model

Identity and Access Management (IAM) has two seemingly opposed purposes: to enable user access to information, and to block user access to restricted information. In fact, strong security and user-friendly access are by no means mutually exclusive: a mature IAM solution provides both. Read a summary of my IAM Maturity Model.

What is Legal Compliance?

 

A set of enterprise requirements is considered compliant with the law if the requirements are both legally consistent and legally complete with respect to that law.

 

 

Legal Compliance is about Legal Consistency & Completeness
Legal Compliance

 

 

The figure above shows the proposed methods for consistency and completeness checking. The square boxes represent the methods, which we have partially presented in the previous post: model consistency check, scenario check, ontology check, and coverage check.