“False Light” – Canada’s Newest Tort

A tort recognized by the Ontario Superior Court of Justice last month expands privacy protections for Canadians by adopting a well-established US cause of action.

 

Torts are an essential element of common law. A tort is a wrongful act or injury that leads to physical, emotional, or financial damage to a person, for which another person can be held legally responsible. Torts may be either intentional or unintentional (a tort may be caused by negligence, for example). Law in this area usually develops through legal precedent, as decisions by the highest courts expand the scope and application of a tort.

 

The Ontario case, V.M.Y. v. S.H.G., concerned cyberbullying of a particular nature: in an ongoing campaign of harassment, a father posted images, petitions, and videos of his ex-wife and her parents, along with comments that accused them of numerous illegal acts, including kidnapping, child abuse, assault, and fraud.

 

The tort, new to Canadian law, is that of “publicity placing a person in a false light.” The court found that “the wrong is in publicly representing someone, not as worse than they are, but as other than they are. The value at stake is respect for a person’s privacy right to control the way they present themselves to the world.”[1] 

 

This tort is already well established in US courts. Indeed, in recent years, three of the four key US privacy-related torts have been adopted into Canadian law. These common-law torts, originally catalogued by American jurist William L. Prosser, are:

  1. Intrusion upon seclusion or solitude or private affairs;
  2. Public disclosure of embarrassing private facts;
  3. Publicity which places the plaintiff in a false light in the public eye; and
  4. Appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness.

The first three have been adopted in Canada in the following decisions:

Tort                               | Case              | Jurisdiction and year
Intrusion upon seclusion           | Jones v. Tsige    | Ontario, 2012
Public disclosure of private facts | Doe v. ND         | Ontario, 2016
False light                        | V.M.Y. v. S.H.G.  | Ontario, 2019

 

All three Canadian decisions referenced Prosser’s work.

 

As I pointed out in my book Privacy In Design: A Practical Guide to Corporate Compliance (page 62):

 

Liability in these four US privacy torts often hinges upon whether the violation can be considered “highly offensive to a reasonable person.” It isn’t always easy to predict how broadly this category may be defined by a US court, but following prior judgements, certain trends can be noted:

  • “Highly offensive” actions include: snooping into mail, secretly recording conversations, illegally accessing financial records, disclosure of autopsy photos or medical data, and disclosure of debts.
  • “Not highly offensive” actions include: disclosure of reputation-enhancing information, union membership, non-embarrassing facts, minor injuries, or information that causes “minor and moderate annoyance.”

The court in V.M.Y. v. S.H.G. followed the US definition of the tort by making culpability hinge on whether the act in question is “highly offensive.” Justice Kristjanson wrote: “It is enough for the plaintiff to show that a reasonable person would find it highly offensive to be publicly misrepresented as they have been.”[2]

 

This case is a victory for privacy rights in Canada. Those who use the Internet to harass others have had their wings clipped; this case and the tort it creates expand the legal remedies available to their victims.

 

Privacy In Design: A Practical Guide to Corporate Compliance (2019) is available on Amazon: https://www.amazon.ca/dp/B07S9Q2CYZ/ref=cm_sw_em_r_mt_dp_U_fVonEbWHGC7ZS

 

 

[1] V.M.Y. v. S.H.G., [2019] O.J. No. 6702.

[2] Ibid.

Using AI to Combat AI-Generated Disinformation

Can AI impact election outcomes? How can this be combatted?

Canada General Election 2019 and US Presidential Race 2020

As citizens worry about interference in the democratic process in general and in elections in particular, some governments are attempting to mitigate the risks. In December 2018, the Government of Canada’s Standing Committee on Access to Information, Privacy and Ethics released a report, Democracy under Threat: Risks and Solutions in the Era of Disinformation and Data Monopoly. Initiated in response to the Facebook/Cambridge Analytica scandal, the report examines, among other things, the risks posed to the Canadian electoral process by the manipulation of big data and artificial intelligence (AI). A year earlier, in 2017, the Senate Intelligence Committee published Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution, a full review and comprehensive intelligence assessment of Russian activities and intentions in the 2016 U.S. elections.

Social media, and the big data it generates, are now so ubiquitous that it’s easy to forget they are a relatively recent phenomenon. As such, it has been hard for legislators to track how these technological developments could be used to influence the Canadian electoral process, and how they should be regulated. Big data, and its manipulation, played a significant role in both the 2016 US election and the Brexit vote in the UK earlier that year. Fake news, deliberately fabricated, edged into Facebook feeds alongside legitimate sources and was shared by unsuspecting users. Fake Twitter accounts pushed extreme views, shifting and polarizing public discourse. According to Elodie Vialle of Reporters Without Borders, false information spreads six times faster than accurate information.[1]

It is well known that AI plays a key role in the spread of disinformation. It powers social media algorithms. It can be programmed to generate content, including automated trolling, and it facilitates the micro-targeting of demographic groups on specific topics: all basic disinformation practices.

Yet what is less widely discussed is that AI can also be used as a tool to combat disinformation.

Data science can locate trolls and fraudulent accounts: algorithms can be trained to identify potential bots and unusual political material.[2] While their reach can be enormous, the actual number of perpetrators is very small, and we have the scientific ability to track down who they are. Existing hate speech laws can then be used to prosecute them.
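To illustrate roughly how such training might look, here is a minimal sketch in Python; the account features, sample values, and review threshold are all assumptions for demonstration, not a real detection model.

```python
# Toy bot classifier: illustrative only. Features and thresholds are
# assumptions, not a production detection model.
from sklearn.linear_model import LogisticRegression

# Assumed features per account:
# [posts_per_day, follower/following ratio, account_age_days, repost_fraction]
X_train = [
    [350, 0.01, 20, 0.95],   # bot-like: floods content, few followers, new
    [280, 0.05, 45, 0.90],   # bot-like
    [5, 1.20, 2000, 0.10],   # human-like
    [12, 0.80, 900, 0.25],   # human-like
]
y_train = [1, 1, 0, 0]       # 1 = suspected bot, 0 = human

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score a new account and flag it for human review above a chosen threshold.
p_bot = model.predict_proba([[300, 0.02, 30, 0.88]])[0][1]
if p_bot > 0.8:
    print(f"Flag for review (bot probability {p_bot:.2f})")
```

In practice such models would use many more signals (posting cadence, network structure, content similarity) and, crucially, human review of every flag.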

In today’s increasingly febrile global political climate, disinformation is a real and growing problem, both abroad and at home in Canada and the United States. A solution is available. Given the Canadian federal election in October 2019 and the US presidential election in 2020, proactive use of data science to counter manipulation efforts is both timely and necessary.

Links:

  • https://www.ourcommons.ca/Content/Committee/421/ETHI/Reports/RP10242267/ethirp17/ethirp17-e.pdf
  • https://www.intelligence.senate.gov/publications/assessing-russian-activities-and-intentions-recent-us-elections

References:

[1] Staff, “Artificial Intelligence and Disinformation: Examining challenges and solutions,” Modern Diplomacy, March 8, 2019, online at https://moderndiplomacy.eu/2019/03/08/artificial-intelligence-and-disinformation-examining-challenges-and-solutions/.

[2] European Parliamentary Research Service, Regulating disinformation with artificial intelligence, March 2019, online at https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624279/EPRS_STU(2019)624279_EN.pdf.

Smart Privacy Auditing – An Ontario Healthcare Case Study

IPCS Smart Privacy Auditing Seminar

On September 13, Dr. Waël Hassan was a panelist at the Innovation Procurement Case Study Seminar on Smart Privacy Auditing, hosted by the Mackenzie Innovation Institute (Mi2) and the Ontario Centres of Excellence (OCE). The seminar attracted leaders from the health care sector, the private information and technology industry, and privacy authorities. It explored the concept of innovative procurement via competitive dialogue, and demonstrated the power and benefits of using artificial intelligence to automate the auditing of all PHI accesses within a given hospital or health network.

What are the benefits of participating in an innovative procurement process, particularly competitive dialogue?

An innovative procurement partnership between Mi2, Mackenzie Health, Michael Garron Hospital, and Markham Stouffville Hospital was supported by the OCE’s REACH grant and sought to identify an innovative approach to auditing that could be applicable to the privacy challenges faced by numerous hospitals with different practices, policies, and information systems. Rather than focus on how the solution should operate, the partners collaboratively identified six outcome-based specifications the procured audit tool would be required to meet.

By identifying key priorities and specifying the outcomes a solution should achieve, Competitive Dialogue establishes a clear and mutual understanding of expectations. This can help the private sector narrow down solution options to a model best-suited for the contracting authority’s unique context. The feedback loop provided by the iterative rounds (if used) enables vendors to clarify any confusion and customize proposals to the contracting authority’s unique needs, staff workflows, and policy contexts.

Competitive Dialogue is an opportunity for transparent communication, giving vendors the chance to learn in detail what the contracting authority – in this case Mackenzie Health – needs from a solution. Because hospitals are not technology or security experts, they often struggle to accurately identify and define the solutions they need for a particular issue; a traditional procurement process is rarely ideal, since it leaves little to no room for clarification or feedback. Competitive Dialogue is more flexible, allowing for more creativity and innovative thinking during initial proposal development. Encouraging creativity, and creating a competitive environment in which vendors can bounce ideas off each other, results in higher-quality proposals and final solutions.

Mackenzie Health Case Study

Mackenzie Health employs over 450 physicians and 2,600 other staff members, processes nearly 55,000 patient medical record accesses every day, and has just one privacy officer to monitor everything. Mackenzie Health’s privacy needs far outweigh its capacity, so they turned to the private sector for an innovative solution.

Section 37(1) of PHIPA outlines the possible uses of personal health information, and these guidelines are based on the purpose underlying the activities. Because the legal framework is centred on purpose, KI Design’s approach is to explain the purpose behind each access to a given medical record. The core of this technology is an explanation-based auditing system (EBAS), designed and patented by Dr. Fabbri of Maize Analytics.

To detect unauthorized accesses, the technology identifies an intelligible connection between the patient and the employee accessing the patient’s records. AI changes the fundamental question underlying auditing tools from “who is accessing patient records without authorization?” to “for what purpose are hospital staff accessing patient records?” Asking this question helps the technology break down staff workflows and identify common and unique purposes for accessing any given medical record. Each access is then categorized as either authorized or unexplained, and unexplained accesses may be flagged as potentially unauthorized behaviour. The technology filters out authorized accesses, usually 98% to 99% of the total, so that the privacy officer can focus on the much smaller number of unexplained and flagged accesses.
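The filtering idea can be sketched roughly as follows; the log format and the “explanations” below are simplified assumptions for illustration, and the actual EBAS technology is considerably more sophisticated.

```python
# Simplified sketch of explanation-based audit filtering.
access_logs = [
    {"staff": "nurse_ava", "patient": "P001"},
    {"staff": "dr_lee",    "patient": "P002"},
    {"staff": "clerk_sam", "patient": "P003"},
]

# Known clinical or operational relationships that explain an access,
# e.g. a scheduled appointment or an attending-physician assignment.
explanations = {
    ("nurse_ava", "P001"): "scheduled appointment",
    ("dr_lee",    "P002"): "attending physician",
}

explained, flagged = [], []
for log in access_logs:
    reason = explanations.get((log["staff"], log["patient"]))
    (explained if reason else flagged).append(log)

# Typically 98-99% of accesses are explained away automatically; only
# the remainder goes to the privacy officer for manual review.
print(f"{len(flagged)} of {len(access_logs)} accesses need review:", flagged)
```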

Why is the private sector interested in health care?

Health care is an extremely complex system operated by the province and by service providers. The province specializes in governance and regulation, and the service providers specialize in medicine; neither is expert in privacy or security. Companies such as KI Design are interested in filling the expertise gap within the health care sector by working closely with health care providers and the Information & Privacy Commissioner to adapt privacy and security solutions to their working realities. There is undeniable value in having a privacy and security expert work directly with hospitals and other health service providers to refine privacy best practices and implement a privacy tool that improves privacy and security outcomes without restricting the workflows of health practitioners.

To learn more about how AI solutions improve auditing, visit https://phipa.ca/

What I Learned Managing Millennials

Someone whose daily routine includes continuously scrolling through Instagram, sipping kale smoothies, drinking Starbucks coffee, hitting the gym, and hanging out with friends, while still managing to fit in a full day of work, is most likely a Millennial.

Millennial. The four-syllable word that makes thousands of Generation Xers roll their eyes and cringe at the so-called “entitled” and “privileged” group born from the early ’80s onward.

Not all, but most Millennials share a short attention span, an obsession with social media, and a love of socializing. Although this may drive a crowd of Generation Xers to grunt angrily in agreement, from a managerial perspective these aren’t negative characteristics. In fact, they are valuable elements of a workplace.

As with all employees, being an effective manager means understanding the Millennials in the workplace. Clearly, my daily routine differs from theirs: I hardly scroll through Instagram and don’t think I could even get through an entire kale smoothie. I started to wonder: if even our daily lives are so different, how different are their expectations of and interest in the work that they’re doing?

When I discussed this with the Millennials I work with, they explained their main priorities and interests to me. I believe it’s important to integrate these into the workplace and foster an innovative environment for both them and myself, as I know I have a lot to learn from them.

From what I’ve gathered, Millennials’ priorities include: hanging out with their friends, finding a work/life balance, being passionate about the work they’re doing, and using social media to connect with people.

In my experience, these often-overlooked interests make Millennials valuable assets in the workplace. Millennials are conditioned to immediacy and will find ways to get work done quickly and efficiently, with the ability to do several things at once. Fluent in media and native to the digital world, they drive innovation in technology. With their constant posting and use of social media, Millennials are naturals in communications and marketing. They thrive on community, and naturally build cohesiveness and team spirit within a workplace.

Unlike many of us Generation Xers, Millennials aren’t especially interested in climbing the ladder or amassing money; they value these other priorities more. Some may not be interested in becoming a leader or gaining status at all, and may simply be trying out different positions for the sake of new experiences. It’s important to ensure that they are passionate about and interested in their work, and that they aren’t stuck doing repetitive, boring tasks. Some of us have spent years doing jobs for the sole purpose of getting a promotion or making money; this new generation focuses on pursuing its passions and on the present.

Most Millennials grew up in a comfortable environment, given independence from a young age rather than raised under strict authority. This translates into giving Millennial workers plenty of independence and creative freedom in the workplace. Rather than constantly correcting them or imposing strict guidelines, let them work on projects where they can implement their own ideas and strategies.

Millennials grew up with an ethical value system that Generation Xers weren’t naturally exposed to. Surrounded by ethnic diversity, planet-saving initiatives, socio-economic rallies, and an overall environment that strives for equality, Millennials are aware of the social responsibilities of the companies they work for. They balance their drive to excel at work with their ingrained ethics.

Ultimately, we all have a lot to learn from the Millennials in our workplace, and they have unique perspectives that should be heard. Acknowledge and understand the differences you have, and incorporate them into the workplace to create a challenging and thriving environment.

 

By Wael Hassan and Tessa Barclay

GDPR Responsibilities of Controllers and Processors


What are controllers and processors under the GDPR?

  • Controllers determine the purpose and means of processing personal data and are usually the collectors of data. They do not necessarily need to be located in the EU. Controllers are additionally responsible for monitoring processors’ compliance.
  • Processors are engaged to process data on behalf of the controller.

Both controllers and processors are responsible and liable for compliance under the GDPR.

Responsibilities of Controllers

The primary responsibility of controllers is to data subjects. Controllers must also demonstrate their own compliance with the GDPR and ensure data processors’ compliance. Controllers outside the EU that regularly process personal data pertaining to people within the EU should have a designated representative within the EU to help manage compliance.

 

Responsibilities of Processors

Processors are governed by a contract that addresses how data will be processed, how requests from data subjects will be fulfilled, and whether data will be transferred to any other geographical locations. The processor makes information available to the controller to demonstrate compliance and notifies the controller in the event of a breach. It is also the processor’s responsibility to ensure that authorization is given prior to engaging a sub-processor, and that all data is deleted or returned at the end of their service provision.

The GDPR introduces direct statutory obligations on processors, as well as severe sanctions for compliance failures. This is especially relevant for non-EU data processors, who need to understand that if their clients are based in the EU, they too are responsible for complying with the GDPR. Processors face the same risk of fines as controllers.

 

Required Data Protection Practices

  • Data protection by design and default

Data protection by design means integrating data protection principles into the design of the systems that manage personal data. Data protection by default means putting safeguards in place so that, by default, the processing of data is limited to what is necessary.

  • Safeguards

Generally, it is recommended to put in place practices and technologies appropriate to the level of risk. Some of the best safeguards are quite simple: for instance, appointing a data protection officer and consulting with supervisory authorities on high-risk projects. Other examples include breach notifications and data protection impact assessments (DPIAs) for high-risk projects.

  • Breach notification

Breaches must be reported within 72 hours of discovery unless there is a low risk to the rights and freedoms of the data subjects. High-risk breaches should be communicated to data subjects without delay.

  • Documentation

Companies with 250+ employees, and those that handle certain special categories of data, are required to document: contact information; the purpose of processing; the categories of data; data transfers to other countries; timelines for erasure of different categories of data; and, where possible, a description of technical and organizational security measures.
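As a rough illustration, such documentation might be kept as a structured record along the following lines; this is a sketch only, and the field names are assumptions rather than a format prescribed by the GDPR.

```python
# Hypothetical record-of-processing entry. The GDPR prescribes what must
# be documented, not this particular structure or these field names.
processing_record = {
    "controller_contact": "privacy@example.com",
    "purpose_of_processing": "customer billing",
    "categories_of_data": ["name", "address", "payment details"],
    "data_transfers": ["United States"],
    "erasure_timelines": {
        "payment details": "7 years",
        "marketing preferences": "2 years",
    },
    "security_measures": "encryption at rest, role-based access control",
}
```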

What Is The GDPR?


The GDPR is new legislation destined to replace the EU’s 1995 Data Protection Directive. The arrival of the digital age means that the way people understand and interact with data is changing rapidly. The GDPR helps to clarify individual rights in the digital age, as well as creating a “digital single market” within the EU. With the GDPR in place, it will be easier for citizens to have control over their personal data, representing a shift in power.

The underlying principle of the GDPR is that the protection of personal data is a fundamental right, and organizations that handle personal data are responsible for those rights. “Processing” data means collecting, sharing, distributing, structuring, storing, or otherwise using an individual’s data. In this relationship, there are controllers and processors. A controller determines the purpose and means of processing personal data and is usually the collector of the data. Processors are engaged to process data on behalf of the controller, but the controllers are responsible for monitoring processors’ compliance.

The GDPR affects the North American market because any organization that offers goods or services to the EU or that monitors the behaviour of people within the EU is responsible for complying with the GDPR.

There are three key principles of the regulation:

  1. Limitation of processing means that: data must be processed only for specified, explicit and legitimate purposes; data must not be further processed in ways inconsistent with the initial purposes; data should be adequate, relevant, and necessary; data should be accurate and kept up-to-date; data should be kept only as long as necessary.
  2. Informed consent refers to freely given and clearly affirmative consent that must be intelligible, easily accessible, and written in plain language. Participants have the right to withdraw consent, and services cannot be withheld on condition of consent.
  3. Lawful processing requires that at least one of the following conditions be met:
    1. Consent from the data subject
    2. Processing is necessary for a contract
    3. Processing is necessary for compliance with EU laws
    4. Processing is necessary to protect a person’s vital interests
    5. Processing in the public interest or exercise of official authority
    6. Legitimate interests of the controller or a third party that are not overridden by the data subject’s rights and freedoms

Key Terms

  • Right to be Forgotten

This concept refers to personal data being deleted when the data subject no longer wants it to be processed. The exception to this is when there is legitimate reason to retain the data, for instance, in the case of completing a contract or complying with legal obligations.

  • Informed Consent

Information is made readily available and is communicated in clear, plain language. Informed consent will especially be enforced regarding services for children.

  • Right to Data Portability

Data subjects have a right to a copy of their personal data in an appropriate format and, where possible, they can transfer that data directly from one service provider to another. For example, individuals should be able to transfer photos from one social network to another.
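A minimal sketch of what such a machine-readable export might look like follows; the schema is purely illustrative, not a mandated format.

```python
# Illustrative data-portability export: personal data serialized in a
# common machine-readable format (JSON) that another provider could import.
import json

user_data = {
    "subject_id": "u12345",
    "profile": {"name": "A. Subject", "email": "a.subject@example.com"},
    "photos": [
        {"filename": "2019-06-01.jpg", "caption": "Vacation"},
    ],
}

with open("data_export.json", "w") as f:
    json.dump(user_data, f, indent=2)
```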

  • Data Protection by Design and Default

This aspect helps protect users’ data by design, for instance by implementing technical safeguards like anonymization, pseudonymization, and encryption, as well as organizational safeguards.
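As one illustration of such a technical safeguard, here is a minimal pseudonymization sketch using keyed hashing; the key handling shown is an assumption, and a real deployment would manage keys far more carefully.

```python
# Pseudonymization sketch: replace a direct identifier with a keyed hash
# (HMAC). If the key is stored separately, records can no longer be
# attributed to a person without that additional information.
import hashlib
import hmac

SECRET_KEY = b"store-me-in-a-separate-key-vault"  # assumed managed secret

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient": pseudonymize("jane.doe@example.com"), "diagnosis": "..."}
print(record["patient"][:16], "...")
```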

  • Mandatory Data Protection Officer

A DPO fills the need for an organization to help monitor privacy and data protection. A DPO is an expert in their field, and is required if an organization’s core activities consist of regular and systematic monitoring of personal data on a large scale. This position helps ensure compliance and awareness of privacy legislation. The DPO may also monitor internal data protection activities, train staff, and conduct internal audits. If data subjects have inquiries, these will go through the DPO as well.

 

Company Response

Companies are responding to the GDPR in several ways:

  1. Stop buying and selling personal data
  2. Know where your clients live, or implement EU requirements regardless of location
  3. Prepare to respond to requests from data subjects
  4. Audit sub-contractors for compliance
  5. Reconsider cloud services

eDiscovery and Audits: The Solution to Unauthorized Access


Electronic medical records (EMRs) contain sensitive personal information that is strongly protected in many jurisdictions. In Ontario, EMRs are protected under the Personal Health Information Protection Act (PHIPA), which limits authorized access to professionals who are currently providing healthcare services to the patient or have otherwise been given consent to access the patient’s records. Despite PHIPA’s definition of and limits to authorized access, many of Ontario’s healthcare organizations (such as hospitals and clinics) operate open-access EMR database systems, meaning all healthcare staff have access to all records. Despite healthcare organizations’ responsibility to protect patients’ information from unauthorized access and distribution, the risk of unprofessional conduct is not well mitigated.

 

Unauthorized access to EMRs, colloquially termed snooping, constitutes the access and/or disclosure of private medical records without consent from the patient. Not all snooping is committed with malicious intent, and several cases in recent years have admitted curiosity or genuine concern as the reason behind the unauthorized access.

 

Regardless of the intention behind the act, snooping can have severe consequences for both the healthcare professional and the patient, even if the security breach is small in scale.

An offending healthcare professional could face loss of employment, relinquishment of their medical licence, a civil lawsuit, and a maximum fine of $50,000 for violating PHIPA. As for patients, the cases with the most severe consequences are usually smaller, isolated incidents involving social media and the malicious dissemination of private medical records. A 2014 case in Indiana offers such an example: a young woman’s positive STD test results were posted to Facebook by a former high school peer who was working as a nurse at the young woman’s local hospital. Needless to say, the reputational and emotional damage was irreversible.

 

While these cases are extremely serious, they remain relatively rare among healthcare-related lawsuits such as malpractice cases. However, the gradual increase in EMR privacy-related lawsuits, and the understanding that electronic records can be manipulated, have created a demand for reliable software tools that can efficiently identify, collect, and make accurate EMR data available to attorneys. This includes metadata such as the identities of the healthcare providers who accessed the records and the corresponding timestamps, both of which are very important for litigation. The practice of using these tools to find and freeze relevant data (meaning the data cannot be modified or deleted) is called eDiscovery.

 

Essentially, eDiscovery produces an “audit trail” from EMRs that is more accurate and reliable than simply accepting whatever records the healthcare provider’s attorney produces, as done in the past.

Using technology to sort and extract relevant data provides attorneys and healthcare organizations the advantage of being able to sift through more data faster. This is extremely useful given that the average Ontario hospital will generate approximately one million EMR access logs per week, or approximately 52 million in a given year.

 

However, challenges remain in assessing access logs and determining the purpose and validity of each. Some healthcare organizations use random audits to examine EMRs that have been accessed two or three times (or more) in a week; this method works much like a spot check, identifying anomalies and catching offenders off-guard. Another approach is running regular conditions-based audits: if a given patient’s EMR access logs meet pre-determined criteria that the healthcare organization has set, the access logs are flagged for further examination. While this approach may seem more methodical and more likely to identify questionable activity, it also tends to generate many false alarms. Both random audits and regular conditions-based searches make good use of available data analytics technology; however, both require a level of human judgement and oversight that is not easily replaced by software.
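A conditions-based audit of this kind might be sketched as follows; the criteria shown (after-hours access, a shared surname) are assumed examples, not any particular hospital’s rule set.

```python
# Illustrative conditions-based audit: flag access logs matching
# pre-set criteria for further examination.
from datetime import datetime

def after_hours(log):
    hour = datetime.fromisoformat(log["time"]).hour
    return hour < 7 or hour > 19

def same_surname(log):
    return log["staff_surname"] == log["patient_surname"]

RULES = [after_hours, same_surname]

logs = [
    {"time": "2019-03-01T02:14:00", "staff_surname": "Chen", "patient_surname": "Ali"},
    {"time": "2019-03-01T10:30:00", "staff_surname": "Roy",  "patient_surname": "Roy"},
]

flagged = [log for log in logs if any(rule(log) for rule in RULES)]
print(f"{len(flagged)} of {len(logs)} accesses flagged for review")
```

Both sample logs above would be flagged, which illustrates why such rules generate false alarms and still require human review.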

 

So, what can be done to reduce, prevent, and prosecute unauthorized access to EMRs? Ontario’s Information and Privacy Commissioner has identified several methods of minimizing the risk of snooping, including the development of a comprehensive training program on privacy responsibilities for healthcare providers, and mandatory confidentiality agreements that are required prior to healthcare providers gaining access to medical record databases. These are feasible options, but they are only as strong as each healthcare organization’s resolve to implement them. Implementing advanced analytic technology could offer a positive way forward in better understanding this behaviour and improving prosecution against offenders.

 

As software developers create new methods of working with big data and continue to push analytic software and artificial intelligence to the next level, new solutions to complex issues like EMR privacy violations will emerge. The human judgement and oversight still required by the auditing approaches described above can be reduced, if not eliminated, provided the new audit solution has sophisticated data-mining algorithms and well-integrated process triggers. KI Design’s Audit Solutions, powered by Maize, have superior data-mining and filtering abilities that not only identify suspicious behaviour but also connect employee data access with potential clinical or operational reasons for accessing the data. If an access record is flagged as particularly suspicious, the audit software triggers the risk assessment and incident response processes and provides relevant information to the healthcare organization’s privacy officer for manual review.

It is unnecessary for healthcare organizations and attorneys to remain dependent on manually sifting through EMRs and their access logs. Audit technology is more advanced and reliable than ever, and will likely play an important role in improving the eDiscovery process and leading to better outcomes in snooping lawsuits.

Artificial Intelligence and Privacy: What About Inference?

How AI impacts privacy and security implementation

Big Data analytics is transforming all industries including healthcare-based research and innovation, offering tremendous potential to organizations able to leverage their data assets. However, as a new species of data – massive in volume, velocity, variability, and variety – Big Data also creates the challenge of compliance with federal and provincial privacy laws, and with data protection best practices.

Stemming from internationally recognized Privacy Principles, data protection regulations tend to follow a specific format, focusing particularly on the collection, retention, use, and disclosure of personal information or personal health information, provided there is a specified purpose and explicit consent from the individual.

From a conceptual standpoint, the evolution of Big Data brings in a new element of analytical functions: inference. Inference – the extraction of new knowledge from parameters in mathematical models fitted to data – captures commonly-known functions such as data linking, predictive analytics, artificial intelligence (AI), and data mining. It allows analysts to create new data based on reasoning and extrapolation, which adds greater dimensionality to the information already in their possession.

From a corporate governance perspective, the addition of inference will impact how an organization complies with the above-mentioned Privacy Principles, and how it meets its legal obligations regarding consent, access, auditing, and reporting.

In terms of privacy practices, inference can be understood as both a collection and a use of data. For organizations to be compliant with data protection principles, inferences gleaned from data analysis must meet the requirements of applicable privacy laws. This means that new data created through inference must be collected and used for a specified reason and with the consent of the individual; it must be accurate; the data collector must disclose all collected information on an individual should they request it; and the data’s use, disclosure, and retention must be limited to the specified reason for collection.

If inference is used to generate new data outside the original data’s specified purpose, the collecting organization will not be complying with privacy laws and could become subject to individual complaints and auditing by the Office of the Privacy Commissioner. So while inference can seem like the dawn of a new age in Big Data analytics, it is still restricted by privacy laws, and must be used only within existing data collection and use principles.
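To make the constraint concrete, here is a minimal sketch of a purpose-and-consent gate for inferred data; the consent model and field names are assumptions for illustration.

```python
# Sketch: a derived attribute may only be created if its purpose matches
# what the individual consented to at collection time.
consented_purposes = {"u1": {"billing", "service improvement"}}

def create_inferred_attribute(user_id, attribute, purpose):
    if purpose not in consented_purposes.get(user_id, set()):
        raise PermissionError(
            f"Cannot infer '{attribute}': no consent for purpose '{purpose}'"
        )
    return {"user": user_id, "attribute": attribute, "purpose": purpose}

# Allowed: the inference stays within an original, consented purpose.
create_inferred_attribute("u1", "predicted_churn", "service improvement")

# Blocked: the inference would amount to a new, unconsented collection.
try:
    create_inferred_attribute("u1", "health_risk_score", "insurance marketing")
except PermissionError as e:
    print(e)
```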

 

Implementing PPM Successfully


The web is full of articles on PPM tools and on how difficult a PPM implementation can be. That said, it is well established that companies using PPM tools generally have a better vision of their future markets and of how to manage their investments.

PPM tools are good for business

If your organization is in the market for a PPM tool, you are probably already looking at the latest industry research reports; at the same time, you may be searching the web.

You may be asking one of two questions (or both):

Q1: “How do I avoid failure?” This is a good question if your approach is risk mitigation.

In response, we say that implementing a PPM solution is like starting a new business: you should have a good idea of how your business will fit into the market, especially at the start-up phase. Similarly, your organization should have a clear sense of how a PPM solution will fit into its future.

Q2: “I am transforming my organization to improve decision-making and need a technology to fit this process. How can PPM tools help?” This, to my mind, is the better question.

There is a wide range of PPM solutions and each will fit each company differently, so it is important for companies to approach PPM implementation with an open mind. Having gone through several implementations, some successful and some not, here are some rules of thumb:

Tier 1 Project and Portfolio Management (PPM) solutions do not fail, but their implementations can.

  1. Organizations that think of PPM as a business transformation technology do better than those that treat it as simply another tool.
  2. Organizations that balance their focus between tool sophistication and adoption are the most successful.
  3. Organizations that have in-house PPM business champions do better than those that rely on external SMEs specializing in select tools.