The cohesive suite of security and productivity solutions provided by an E5 licence can significantly streamline your technological landscape, doing away with a number of on-premises and SaaS tools.
While many organisations opt for the lower-cost E3 licence, they may find this soon requires a supplementary selection of single-solution tools from alternate vendors to patch gaps in its capabilities.
Too many solutions mean confusion, an often-disjointed workflow, potential overlap and overspend, and, crucially, increased security risk.
By consolidating your collaboration, productivity, automation, and security solutions into a single trusted vendor platform, IT management becomes simplified, redundant solutions can be axed, and ROI can be better measured.
The Microsoft E5 Security Components
So you get everything from a single source with no finger pointing. What could go wrong?
Plenty, according to those who still think of Microsoft as an evil empire.
Microsoft is making a compelling case to businesses to consolidate under the Microsoft umbrella of products. The ease of use and the financial motives just make too much sense. Now, do those customers get a great IAM experience with that? Meh…kinda. Entra SSO is a solid product, Active Directory/Entra ID is solid, MIM…well…we don’t talk about MIM.
Microsoft Identity Manager
Well, I will talk about MIM, or Microsoft Identity Manager.
Microsoft Identity Manager (MIM) 2016 builds on the identity and access management capabilities of Forefront Identity Manager (FIM) 2010 and predecessor technologies. MIM provides integration with heterogeneous platforms across the datacenter, including on-premises HR systems, directories, and databases.
MIM augments Microsoft Entra cloud-hosted services by enabling the organization to have the right users in Active Directory for on-premises apps. Microsoft Entra Connect can then make those users available in Microsoft Entra ID for Microsoft 365 and cloud-hosted apps.
But what of the argument that it’s better to get everything from one vendor? Other companies will tout their best-in-class products. While you’ll end up with a possibly disjointed solution, the work will get done more accurately.
In the end, it’s up to you. Do you want a single solution that is “good enough” and is already pre-made, or do you want to take the best solution from the best-in-class vendors and roll your own?
AAL1 (some confidence). AAL1, in the words of NIST, “provides some assurance.” Single-factor authentication is OK, though multi-factor authentication can also be used. A wide range of authentication methods, down to simple memorized secrets (passwords), satisfies the requirements of AAL1. In short, AAL1 isn’t exactly a “nothingburger” as I characterized IAL1, but AAL1 doesn’t provide a ton of assurance.
AAL2 (high confidence). AAL2 increases the assurance by requiring “two distinct authentication factors,” not just one. There are specific requirements regarding the authentication factors you can use. And the security must conform to the “moderate” security level, such as the moderate security level in FedRAMP. So AAL2 is satisfactory for a lot of organizations…but not all of them.
AAL3 (very high confidence). AAL3 is the highest authenticator assurance level. It “is based on proof of possession of a key through a cryptographic protocol.” Of course, two distinct authentication factors are required, including “a hardware-based authenticator and an authenticator that provides verifier impersonation resistance — the same device MAY fulfill both these requirements.”
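As a toy illustration of what “proof of possession of a key through a cryptographic protocol” means in principle: the verifier issues a fresh challenge, and the authenticator answers with a value it can only compute if it holds the key. This sketch is mine, not anything from 800-63B, and a bare HMAC exchange is nowhere near an actual AAL3 hardware-based, verifier-impersonation-resistant authenticator.

```python
import hashlib
import hmac
import os


def issue_challenge() -> bytes:
    # The verifier picks a fresh random nonce for each attempt,
    # so a captured response can't be replayed later.
    return os.urandom(32)


def respond(shared_key: bytes, challenge: bytes) -> bytes:
    # The authenticator proves it holds the key without revealing it:
    # only a holder of shared_key can compute this MAC over the nonce.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()


def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    # The verifier recomputes the expected response and compares in
    # constant time to avoid timing side channels.
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

An attacker who never held the key cannot produce a valid response, which is the “proof of possession” idea in miniature.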
This is of course a very high overview, and there are a lot of…um…minutiae that go into each of these definitions. If you’re interested in that further detail, please read section 4 of NIST Special Publication 800-63B for yourself.
Which authenticator assurance level should you use?
NIST has provided a handy dandy AAL decision flowchart in section 6.2 of NIST Special Publication 800-63-3, similar to the IAL decision flowchart in section 6.1 that I reproduced earlier. If you go through the flowchart, you can decide whether you need AAL1, AAL2, or the very high AAL3.
One of the key questions is the question flagged as 2, “Are you making personal data accessible?” The answer to this question in the flowchart moves you between AAL2 (if personal data is made accessible) and AAL1 (if it isn’t).
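The core of that flowchart logic can be sketched in a few lines. This is a deliberate oversimplification: the real flowchart in section 6.2 asks more questions than the two modeled here, and the function and parameter names are my own.

```python
def choose_aal(high_risk_of_harm: bool,
               personal_data_accessible: bool) -> str:
    """Simplified sketch of the NIST SP 800-63-3 section 6.2 flowchart.

    high_risk_of_harm stands in for the flowchart's questions about
    serious financial loss, personal safety, and civil or criminal
    violations; personal_data_accessible is question 2 discussed above.
    """
    if high_risk_of_harm:
        return "AAL3"  # very high confidence required
    if personal_data_accessible:
        return "AAL2"  # high confidence: personal data is made accessible
    return "AAL1"      # some confidence is enough
```

For example, a low-risk service that nonetheless makes personal data accessible would land on AAL2 under this simplified model.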
So what?
Do the different authenticator assurance levels provide any true benefits, or are they just items in a government agency’s technical check-off list?
Perhaps the better question to ask is this: what happens if the WRONG person obtains access to the data?
Could the fraudster cause financial loss to a government agency?
Threaten personal safety?
Commit civil or criminal violations?
Or, most frightening to agency heads who could be fired at any time, could the fraudster damage an agency’s reputation?
If some or all of these are true, then a high authenticator assurance level is VERY beneficial.
The 438 U.S. federal agencies (as of today) probably have over 439 different security requirements. When you add state and local agencies to the list, security compliance becomes a mind-numbing exercise.
For example, the U.S. Federal Bureau of Investigation has its Criminal Justice Information Systems Security Policy (version 5.9 is here). This not only applies to the FBI, but to any government agency or private organization that interfaces to the relevant FBI systems.
The Federal Risk and Authorization Management Program (FedRAMP®) was established in 2011 to provide a cost-effective, risk-based approach for the adoption and use of cloud services by the federal government. FedRAMP empowers agencies to use modern cloud technologies, with an emphasis on security and protection of federal information. In December 2022, the FedRAMP Authorization Act was signed as part of the FY23 National Defense Authorization Act (NDAA). The Act codifies the FedRAMP program as the authoritative standardized approach to security assessment and authorization for cloud computing products and services that process unclassified federal information.
Note the critical word “unclassified.” So FedRAMP doesn’t cover EVERYTHING. But it does cover enough to allow federal agencies to move away from huge on-premises server rooms and enjoy the same SaaS advantages that private entities enjoy.
It turns out that a number of other FedRAMP-authorized products are partially dependent upon Microsoft Azure Government’s FedRAMP authorization, so continued maintenance of this authorization is essential to Microsoft, a number of other vendors, and all the agencies that require secure cloud solutions.
They can only hope that the GSA Inspector General doesn’t find fault with THEM.
Is FedRAMP compliance worth it?
But assuming that doesn’t happen, is it worthwhile for vendors to pursue FedRAMP compliance?
If you are a company with a cloud service, there are likely quite a few questions you are asking yourself about your pursuits in the Federal market. When will the upward trajectory of cloud adoption begin? What agency will be the next to migrate to the cloud? What technologies will be migrated? As you move forward with your business development strategy, you will also question whether FedRAMP compliance is something you should pursue.
The answer to the last question is simple: Yes. If you want the Federal Government to purchase your cloud service offering you will, sooner or later, have to successfully navigate the FedRAMP process.
The Digital Trust & Safety Partnership (DTSP) consists of “leading technology companies,” including Apple, Google, Meta (parent of Facebook, Instagram, and WhatsApp), Microsoft (and its LinkedIn subsidiary), TikTok, and others.
DTSP appreciates and shares Ofcom’s view that there is no one-size-fits-all approach to trust and safety and to protecting people online. We agree that size is not the only factor that should be considered, and our assessment methodology, the Safe Framework, uses a tailoring framework that combines objective measures of organizational size and scale for the product or service in scope of assessment, as well as risk factors.
We’ll get to the “Safe Framework” later. DTSP continues:
Overly prescriptive codes may have unintended effects: Although there is significant overlap between the content of the DTSP Best Practices Framework and the proposed Illegal Content Codes of Practice, the level of prescription in the codes, their status as a safe harbor, and the burden of documenting alternative approaches will discourage services from using other measures that might be more effective. Our framework allows companies to use whatever combination of practices most effectively fulfills their overarching commitments to product development, governance, enforcement, improvement, and transparency. This helps ensure that our practices can evolve in the face of new risks and new technologies.
But remember that the UK’s neighbors in the EU recently prescribed that USB-C cables are the way to go. This not only forced DTSP member Apple to abandon the Lightning cable worldwide, but it affects Google and others because there will be no efforts to come up with better cables. Who wants to fight the bureaucratic battle with Brussels? Or alternatively, we will have the advanced “world” versions of cables and the deprecated “EU” standards-compliant cables.
So forget Ofcom’s so-called overbearing approach and just adopt the Safe Framework. Big tech will take care of everything, including all those age assurance issues.
Incorporating each characteristic comes with trade-offs, and there is no one-size-fits-all solution. Highly accurate age assurance methods may depend on collection of new personal data such as facial imagery or government-issued ID. Some methods that may be economical may have the consequence of creating inequities among the user base. And each service and even feature may present a different risk profile for younger users; for example, features that are designed to facilitate users meeting in real life pose a very different set of risks than services that provide access to different types of content….
Instead of a single approach, we acknowledge that appropriate age assurance will vary among services, based on an assessment of the risks and benefits of a given context. A single service may also use different approaches for different aspects or features of the service, taking a multi-layered approach.
Why did I mention the “future implementation” of the UK Online Safety Act? Because the passage of the UK Online Safety Act is just the FIRST step in a long process. Ofcom still has to figure out how to implement the Act.
Ofcom started to work on this on November 9, but it’s going to take many months to finalize—I mean finalise things. This is the UK Online Safety Act, after all.
This is the first of four major consultations that Ofcom, as regulator of the new Online Safety Act, will publish as part of our work to establish the new regulations over the next 18 months.
It focuses on our proposals for how internet services that enable the sharing of user-generated content (‘user-to-user services’) and search services should approach their new duties relating to illegal content.
On November 9 Ofcom published a slew of summary and detailed documents. Here’s a brief excerpt from the overview.
Mae’r ddogfen hon yn rhoi crynodeb lefel uchel o bob pennod o’n hymgynghoriad ar niwed anghyfreithlon i helpu rhanddeiliaid i ddarllen a defnyddio ein dogfen ymgynghori. Mae manylion llawn ein cynigion a’r sail resymegol sylfaenol, yn ogystal â chwestiynau ymgynghori manwl, wedi’u nodi yn y ddogfen lawn. Dyma’r cyntaf o nifer o ymgyngoriadau y byddwn yn eu cyhoeddi o dan y Ddeddf Diogelwch Ar-lein. Mae ein strategaeth a’n map rheoleiddio llawn ar gael ar ein gwefan.
Oops, I seem to have quoted from the Welsh version. Maybe you’ll have better luck reading the English version.
This document sets out a high-level summary of each chapter of our illegal harms consultation to help stakeholders navigate and engage with our consultation document. The full detail of our proposals and the underlying rationale, as well as detailed consultation questions, are set out in the full document. This is the first of several consultations we will be publishing under the Online Safety Act. Our full regulatory roadmap and strategy is available on our website.
And if you need help telling your firm’s UK Online Safety Act story, Bredemarket can help. (Unless the final content needs to be in Welsh.) Click below!
Having passed, eventually, through the UK’s two houses of Parliament, the bill received royal assent (October 26)….
[A]dded in (to the Act) is a highly divisive requirement for messaging platforms to scan users’ messages for illegal material, such as child sexual abuse material, which tech companies and privacy campaigners say is an unwarranted attack on encryption.
This not only opens up issues regarding encryption and privacy, but also specific identity technologies such as age verification and age estimation.
This post looks at three types of firms that are affected by the UK Online Safety Act, the stories they are telling, and the stories they may need to tell in the future. What is YOUR firm’s Online Safety Act-related story?
What three types of firms are affected by the UK Online Safety Act?
As of now I have been unable to locate a full version of the final Act, but presumably the provisions from this July 2023 version (PDF) have only undergone minor tweaks.
Among other things, this version discusses “User identity verification” in section 65, “Category 1 service” in section 96(10)(a), “United Kingdom user” in section 228(1), and a multitude of other terms that affect how companies will conduct business under the Act.
I am focusing on three different types of companies:
Technology services (such as Yoti) that provide identity verification, including but not limited to age verification and age estimation.
User-to-user services (such as WhatsApp) that provide encrypted messages.
User-to-user services (such as Wikipedia) that allow users (including United Kingdom users) to contribute content.
What types of stories will these firms have to tell, now that the Act is law?
For ALL services, the story will vary as Ofcom decides how to implement the Act, but we are already seeing the stories from identity verification services. Here is what Yoti stated after the Act became law:
We have a range of age assurance solutions which allow platforms to know the age of users, without collecting vast amounts of personal information. These include:
Age estimation: a user’s age is estimated from a live facial image. They do not need to use identity documents or share any personal information. As soon as their age is estimated, their image is deleted – protecting their privacy at all times. Facial age estimation is 99% accurate and works fairly across all skin tones and ages.
Digital ID app: a free app which allows users to verify their age and identity using a government-issued identity document. Once verified, users can use the app to share specific information – they could just share their age or an ‘over 18’ proof of age.
MailOnline has approached WhatsApp’s parent company Meta for comment now that the Bill has received Royal Assent, but the firm has so far refused to comment.
[T]o comply with the new law, the platform says it would be forced to weaken its security, which would not only undermine the privacy of WhatsApp messages in the UK but also for every user worldwide.
‘Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98 per cent of users,’ Mr Cathcart has previously said.
Companies, from Big Tech down to smaller platforms and messaging apps, will need to comply with a long list of new requirements, starting with age verification for their users. (Wikipedia, the eighth-most-visited website in the UK, has said it won’t be able to comply with the rule because it violates the Wikimedia Foundation’s principles on collecting data about its users.)
All of these firms have shared their stories either before or after the Act became law, and those stories will change depending upon what Ofcom decides.
Approximately 2,700 years ago, the Greek poet Hesiod is recorded as saying “moderation is best in all things.” This applies to government regulations, including encryption and age verification regulations. As the United Kingdom’s House of Lords works through drafts of its Online Safety Bill, interested parties are seeking to influence the level of regulation.
In Allan’s assessment, he wondered whether the mandated encryption and age verification regulations would apply to all services, or just critical services.
Allan considered a number of services, but I’m just going to home in on two of them: WhatsApp and Wikipedia.
The Online Safety Bill and WhatsApp
WhatsApp is owned by a large American company called Meta, which causes two problems for regulators in the United Kingdom (and in Europe):
Meta is a large company.
Meta is an American company.
WhatsApp itself causes another problem for UK regulators:
WhatsApp encrypts messages.
Because of these three truths, UK regulators are not necessarily inclined to play nice with WhatsApp, which may affect whether WhatsApp will be required to comply with the Online Safety Bill’s regulations.
Allan explains the issue:
One of the powers the Bill gives to OFCOM (the UK Office of Communications) is the ability to order services to deploy specific technologies to detect terrorist and child sexual exploitation and abuse content….
But there may be cases where a provider believes that the technology it is being ordered to deploy would break essential functionality of its service and so would prefer to leave the UK rather than accept compliance with the order as a condition of remaining….
If OFCOM does issue this kind of order then we should expect to see some encrypted services leave the UK market, potentially including very popular ones like WhatsApp and iMessage.
Speaking during a UK visit in which he will meet legislators to discuss the government’s flagship internet regulation, Will Cathcart, Meta’s head of WhatsApp, described the bill as the most concerning piece of legislation currently being discussed in the western world.
He said: “It’s a remarkable thing to think about. There isn’t a way to change it in just one part of the world. Some countries have chosen to block it: that’s the reality of shipping a secure product. We’ve recently been blocked in Iran, for example. But we’ve never seen a liberal democracy do that.
“The reality is, our users all around the world want security,” said Cathcart. “Ninety-eight per cent of our users are outside the UK. They do not want us to lower the security of the product, and just as a straightforward matter, it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98% of users.”
In passing, the March Guardian article noted that WhatsApp requires UK users to be 16 years old. This doesn’t appear to be an issue for Meta, but could be an issue for another very popular online service.
The Online Safety Bill and Wikipedia
So how does the Online Safety Bill affect Wikipedia?
It depends on how the Online Safety Bill is implemented via the rulemaking process.
As in other countries, the true effects of legislation aren’t apparent until the government writes the rules that implement the legislation. It’s possible that the rulemaking will carve out an exemption allowing Wikipedia to NOT enforce age verification. Or it’s possible that Wikipedia will be mandated to enforce age verification for its writers.
If they do not (carve out exemptions) then there could be real challenges for the continued operation of some valuable services in the UK given what we know about the requirements in the Bill and the operating principles of services like Wikipedia.
For example, it would be entirely inconsistent with Wikipedia’s privacy principles to start collecting additional data about the age of their users and yet this is what will be expected from regulated services more generally.
Left unsaid is the same issue that affects encryption: age verification for Wikipedia may be required in the United Kingdom, but may not be required for other countries.
(Wales) used the example of Wikipedia, in which none of its 700 staff or contractors plays a role in content or in moderation.
Instead, the organisation relies on its global community to make democratic decisions on content moderation, and have contentious discussions in public.
By contrast, the “feudal” approach sees major platforms make decisions centrally, erratically, inconsistently, often using automation, and in secret.
By regulating all social media under the assumption that it’s all exactly like Facebook and Twitter, Wales said that authorities would impose rules on upstart competitors that force them into that same model.
One common thread between these two cases is that implementation of the regulations results in a privacy threat to the affected individuals.
For WhatsApp users, the privacy threat is obvious. If WhatsApp is forced to fully or partially disable encryption, or is forced to use an encryption scheme that the UK Government could break, then the privacy of every message (including messages between people outside the UK) would be threatened.
For Wikipedia users, anyone contributing to the site would need to undergo substantial identity verification so that the UK Government would know the ages of Wikipedia contributors.
This is yet another example of different government agencies working at cross purposes with each other, as the “catch the pornographers” bureaucrats battle with the “preserve privacy” advocates.
Meta, Wikipedia, and other firms would like the legislation to explicitly carve out exemptions for their firms and services. Opponents say that legislative carve outs aren’t necessary, because no one would ever want to regulate Wikipedia.
Yeah, and the U.S. Social Security Number isn’t an identification number either. (Not true.)
I learned some fun facts during Eren Cello’s presentation to the Greater Ontario Business Council this morning, and filed those in my brain along with some other facts that I have collected over the years.
Cello is the Director of Marketing and Communications for Ontario International Airport in Ontario, California. Which, incidentally, is not in Canada.
Ontario International Airport in the 1980s and 1990s
I first became aware of Ontario International Airport in October 1983, when I flew in from Portland, Oregon for a job interview. Back in those days, you didn’t walk from the airplane straight into the terminal. Instead, you walked to a flight of stairs, went down the stairs, then walked across the tarmac to enter the terminal.
As Ontario and the surrounding area grew over the years, the then-owner of Ontario International Airport (Los Angeles World Airports) decided that an ambitious expansion of the airport was in order, including modern, multi-level terminals with check-in and baggage claim on the first floor, and the gates and shops on the second floor. Instead of renovating the existing terminal, LAWA decided to build two brand new terminals. These terminals were opened in 1998 and were designated “Terminal 2” and “Terminal 4.” As soon as traffic increased to the required level, LAWA would go ahead and build Terminal 3 between the two terminals.
And the old terminal, now “Terminal 1,” was closed.
Ontario International Airport Terminal 1 as of September 2021, 20 years after airport traffic changed forever.
It sounded like a sensible design and a sensible plan. What could go wrong?
Ontario International Airport in the 2000s and 2010s
Well, three years after Terminals 2 and 4 opened, 9/11 happened. This had two immediate effects.
First, the anticipated increase in passenger traffic needed to open Terminal 3 didn’t happen.
There were other alleged reasons for this which eventually led to the separation of Ontario International Airport from LAWA, but those are beyond the scope of this post. I wrote about them in a personal blog at the time; here’s an example.
Second, increased security meant that the second floors of Terminals 2 and 4 were accessible to passengers only.
The days of walking to the gate to send off departing passengers and greet arriving ones were gone forever.
And for all of those businesses that were located on the second floors of the two terminals, their customer base was cut dramatically, since non-ticketed individuals were confined to the first floors of the terminals. Until recently, those first floors only included the random vending machine to serve visitors. Only now is the situation starting to improve.
According to Cello, Ontario International Airport now serves 11 passenger airlines with nonstop flights to destinations in the United States, Mexico, Central America, and Asia.
The second most fascinating fun fact
But of all the fun facts I learned today, the second most fascinating fun fact was the reason why the international airlines are based in Terminal 2 rather than Terminal 4. No, it’s not because Southwest has so many flights in Terminal 4 that there is no room for anyone else. Actually, parts of Terminal 4 are closed; if you see a film with someone at Gate 412, you know the film is staged. See 15:08 of this video.
The reason why the international airlines are based in Terminal 2 is because that terminal is the only one designed for the large wide-body jets that go to international destinations.
Southwest Airlines, of course, has a different operating model that doesn’t need a lot of wide-body jets.
International services in the future and in the past
Incidentally, there are both short-term and long-term plans to improve the facilities for international passengers, who currently can depart from Terminal 2 but have to arrive at a completely separate “international arrivals terminal” and go through security there.
And if you’re wondering why Ontario International Airport doesn’t have optimum service for international passengers, the “international” in the airport’s designation merely means that there is at least one existing flight to an international destination. For Ontario, trans-Pacific cargo flights existed back in the 1940s, and the first passenger flight from an international destination occurred (according to Wikipedia) on May 18, 1946, when a Pacific Overseas Airlines flight arrived from Shanghai. (This was the Pacific Overseas Airlines based in Ontario, California, not the Pacific Overseas Airlines in Siam. The Ontario company appears to have only been in existence for a year or so.)
Of course, back in 1946, international passengers didn’t have great expectations. Leaving the plane by going down a flight of stairs was the normal mode of operations; none of this walking from the airplane straight into the airport building.
The Beatles arrive at the former Idlewild Airport on February 7, 1964. Note the stairway in the background. (United Press International, photographer unknown; public domain, via the U.S. Library of Congress Prints and Photographs division, digital ID cph.3c11094. https://commons.wikimedia.org/w/index.php?curid=4532407)
The MOST fascinating fun fact
Oh, and in case you’re wondering why the wide-body jet service is only the second most fascinating fun fact, I learned something else today.
The “Paw Squad” at Ontario International Airport has their own trading cards!
But what about the activities of the hijackers on that day, and in the months preceding that day?
All of this was examined by the 9/11 Commission. As a result of its investigation, this body made significant recommendations, some of which have taken nearly two decades to implement, assuming they ARE implemented as (re)scheduled.
Janice Kephart was border counsel to the 9/11 Commission, and has been involved in homeland security ever since that time. She is currently CEO and Owner of Identity Strategy Partners.
As the 20th anniversary of 9/11 approaches, Kephart has released a documentary. As she explains, the documentary contains a wealth of information from the 9/11 Commission’s investigation of the hijackers, much of which was never officially released. Her hope:
If we are never to forget, we must educate. That is the purpose of this documentary. It is history, it is legacy, from the person who knows the details of the hijacker’s border story and has continued to live it for the past 20 years. I hope it resonates and educates.
When listening to Kephart’s documentary, keep in mind how much our world has changed since 9/11. Yes, you went through a security screening before you boarded a plane, but it was nothing like the security screenings that we’ve gotten used to in the last 20 years. Before 9/11, you could walk all the way up to the gate to send off departing passengers or greet arriving ones. And identity documents were not usually cross-checked against biometric databases to make sure that applicants were telling the truth.
I personally was not as familiar with the stories of the hijackers as I was with the stories of Bush and Cheney. The documentary provides a wealth of detail on the hijackers. (Helpful hint: don’t be afraid to pause the video when necessary. There’s a lot of visual information to absorb.)
Toward the end of the documentary, Kephart concentrates on Mohamed Atta’s return to the U.S. in January 2001, when his tourist visa had already expired and his student visa application was still pending. Kephart notes that Atta shouldn’t have been allowed back into the country, but that he was let in anyway. Atta’s January 2001 entry is discussed in detail in a separate report (see section III.B).
Kephart wonders what might have happened if Mohamed Atta had been denied re-entry into the United States in January 2001 because of the visa irregularities. Since Atta was the ringleader and the driving force behind the attack, would the denial of entry have delayed or even terminated the 9/11 attack plans?