Tuesday, August 21, 2012

 

XACML and the SDLC (Development): Part Two

This is a continuing conversation with James McGovern, lead Enterprise Architect for HP Enterprise Services, whose focus is providing bespoke enterprise applications to the insurance vertical, and Gerry Gebel, former Gartner analyst and now US lead for Axiomatics. The conversation to date has been about how entitlements should be conceptualized along the SDLC (part 1). The topic we cover in this dialog centers on the concerns that arise after IT architects have completed the high-level architecture and hand off to development teams. Gerry's colleague, Felix Gaehtgens, also provided valuable input to the discussion.

JM: Generally speaking, the need for entitlements management tends to be on the radar of savvy information security professionals who realize they need to invest more time in protecting enterprise applications and the data they hold than in simply twiddling with firewalls, SSL and audit policies that check whether a third party has a clean desk policy and whether their number two pencils are sharpened. When security people know nothing about software development and software development people don't know anything about security, bad things can happen. Today's conversation is a small attempt at connecting these two concerns. Are you game?

GG: Definitely. I also see a disproportionate amount of time and budget dedicated to security apparatus that does not address the specific security, business or compliance rules that an enterprise must enforce. To do that, you need to address security and access control concerns within the business application directly.

JM: A developer has received the mockups for a user interface from the graphics team and now has to turn them into code using JSPs and Servlets. In this particular tier, how should they incorporate entitlements into the pages, and how can they do it en masse if they have hundreds of pages to develop?

FG: That’s an excellent question. Access control can and should happen on multiple layers. As you mention a user interface, that is a good point to control access to individual user interface components. For example: a button might start a particular transaction. Is this user authorized to carry out that transaction? If not, then the button should perhaps not be displayed. We can even think of fine-grained access control here. Suppose you are displaying a list of customer accounts to a user. What details should be visible? Should you perhaps hide some columns?

When we do access control in a holistic manner, we obviously cannot stop at the presentation layer. You mentioned servlets here. A servlet operation is another type of action that can be authorized. May this function be executed on this servlet by this user in this particular context? That, again, is a good question. Let's assume the user is authorized. What happens then? The servlet probably does some things, perhaps retrieving some data, perhaps kicking off a call to some back-end service. As the servlet does its thing, there are other steps that would need to be authorized within the execution code of the running servlet. None of this is actually new. If we look at existing code, we see a lot of "if-thens" that check whether something is allowed to happen. What architects should be vigilant about is that having all these "if-thens" causes problems down the line. What if the business policies change? What if new regulations come into force? How can you actually audit what is happening? Because of this, it is important to consider moving access control to a separate layer and externalizing authorization.
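
To make that point concrete, here is a minimal servlet sketch in which the hard-coded "if-thens" are replaced by a call to an externalized decision point. The AuthorizationService interface is a hypothetical placeholder for whatever PEP API your XACML product exposes, not a specific vendor API.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical PEP-style interface; in practice this would wrap whatever
// XACML request/response API your authorization product provides.
interface AuthorizationService {
    boolean isAuthorized(String subject, String action, String resource);
}

public class AccountServlet extends HttpServlet {

    private AuthorizationService authz; // obtained in init(), omitted here

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String user = req.getRemoteUser();
        String accountId = req.getParameter("accountId");

        // Instead of a hard-coded "if (isAdmin(user)) ..." check,
        // ask the externalized decision point.
        if (!authz.isAuthorized(user, "view", "account:" + accountId)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // ... render the account details (and only the columns this user may see) ...
    }
}
```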

JM: Developers will also develop reusable web services whenever possible that can be leveraged not only by their enterprise application but others as well. How should they think about incorporating entitlements into a service-oriented architecture?

FG: Hooking entitlements into a service-oriented architecture is actually quite painless. The easiest way – without modifying code – would be to use interceptors that check whether a particular transaction is authorized. This also makes the services simpler because authorization is moved into its own layer.
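
As a rough illustration of such an interceptor, a plain servlet filter can front a set of web services and consult the authorization layer before any service code runs. AuthorizationService here is the same hypothetical PEP interface used in the servlet sketch above.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Interceptor-style enforcement for web services: the filter consults the
// authorization layer before the request ever reaches the service code.
public class EntitlementFilter implements Filter {

    private AuthorizationService authz;

    public void init(FilterConfig config) throws ServletException {
        // obtain or construct the PEP client here
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;

        // Map the HTTP verb and path onto an action/resource pair the PDP understands.
        boolean permitted = authz.isAuthorized(
                req.getRemoteUser(), req.getMethod(), req.getRequestURI());

        if (!permitted) {
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(request, response); // the service itself stays free of authorization code
    }

    public void destroy() {
    }
}
```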

JM: There are a variety of ways to develop web-based applications, using frameworks such as Spring, Struts, and Django, and each of them comes with some sort of security hook functionality. How do I configure this to work with entitlements?

FG: These frameworks support authorization to a certain degree. Unfortunately, though, that authorization is typically quite coarse-grained. In Spring, for example, you can authorize access to a class. But if this class implements a lot of logic by itself, Spring doesn't help you do these "micro-authorizations", or fine-grained authorization. So it's likely going to be a lot of "if-thens" within those classes. The best approach would be to externalize both the coarse-grained and the fine-grained authorizations. But if for any reason that is not practical, then the coarse-grained authorization can already be done through the framework by talking to an externalized authorization layer, such as a XACML policy decision point (PDP).
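
For example, in Spring Security one way to wire this up is a custom PermissionEvaluator that forwards decisions to a PDP. This is only a sketch: the PdpClient interface below is a hypothetical wrapper for a XACML request/response API and is not part of Spring.

```java
import java.io.Serializable;
import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.core.Authentication;

// Sketch of a Spring Security hook that forwards fine-grained checks to an
// external PDP. PdpClient is a hypothetical wrapper around a XACML
// request/response API; it is not part of Spring.
public class XacmlPermissionEvaluator implements PermissionEvaluator {

    public interface PdpClient {
        boolean decide(String subject, String action, String resource);
    }

    private final PdpClient pdp;

    public XacmlPermissionEvaluator(PdpClient pdp) {
        this.pdp = pdp;
    }

    public boolean hasPermission(Authentication authentication,
                                 Object targetDomainObject, Object permission) {
        // Translate Spring's notions into subject/action/resource attributes.
        return pdp.decide(authentication.getName(),
                          String.valueOf(permission),
                          String.valueOf(targetDomainObject));
    }

    public boolean hasPermission(Authentication authentication, Serializable targetId,
                                 String targetType, Object permission) {
        return pdp.decide(authentication.getName(),
                          String.valueOf(permission),
                          targetType + ":" + targetId);
    }
}
```

Once the evaluator is registered with Spring Security's method-security expression handler, an annotation such as @PreAuthorize("hasPermission(#claim, 'approve')") routes the decision to the PDP instead of an in-code check, while the coarse-grained URL rules can stay in the framework configuration.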

JM: Being an Enterprise Architect who codes and knows security, I have observed throughout my career that many enterprise applications from a code perspective tend to centralize authentication but spread authorization in almost every module. What guidance do you have for both new and old applications in this regard?

FG: For new code, you have the option of externalizing authorization from the start. There are several ways to do this. Aspect-oriented programming can help automate some of it. You can also implement your own permissions-checker interface and then hook that into either a local implementation or an externalized XACML authorization service at run-time, which gives you full flexibility. There is no perfect answer for all cases, as it really depends on how you are writing your code. Wherever in your code you would otherwise hard-code the "if-thens" that check whether something should be authorized or not, you should be calling an authorization function. If you can create certain "control points", you make your life easier. If you have some other points where you need to authorize, use simple APIs to make a call-out to an authorization service.
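
For illustration, here is a minimal sketch of such a permissions-checker seam with one local and one externalized binding; all of the names are hypothetical rather than taken from any particular product.

```java
// A thin seam the application codes against; which implementation is used is
// decided at run-time (configuration or dependency injection). Everything here
// is a sketch -- the names are not from any particular product.
public interface PermissionChecker {
    boolean isPermitted(String subject, String action, String resource);
}

// Simple in-process implementation, handy for unit tests or early development.
class LocalPermissionChecker implements PermissionChecker {
    public boolean isPermitted(String subject, String action, String resource) {
        // e.g., consult a locally cached rule table; trivially permissive here
        return "read".equals(action);
    }
}

// Implementation that forwards the question to an externalized XACML
// authorization service through some remote client (hypothetical interface).
class XacmlPermissionChecker implements PermissionChecker {
    interface RemotePdp {
        boolean decide(String subject, String action, String resource);
    }

    private final RemotePdp pdp;

    XacmlPermissionChecker(RemotePdp pdp) {
        this.pdp = pdp;
    }

    public boolean isPermitted(String subject, String action, String resource) {
        return pdp.decide(subject, action, resource);
    }
}
```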

For old applications, you will need to check where you can “hook in” the authorization. Perhaps there are some control points where you can install interceptors, inject dependencies, or wrap existing classes. If this is not possible, you might be able to intercept data flows coming in or out of a module, and do your authorization there.
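
Where the legacy component is interface-based, one low-risk way to "wrap existing classes" is a JDK dynamic proxy. A rough sketch, reusing the hypothetical PermissionChecker seam from above and a hypothetical ClaimService interface in the usage note:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Wraps an existing, interface-based legacy component so that every call is
// authorized before it reaches the original code. PermissionChecker is the
// hypothetical seam sketched above; subject resolution is simplified.
public final class AuthorizingWrapper {

    public static <T> T wrap(final Class<T> iface, final T target,
                             final PermissionChecker checker, final String subject) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                if (!checker.isPermitted(subject, method.getName(), iface.getSimpleName())) {
                    throw new SecurityException("Not authorized: " + method.getName());
                }
                return method.invoke(target, args); // pass through to the legacy code
            }
        };
        return iface.cast(Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler));
    }
}
```

For example, ClaimService secured = AuthorizingWrapper.wrap(ClaimService.class, legacyService, checker, currentUser); lets the rest of the application keep calling the same interface while the decisions are externalized.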

JM: Within my enterprise application, I may have built up a “profile” of the user that contains information I would have retrieved post authentication from a directory service. What is the best practice in using this information to make authorization decisions?

GG: The design issue you are raising is whether the PEP should do attribute lookups or if we should rely on the PDP to perform this function. Generally speaking, it is more efficient for the PDP to look up attributes. Mostly this is because the PDP determines what policies will be evaluated and is able to fetch only the additional attributes it needs for policy evaluation. The PEP is not aware of what policies are going to be evaluated, and therefore may waste processing cycles retrieving attributes that will not be used. That extra processing time could be substantial when considering network time for the retrieval, parsing the response, and converting data to XACML attributes.

However, in your case it appears that the application is collecting attribute data for the profile in its normal course of operation. It seems these attributes can be forwarded to the PDP in the access request without compromising response-time performance. There may be other cases where the attributes are in close proximity to the application and it is better for the PEP to do the lookup.

Each scenario and use case should be analyzed, but our starting position would be to have the PEP include attributes it has already collected and to let the PDP look up the rest through its PIP interface. Attribute retrieval is really an externality for the application and should be left to the authorization service. It is also important to consider what happens when policies change. If too much attribute handling is done by the application, it may require additional code changes to accommodate policy changes. If the developer relies on the authorization service to deal with attribute management, then he/she gets the additional benefit of fewer (if any) code changes when the access policies must be adjusted.
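
A sketch of what that division of labor can look like at the PEP, assuming a hypothetical profile object and PDP client; the attribute names are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a PEP call that forwards attributes the application already holds
// in its post-authentication user profile, and leaves everything else to the
// PDP's PIP lookups. UserProfile and AttributePdpClient are hypothetical
// shapes, not a specific vendor API.
public class ClaimApprovalPep {

    public interface UserProfile {
        String getUserId();
        String getDepartment();
        String getRole();
    }

    public interface AttributePdpClient {
        boolean decide(Map<String, String> subjectAttributes, String action, String resource);
    }

    private final AttributePdpClient pdp;

    public ClaimApprovalPep(AttributePdpClient pdp) {
        this.pdp = pdp;
    }

    public boolean canApprove(UserProfile profile, String claimId) {
        Map<String, String> subjectAttrs = new HashMap<String, String>();
        // Already collected during normal operation -- cheap to forward.
        subjectAttrs.put("subject-id", profile.getUserId());
        subjectAttrs.put("department", profile.getDepartment());
        subjectAttrs.put("role", profile.getRole());

        // Attributes we do NOT send (say, the claim's risk rating) are fetched by
        // the PDP through its PIP interface only if the evaluated policies need them.
        return pdp.decide(subjectAttrs, "approve", "claim:" + claimId);
    }
}
```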

JM: Another form of reuse that many enterprise applications should consider, but are not currently implementing, is the notion of supporting multiple tenants. Today, an enterprise may take an application and deploy it redundantly instead of keeping a single instance and allowing multiple tenants to live within it. If I wanted to show development leadership in this regard, how can entitlements help?

GG: Applications have multiple layers or integration points where you must consider authorization for a multi-tenant configuration – this also applies to single-tenant applications. As you described earlier, access policies need to be applied at the presentation and web services or API layers. Beyond this, you have the data layer, typically a database, to consider. It is likely that enterprises deploy multiple instances of an application and its database because they cannot adequately filter data per tenant with current technologies or approaches. With an XACML entitlements system, you can enforce row, column and field level access controls – providing a consistent enforcement of entitlements from presentation to web service to the database. Axiomatics builds specific database integrations (such as Oracle, Microsoft and others), but customers can also use the API to integrate with their preferred SQL coding mechanisms. We think this is a less costly AND more secure solution than what can be purchased from Oracle, for example.
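
As a simplified illustration of row-level filtering at the data tier, the sketch below constrains every query to the tenant the caller is entitled to see. In a real XACML deployment the predicate would be derived from policy (for example through a database integration product) rather than the hypothetical TenantResolver shown here; the table and column names are made up for the insurance example.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Illustrative data-tier enforcement for a multi-tenant table: every query is
// constrained to the tenant the caller may see. TenantResolver is a
// hypothetical helper standing in for a policy-driven filter.
public class PolicyRepository {

    public interface TenantResolver {
        String tenantFor(String userId);
    }

    private final Connection connection;
    private final TenantResolver tenants;

    public PolicyRepository(Connection connection, TenantResolver tenants) {
        this.connection = connection;
        this.tenants = tenants;
    }

    public List<String> findPolicyNumbers(String userId) throws SQLException {
        String tenantId = tenants.tenantFor(userId); // which tenant may this user see?
        String sql = "SELECT policy_number FROM policies WHERE tenant_id = ?";
        List<String> results = new ArrayList<String>();
        PreparedStatement ps = connection.prepareStatement(sql);
        try {
            ps.setString(1, tenantId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                results.add(rs.getString("policy_number"));
            }
        } finally {
            ps.close();
        }
        return results;
    }
}
```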

With the approach just described, enterprises can get some economies of scale by deploying fewer application instances – I know there are reports out there about idle CPU time in data centers. Hopefully this also reduces the operational burden by managing fewer instances, but the operations center has to know more detail about which user communities or customer groups each application is supporting.

JM: Our corporation has been breached on more than a few occasions by wily hackers. Every time this happens, the security forensic jamboree blows their trumpets really loudly, asking for assistance in determining what happened. They attempt to reactively walk through log files. To me, this feels like a ceremonial failure. Can entitlements management make those information security people disappear so that I can focus on developing code that provides business value without listening to their forensic whining?

GG: Audit logs of what HAS happened will always be important when attempting to analyze a breach, incident or even for extreme troubleshooting. I think it can be helpful to investigators if there are fewer access logs to examine – here a central authorization service can provide a lot of benefit. A central authorization system that serves multiple applications gives you a single audit stream and single audit file format. It also relieves developers from at least some of the burdens of security logging – although there may be requirements to log additional context that the authorization system is not aware of.

There is also a proactive side of this coin: what CAN users access in an application. It seems that, as an industry, we've been struggling to definitively answer auditor questions such as, "Who can update accounting data in the general ledger system?" or "Who can approve internal equity trades when the firm's accumulated risk position reaches a certain threshold?" First, there is a fundamental failure in application design when business owners, auditors and security officers alike cannot easily answer these questions. Why is it still acceptable to build and buy applications that actually increase the operational risk for an organization? Second, many identity management technologies have only served to mask the problem and, ultimately, enable it to continue. For example, user provisioning systems were initially thought to be capable of managing access and entitlements for business applications. It turns out that they are relatively good at creating user accounts, but have limited visibility into application entitlements – those are managed by local admin teams. Access governance tools have a better view of entitlements, but it remains difficult to get a complete view when authorization logic is embedded in the application code.

With XACML policies implemented, auditors can test specific access scenarios to confirm enterprise objectives are being met. A policy language is an infinitely richer model for expressing access control than ACLs, group lists, or roles. Finally, you can specifically answer those auditor questions of who can access or update applications, transactions, or data.



Monday, August 20, 2012

 

XACML and the SDLC (Architecture and Design): Part One

Last year, Gerry Gebel, former Gartner analyst and now Americas lead for Axiomatics, and I held several discussions (Part 1, Part 2, Part 3) on using entitlements management within the insurance vertical. Now that we are in a new year, we have decided to revisit entitlements management from the perspective of the software development lifecycle.

JM: Historically speaking, a majority of enterprise applications were built without regard to modern approaches to either identity or entitlements management. At the same time, there is no published guidance from either the information security community or industry analysts on how to avoid repeating past sins. So, let's dive into some of the challenges a security architecture team should consider when providing guidance to developers on building applications securely. Are you game?

GG: Definitely! I think it remains an issue that applications are still being built without a modern approach to identity or entitlements – we see many cases where developers make their own determinations on how to best handle these tasks. Security architects and enterprise architects have long professed the desire to externalize security and identity from applications, but this guidance has an uneven track record of success.

JM: The average enterprise is not short of places to store identity. One common place where identity is stored is within Active Directory. However, infrastructure teams generally don’t allow for extending Active Directory for application purposes. So, should architects champion having a separate identity store for enterprise applications or somehow find a way to at least centralize application identity?

GG: Attribute management and governance is a key element to an ABAC (attribute based access control) approach. You might expect that one source of identity data is ideal, but that is not the reality of most deployments. Identity and other attribute data is distributed between AD, enterprise directories, HR, databases, CRM systems, supply chain systems, etc. The important thing is to have a process for policy modeling that is aware of and accommodates the source of attributes that are used in decision making.
For example, some attributes are derived from the session and application context, captured by the policy enforcement point (PEP) code and sent to the policy decision point (PDP) with the access request. The PDP can look up additional attributes through a policy information point (PIP) interface. The PIP is configured to connect with authoritative sources of information, which could be additional information about the user, resource or environment.
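
To ground the PIP idea, here is a rough sketch of what an attribute-connector interface might look like, with an HR-backed source for a department attribute. The interface shape and the attribute URN are illustrative assumptions, not a particular product's SPI.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Rough shape of a policy information point (PIP) connector: when the PDP
// needs an attribute that was not supplied in the request, it asks whichever
// connector is configured as authoritative for that attribute.
interface AttributeConnector {
    /** Attribute identifiers this source is authoritative for. */
    List<String> supportedAttributes();

    /** Resolve an attribute value (or values) for a given subject or resource id. */
    List<String> lookup(String attributeId, String entityId);
}

// Example: an HR-backed connector for a "department" attribute.
class HrDepartmentConnector implements AttributeConnector {
    private final Map<String, String> hrRecords; // stand-in for a real HR query

    HrDepartmentConnector(Map<String, String> hrRecords) {
        this.hrRecords = hrRecords;
    }

    public List<String> supportedAttributes() {
        return Collections.singletonList("urn:example:attributes:department");
    }

    public List<String> lookup(String attributeId, String entityId) {
        String dept = hrRecords.get(entityId);
        return dept == null
                ? Collections.<String>emptyList()
                : Collections.singletonList(dept);
    }
}
```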

JM: While I haven't run across an enterprise that has gotten a handle on identity, I can also say that many security architecture professionals haven't figured out ways to stitch together identity on the fly either. If we are going to leave identity distributed, what should we consider?

GG: I am a proponent of a distributed model as the starting point for this issue. That is, identity data should be stored and managed in close proximity to its authoritative source. In a distributed approach such as this, data accuracy should be better than if it is synchronized into a central source. Others will argue for data synchronization, and it is important when performance requirements call for a local copy of data. Therefore, performance, latency and data volatility are all issues to consider.

JM: What if an enterprise application currently assumes that authentication occurs by taking a user-provided token and comparing it to something stored within the application's database? Many shops deploy web access management (WAM) technologies such as Yale CAS, CA SiteMinder, etc., where they centralize authentication and pass around session cookies, but may not know, from an identity perspective, why this is not a complete solution.

GG: A few things come to mind here. First, a WAM session token is proprietary and therefore has a number of limitations in the areas of interoperability, support across multiple platforms, and so on.

Second, there is the issue of separation of concerns. From an architectural perspective, I strongly believe in having an approach that treats authentication separate from authorization concerns. One of the main benefits is the ability to adjust your authentication scheme to meet the rapidly changing threats that we see emerging on a daily or weekly basis. If authentication is tightly coupled with another identity component, then an organization is severely limiting its ability to cope with security threats.

Finally, authentication should be performed at the identity domain that is most familiar with the user. Said another way, each application does not and should not store a credential for users. Federation standards permit the user to authenticate at their home domain and present a standardized token to applications they may subsequently access.

JM: Have you ever been to a website where they ask you to enter your credentials but don't provide any cues as to what form the credential takes? For example, is it a user ID or an email address? A person may have multiple unique identifiers. Is it possible to use entitlements management as a centralized authenticator for an enterprise application in this scenario?

GG: My initial thought is “no” based on my comments regarding separating authN and authZ above. There are also security reasons for not giving the user a hint about the credential – to reduce the attack surface for someone trying to compromise the site.
However, there may be cases where a web site wishes to permit the use of multiple unique identifiers for authentication. Once you get to the authorization step, will you still have all the necessary user attributes available? Do you need to map all the identifiers to the attribute stores? You can end up making the authorization more complex than it needs to be.

JM: If you have ever witnessed how enterprise applications are developed, they usually start out with the notion of two roles, where the first role is a user and the second is the administrator. The user can do a few things and the administrator can do anything. Surely, we need something finer-grained than this if we want to improve the security of enterprise applications. What guidance could you provide in terms of modeling roles?

GG: There are different levels of roles that should be defined for any given application:
First is the security administrator role – it deals with managing entitlements and access policies and assigning them to users; it should not have access to the data, transactions or resources within the application. Second is the system administrator role, whose functionality should be constrained to managing the application, such as configuring the system, starting/stopping the application, defining additional access roles (see below) or other operational functions that are not associated with the business application. This is a vast departure from the super-user model, where there is a root account with complete access to everything on a system, which ends up being a security and audit nightmare.

Third, you can define a user role that permits an individual to log in to the application but with very limited capabilities. Here is where ABAC/XACML comes in to give you the granularity required. Access rules can define what functions a user role can perform as well as what data they can perform those functions on. With this kind of dynamic capability, you can enforce rules such as, "Managers can view payroll data for employees in their department."
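
To see why ABAC gives you that granularity, that rule boils down to a comparison of subject and resource attributes. The sketch below writes the comparison out in plain Java purely for illustration; in practice the condition would be expressed in XACML policy and evaluated by the PDP, not hard-coded in the application.

```java
// Purely illustrative: the attribute comparison behind the rule
// "Managers can view payroll data for employees in their department".
public final class PayrollViewRule {

    public interface Subject {
        String getRole();
        String getDepartment();
    }

    public interface PayrollRecord {
        String getEmployeeDepartment();
    }

    public static boolean permitsView(Subject subject, PayrollRecord record) {
        boolean isManager = "manager".equals(subject.getRole());
        boolean sameDepartment =
                subject.getDepartment().equals(record.getEmployeeDepartment());
        return isManager && sameDepartment;
    }
}
```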

JM: I had the opportunity in my career to be the lead architect for many once-popular and now defunct Internet startups during the dot-com era. At no time do I remember anyone ever inquiring about a standard for what a resource naming convention should look like. Even today, many enterprise applications have no discernible standards as to what a URL should look like. Now that we have portals and web services, this challenge is even more elusive. I know that web access management technologies use introspection techniques and therefore are suboptimal in this regard. Does entitlements management provide a potential solution and, if so, what constructs should we consider in designing new enterprise applications?

GG: The XACML policy language includes a namespace and naming convention for attributes, policies, etc. This helps to organize the system and also to avoid conflicts in the use of metadata. It is also possible to incorporate semantic web approaches or ontologies to manage large and complex environments – we are seeing some customers interested in exploring these capabilities.

JM: I have heard Gunnar Peterson use an analogy in a testing context that makes me smile. He once stated that testing through the UI is like attempting to inspect the plumbing in your basement by peering through your showerhead. This seems to hint that many applications think of security only through the user interface. Does entitlements management provide the ability to define a security model that is cohesive and deals with all layers of an enterprise application?

GG: Absolutely, this is one of the strengths of the XACML architecture. You can define all the access rules that an XACML policy server will enforce – and install policy enforcement points (PEP) at the necessary layers of an application. These are typically installed at the presentation, application and data tiers or layers. Such an approach is important because you have a different session context at each layer and may have different security concerns to address, but the organization needs to ensure that a set of access rules is consistently enforced throughout the layers of the application. Further, individual services or APIs can be secured as they are used on their own or in mash-up scenarios.

You get the additional benefit of a consolidated access log for all layers of the application. All access successes and failures are available for reporting, investigations or forensic purposes.

JM: Some enterprises are moving away from thinking in terms of objects towards thinking in terms of business processes. How should a security architect think about applying an entitlements-based approach to BPM?

GG: I recall writing some years ago that BPM tools could facilitate the creation of application roles – it’s very interesting that you now ask me about BPM and entitlements! But it’s a logical question. BPM tools help you map out and visualize the application, have the notion of a namespace, resources, and so on. At least a couple of places where entitlements and authorization rules can be derived are within BPM activities as well as when you have an interaction with an activity in another swim lane.

JM: Enterprises are also developing mobile applications that allow their consumers to access services, pay bills and conduct business transactions. It goes without saying that a mobile application should have the same security model or at least adhere to the same security principles as an internally hosted web application. What are some of the entitlements considerations an architect should think about?

GG: There are several considerations that come to mind, but let’s address just a few of them here.



Wednesday, August 15, 2012

 

Why Risk Management is an Infosec Worst Practice!

It is very easy to find information security professionals who can wax poetic about multiple risk management methods. Unfortunately, when these methods are measured rigorously, they don't appear to work. Yes, it is important to admit that many risk management approaches result in neither a measurable reduction in risk nor an improvement in decisions.


Many risk management practices also fail to account for known sources of error in the analysis of risk or, worse yet, add error of their own. Whenever an ISACA-certified member comes by to certify that you adhere to clean desk policies, can they provide strong data to support their stance, or is it more anecdotal in nature? Which is a better risk management technique: having a clean desk or ensuring that all of your number two pencils are sharpened?

Let's face it, most of information security's approach to risk management is a big fat joke. You would think that information security professionals would be keenly aware and respectful of failure, yet they too tend not to know when their own risk management system has failed (except in scenarios where there is no longer any business to protect).

Is risk management nothing more than a method that can be fooled by a kind of "placebo effect," where best practices are obtained via groupthink? What are the performance measures for the risk management approach used in your organization? I suspect even your Chief Risk Officer will be blissfully ignorant of how to answer this question.

The widespread inability to make subtle but important differentiations between methods that work and methods that don't means that ineffectual methods may spread like the plague. Interestingly enough, a few process weenies will latch on to the plague, label it a "best practice," and help contaminate others.

I suspect there is a strong correlation between how AIDS spreads through society at large and how enterprises continue to adopt silly practices. Both have long incubation periods, are passed from one party to another, and show no early indicators of ill effects until it's too late.

If you want to practice genuine risk management, be skeptical of sentiment, snake oil salesmen and auditors with ISACA certifications...




 

Will Infosys be successful in blowing smoke up Gartner and Forrester's bleep...

There are way too many industry analysts covering the game of outsourcing who don't understand how it works. As Infosys shifts to what it labels its 3.0 model, Gartner and Forrester will take the bait of the melodic words of a snake oil salesman, especially when combined with factual numbers that do not necessarily represent the facts.

Infosys has declared to the marketplace that, going forward, 1/3 of its future revenues shall be derived from strategic business consulting. This is an ambitious target that is ripe for exploiting industry analysts who want to believe. What happens if you simply do a little bit of slick accounting trickery and label anyone who is a business analyst as an individual contributing to the revenues of strategic business consulting? After all, business analysts, even though they are usually focused only on project-level concerns, do interact with business customers in their daily activities. Likewise, many projects are strategic. Combine these two truths and you can successfully con industry analysts into believing that you have made the transition into strategic business consulting.

We understand how easy it is to con industry analysts, but what about all those CIOs who listen to their sage wisdom? Will they too fall for the trickery of rapid growth in this space while being redirected away from understanding the details of the past engagements that contributed to such profound revenue growth? Don't get it twisted, I have the utmost respect for Infosys and their marketing prowess. I simply question whether analyst firms will look and see that the revenue growth is not all it is cracked up to be, especially since they believe that strategic business consulting carries higher margins; they probably won't drill into the numbers.




Friday, August 10, 2012

 

A conversation between an Enterprise Architect and Quality Assurance Team

Don't ever make the mistake of asking a basic question like what a test case is...

Don't you hate it when you talk to a team and get different answers to the same question? Some believe that a test case is a set of inputs with expected results. This aligns with the notion that a test case is a tangible artifact that can be used by project managers to check the box that something has been delivered. Others believe that a test case is an instance of a test idea and that the concern for artifacts is a second-class concern. I find the distinctions intriguing because my own thinking tends to align more with the latter than the former.

The first model is useful when you have offshore resources and you have the need to count and track things. This aligns well with outsourcing firms that throw lots of bodies at QA and essentially leverage them like they are script monkeys. Scope of testing is well-known and can be accurately estimated to completion. Yet, if you think about it, executing test cases doesn't imply that the application has been thoroughly tested.

The latter is more aligned with a philosophy I hold: the need to shift from "activities" toward "outcomes". The former carries a built-in implication that you are going to ignore the risk of testing something else or finding information outside the scope of the test case. Depending on how this manifests, it could result in finding more bugs and a QA resource being chastised for doing so. So much for the notion of quality assurance.


