Reference Models
In this section I discuss frameworks - what a framework is and how having one can benefit the organization.
A framework can help articulate appropriate IT governance for the organization - who makes what kinds of decisions with regard to information technology. Most frameworks offer some specialization in governance (e.g., ITIL offers guidance on decisions surrounding operations and service management). Many authors argue that "organizations should never simply choose a single framework, but instead try to use what they can from each"R. This implies that no single framework has all the answers - "The issue broadly speaking is that Service Management needs to bridge with the other Core process areas and also with the support and enabling process areas"R.
In this section I look at three kinds of frameworks:
- Service Management - models describing how the pieces of an organization can "fit together" to deliver better IT services;
- Project Management - process descriptions of how to deliver new services - applications, processes, services;
- Improvement Models - descriptions of how best to deliver new services, etc., given the current state of the organization. These models outline sequenced representations of the organizational capabilities necessary to attain the described framework at various maturity levelsN.
In "Multiple View Models" I suggest that the models can be seen from more than a single perspective - following the principles enunciated by Zachman. I provide a multiple view of IT Service Management by building on the view underpinning the ITIL Capacity Management book (a view by the business, by the services offered or by the resources utilized).
Lastly, in this section I provide a comparison of the three primary reference models - ITIL, CobIT and CMMI. The reader selects which of these three models to employ as the base reference and I provide an assessment of how that reference model maps to the others (and, where useful, to additional frameworks).
What's in this Section
IT Service Management Models
A framework is a basic outline - or frame - which can be used to describe systems, activities, etc. for heuristic purposes.
"The purpose of the framework is to provide a basic structure which supports the organization, access, integration, interpretation, development, management, and changing of a set of architectural representations of the organization's information systems. Such objects or descriptions of architectural representations are usually referred to as Artifacts.
The framework, then, can contain global plans as well as technical details, lists and charts as well as natural language statements. Any appropriate approach, standard, role, method, technique, or tool may be placed in it. In fact, the framework can be viewed as a tool to organize any form of meta-data for the enterprise."
Zachman Framework, Information Systems Architecture - ISA
|
The outline represents a particular viewpoint of how the systems under study are (AS-IS), or can be (TO-BE) organized. The basic idea is that such systems can be thought of as operating or behaving as a number of interrelated processes. To study and understand systems, one constructs ’process models’ according to particular frameworks and using particular modeling techniques. A framework is valuable because it provides an organizing structure for the process model as well as a standard syntax and lexicon.
CIOs and the IT managers that report to them have all been in need of a clear picture that depicts the IT processes required to deliver quality IT services in support of their emerging e-services. Without a clear picture, IT organizations will continue to struggle as they try to understand and determine:
- The current state of IT with regard to process (the "as is")
- The desired future state of IT (the "to be")
- The gaps between the current and future states of IT
- The steps needed to bridge those gaps
Therefore, the need for a concise picture - one that reflects an enterprise service management capability - is very real for most IT organizations and critical to their success.
The HP IT Service Management Reference Model, White Paper, Version 2, January 2000
|
By implementing a well-known, proven framework such as ITIL your company will undoubtedly experience a number of key benefits.
Firstly, why should you reinvent the wheel? In today's highly competitive IT and business industries, time is a precious commodity. Why spend the time and effort to develop a framework based on limited experience when internationally developed standards such as ITIL already exist?
Model frameworks also provide an excellent structure that companies can follow. Essentially, employees can work towards the same goals, guided and supported by a definite structure.
Indeed, standards have been developed over time and accessed by hundreds of people and organizations all over the world. The cumulative years of experience reflected in, for example, the ITIL model cannot be matched by a single organization's efforts.
Lastly, standards enable knowledge sharing. By following them, people can share ideas between organizations, web sites, magazines, books and so forth. Proponents of company-specific ad hoc approaches do not have this luxury.
ITSMWatch July 12, 2004, By Wilhelm Hamman
|
Information Technology Service Management (ITSM) is a generic term describing the basic management and operation of IT services on behalf of an enterprise. While businesses differ in the kinds of goods and services they deliver, there is a high degree of similarity in the types of IT processes, tools and approaches they employ. This conformity provides impetus for the search for "best practices" in IT management. By using the lessons learned by other organizations, an enterprise can avoid many of the mistakes and misdirections inherent in gaining the needed knowledge and experience.
Basic Management Functions
Most models in use within IT are derived from basic management functions widely accepted as encompassing a well-run organization. The traditional formula for effective systems management of any system, process, or activity consists of five sequential and iterative phases:
- Setting Objectives
- Planning
- Execution
- Measurement
- Control
The ITIL disciplines can be organized according to this outline.
Phase | Discipline | Description
Setting Objectives | Service Level Management | Identify, negotiate and agree the services to be provided, their quality measures and the IT performance targets committed to users.
Planning | Application & System Design | Plan and design the IT infrastructure to meet the service levels committed to users.
Planning | Capacity Planning | Plan for systems growth requirements.
Planning | Configuration Management | Create and maintain systems configuration information.
Planning | Asset Management | Create and maintain the asset inventory; track and monitor the use of assets.
Execution | Incident Management | Detect, record and resolve problems.
Execution | Backup and Recovery | Design alternative systems and resources to immediately restore IT services when problems occur.
Measurement | Performance Management | Monitor system performance data; tune systems for optimal achievement of the service levels committed to users.
Control | Change Management | Control all changes to the system to ensure that change does not degrade system performance.
Control | Security Management | Control and administer access to the system to minimize threats to system integrity.
Control | Availability Management | Monitor and control system resources and IT operations to maintain system availability.
Control | Problem Management | Monitor and control Known Errors and proactively remove them from the environment.
Control | Financial Management | Monitor and control IT expenditures.
Source: High Availability: Design, Techniques and Processes, Floyd Piedad, Michael Hawkins, Enterprise Computing Series, Prentice-Hall, 2001
|
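The grouping of ITIL disciplines by management phase can be expressed as a simple lookup structure. A hypothetical sketch in Python (the names mirror the table above; the function is illustrative, not part of any ITIL tooling):

```python
# Illustrative sketch: ITIL disciplines grouped by the five
# traditional management phases described above.
ITIL_PHASES = {
    "Setting Objectives": ["Service Level Management"],
    "Planning": [
        "Application & System Design",
        "Capacity Planning",
        "Configuration Management",
        "Asset Management",
    ],
    "Execution": ["Incident Management", "Backup and Recovery"],
    "Measurement": ["Performance Management"],
    "Control": [
        "Change Management",
        "Security Management",
        "Availability Management",
        "Problem Management",
        "Financial Management",
    ],
}

def phase_of(discipline: str) -> str:
    """Return the management phase a given ITIL discipline belongs to."""
    for phase, disciplines in ITIL_PHASES.items():
        if discipline in disciplines:
            return phase
    raise KeyError(discipline)

print(phase_of("Problem Management"))  # -> Control
```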
Utility Model of Computing
Many authors and technology futurists contend that IT is inexorably moving in the direction of securing services, and that in this milieu a utility model, similar to the provision of electricity, will likewise apply to the provision of IT services. From the perspective of delivery and consumption, utilities demonstrate certain characteristics:
- Relationships are defined by service level agreements
- The delivery mechanism is hidden from the consumer
- They are provisioned dynamically
- They are charged on a usage basis
- They are typically delivered through standard interfaces
Utility or cloud computing is all about the ability of providers to reduce parts and labour costs, and to do this they must:
- Work the economies of scale that the Internet and associated Web technology advances have enabled via shared service provision
- Support SLAs that operate on usage metrics delivered at a lower level of granularity than was possible before the introduction of IP networks and "net native" software
- And then, introduce transparent billing and pricing to enable clients to more efficiently manage consumption of the service being delivered
In general, utility customers do not own any of the physical infrastructure. They avoid capital expenditure by renting usage from a third-party provider, consuming resources as a service and paying only for the resources they use. Sharing "perishable and intangible" computing power among multiple tenants improves utilization rates, as servers are not unnecessarily left idle (which can reduce costs significantly while increasing the speed of application development). A side-effect of this approach is that overall computer usage rises dramatically, as customers do not have to engineer for peak loads. In addition, "increased high-speed bandwidth" makes it possible to receive the same response times from centralized infrastructure at other sitesR.
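The pay-per-use characteristic can be illustrated with a toy metered-billing calculation. The rates and metric names below are invented for illustration and do not reflect any real provider's pricing:

```python
# Hypothetical usage-based (utility) billing: the customer pays only
# for metered consumption, with no capital expenditure.
RATES = {
    "cpu_hours": 0.05,       # dollars per CPU-hour (assumed rate)
    "gb_stored": 0.02,       # dollars per GB-month (assumed rate)
    "gb_transferred": 0.09,  # dollars per GB transferred (assumed rate)
}

def monthly_bill(usage: dict) -> float:
    """Compute a fine-grained, pay-per-use charge from metered usage."""
    return round(sum(RATES[metric] * amount for metric, amount in usage.items()), 2)

# A tenant that used 400 CPU-hours, stored 50 GB and transferred 10 GB:
print(monthly_bill({"cpu_hours": 400, "gb_stored": 50, "gb_transferred": 10}))  # -> 21.9
```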
A utility model will usually present some combination of five primary functions, based upon a parallel with electric lighting:
Connection: Turning the lights on
- Service Requests - password resets, equipment MACs
- Project Management - support for IT projects
- Release Management - product test and build services and release to production
Availability: Keeping the lights on
- Technology Toolset provisioning for new employees
- Operations - Keeping systems in peak performance
- Application maintenance and database management
- Security Services - Ensuring integrity of information and systems
Restoration: Turning the lights back on when they go off
- Incident Management - restoring service when faults occur
- Change Management - introducing changes
Capacity: lighting - bright, anywhere, anytime
- Capacity Planning - Ensuring enough capacity to meet current and future requirements including regular workstation and server refresh, participation in bandwidth planning
- Usage Monitoring and Forecasting
- Mobile Services - extending capacity by increasing the reach of service or usage
Efficiency: Lowering the costs of lighting
- Configuration Management - maintaining inventory of assets and their interconnection
- Architectural planning - improving the overall design of systems
WikipediaR offers the following characteristics of Cloud computing:
- Agility improves with users able to rapidly and inexpensively re-provision technological infrastructure resources.
- Cost is claimed to be greatly reduced and capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house).
- Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
- Multi-tenancy enables sharing of resources and costs across a large pool of users thus allowing for:
- Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
- Peak-load capacity increases (users need not engineer for highest possible load-levels)
- Utilization and efficiency improvements for systems that are often only 10–20% utilized.
- Reliability improves through the use of multiple redundant sites, which makes cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
- Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads. Performance is monitored, and consistent and loosely-coupled architectures are constructed using web services as the system interface.
- Security typically improves due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible. Furthermore, the complexity of security is greatly increased when data is distributed over a wider area and/or a greater number of devices.
- Sustainability comes about through improved resource utilization, more efficient systems, and carbon neutrality. Nonetheless, computers and associated infrastructure are major consumers of energy.
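The dynamic, on-demand provisioning described above can be sketched as a simple scaling rule: capacity follows measured utilization instead of being engineered for peak load. The thresholds here are illustrative assumptions, not values from any real autoscaler:

```python
# Hypothetical sketch of dynamic ("on-demand") provisioning: the pool
# grows when utilization is high and shrinks when servers sit idle.
def target_servers(current_servers: int, utilization: float,
                   low: float = 0.3, high: float = 0.7) -> int:
    """Scale the pool so utilization drifts back between low and high."""
    if utilization > high:                           # overloaded: add capacity
        return current_servers + 1
    if utilization < low and current_servers > 1:    # idle: release capacity
        return current_servers - 1
    return current_servers

print(target_servers(4, 0.85))  # under heavy load -> 5
print(target_servers(4, 0.15))  # mostly idle      -> 3
```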
The ITIL Reference Framework
ITIL - the Information Technology Infrastructure Library is a set of best practices for Service Management originally advanced by the CCTA (now absorbed into the Office of Government Commerce - OGC) in Great Britain.
The concept of a formal approach to the management of operational IT services was developed during the mid-1980s in response to growing concerns about the value for money delivered by IT Departments.
At that time, there had been considerable research into software development, resulting in a number of development methodologies, some proprietary and others in the public domain. One common deficiency in the methodologies that were available was the lack of any detailed guidance on the operational stage of an IT service (sometimes called the support & maintenance phase).
Michael Davies, IT Service Management: An Overview, Issue 3, 11 September 2002, p.4
|
The initiation of the concept was fraught with problems concerning the order in which the publications would be made available and the absence of overall integration amongst, and a framework for, the publications. The impact of Service Level Management being one of the first areas released is poignantly presented in the following commentary.
Why then did OGC place so much emphasis on the SLA? They didn’t. The problem was that ITIL was a huge project costing in excess of five million pounds sterling, involving only a small group of people working to pull together best practices from many sources. The original intention was to publish books in related sets. A lack of clear overall vision and underpinning process models led to volumes being published as and when they completed the quality assurance process. It took four years for all of the original books to be published, by which time the inconsistencies were apparent and a market established that would give rise to ‘the cult of the SLA’.
Back to the Future, Pink Elephant - June 2003
|
When the CCTA published the library, other organizations quickly saw its benefits and began to adopt facets of the approach.
ITIL has continued to evolve and mature as the CCTA and, more recently, the UK Office of Government Commerce (OGC) have striven to maintain its continued relevance through research and periodic updating of the framework. The OGC once described ITIL as the world's "de facto standard for service management" because of its widespread use — especially in European countries.
Finally, the ITIL books were published in compendiums to emphasize the importance of taking a holistic look at an area rather than focusing on a single process. The new library has several books: Service Support, Service Delivery, Planning To Implement Service Management, Applications Management, Security, ICT Infrastructure Management and last, but definitely not least, The Business Perspective. These titles represent subjects that were included in the original library. Published in this manner, however, they should get the recognition they deserve. After all, service management and ITIL are about more than just SLAs.
ITIL is an acronym - Information Technology Infrastructure Library.
Infrastructure denotes a logical means of dividing the overall IT environment into components of related functionality, so ITIL is divided into logical segments of the environment. A library is a depository built to contain books and other materials for reading and study, so the division describes procedures and guidelines for specialized segments of the IT environment. ITIL thus consists of a series of documents that are used to help implement a framework for IT Service Management (ITSM). This framework defines how Service Management is applied within specific organizations.
A reference model for structuring the ITIL publication set is presented in the publication "Planning to Implement ITIL". The diagram is reproduced here. As an action framework this depiction is overly simplistic: while it acknowledges the central placement of service management and surrounds the approach with ancillary services, as an organizing concept its use is rudimentary. ITIL's best-practice approach is outlined in a series of seven books, includingR:
- Business Perspective - aims to familiarize management with the underlying components and architecture design of the Information and Communications Technology (ICT) infrastructure necessary to support their business processes, and to provide an understanding of Service Management standards and better practiceR.
- Planning to Implement Service Management - helps Service Providers plan their implementation of Service Management best practice while at the same time helping the business to talk on level terms with the Service Provider.
- Service Delivery - focuses on delivering IT services to IT customers through agreed-upon service levels
- Service Support - defines how to maintain the delivery of services by providing user support, managing changes and managing releases within the infrastructure
- ICT Infrastructure Management encompasses IT planning and architecture as well as day-to-day infrastructure operations
- Security Management - explains how to best manage defined levels of infrastructure security
- Application Management - outlines the entire application life cycle, from requirements to end of life
The following depiction is taken from the ITIL Business Perspective document. While this is the closest thing yet to an ITIL process description, it has two curious omissions - Configuration Management and Incident/Problem Management are missing from the description.
These omissions are resolved in the following two, more granular presentations. In the diagram below, the ITIL primary service management functions are grouped into the division encompassed by the middle box:
Service Support
- Service Desk - services associated with registering a service request with an agent (either live or an application interface)
- Incident Management - processes associated with the resolution of a reported fault, either by a person or through an automated alert, in the infrastructure. Since many of the procedures associated with initiating a Service Request are similar to processes encompassed by either Service Desk or Incident Management, ITIL does not distinguish these processes. However, many organizations do delineate separate processes.
- Change Management - processes associated with modifying the known and trusted state of the infrastructure in a controlled manner;
- Configuration Management - processes associated with recording, updating and using information describing the components of the infrastructure;
- Problem Management - processes associated with tracking the Known Errors existing in the infrastructure and attending to their removal;
- Release Management - processes associated with moving hardware and software components from development to production status.
This model highlights the central role of information describing the state of the infrastructure - the Configuration Management Database (CMDB). Each of the five management processes references this data source within its process description, and all the processes are driven by customers, clients and/or user groups. There is nothing mysterious about this. Indeed, these (and the service delivery processes) are derived from basic management functions.
|
|
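The CMDB's role as the shared data source can be sketched as a minimal data structure: configuration items plus the relationships between them, which the service support processes read and update. This is a hypothetical illustration; real CMDB schemas are far richer:

```python
# Minimal hypothetical sketch of a Configuration Management Database:
# configuration items (CIs) and the dependencies between them.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    ci_type: str                               # e.g. "server", "application"
    status: str = "trusted"                    # the known and trusted state
    depends_on: list = field(default_factory=list)

class CMDB:
    def __init__(self):
        self.items = {}

    def register(self, ci: ConfigurationItem):
        self.items[ci.name] = ci               # Configuration Management: record

    def impact_of(self, name: str):
        """Change/Problem Management: which CIs depend on this component?"""
        return [ci.name for ci in self.items.values() if name in ci.depends_on]

cmdb = CMDB()
cmdb.register(ConfigurationItem("db01", "server"))
cmdb.register(ConfigurationItem("payroll", "application", depends_on=["db01"]))
print(cmdb.impact_of("db01"))  # -> ['payroll']
```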
Service Delivery
- Availability Management - processes associated with keeping the infrastructure operational, including the speedy restoration of services in the event of failure;
- Capacity Management - processes associated with ensuring that users of IT resources, services and businesses have sufficient (but not costly excess) capacity to perform their roles;
- Service Continuity - processes designed to keep business operations going in the event of a severe and prolonged outage and procedures to restore business capacities in the event of failure
- Service Level Management - services designed to ensure that the level of service offering conforms to business directions.
- Financial Management - services designed to monitor and apportion the costs associated with maintaining the IT infrastructure.
|
|
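The arithmetic at the heart of Availability Management and Service Level Management is simple: the percentage of a period the service was operational, compared against an agreed target. A small illustrative sketch:

```python
# Hypothetical sketch: availability percentage reported against an
# agreed service level target.
def availability(total_minutes: int, downtime_minutes: int) -> float:
    """Percentage of the period the service was operational."""
    return round(100.0 * (total_minutes - downtime_minutes) / total_minutes, 3)

# A 30-day month (43,200 minutes) with 22 minutes of outage:
pct = availability(43_200, 22)
print(pct, "meets a 99.9% target:", pct >= 99.9)
```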
A recent article available at ISACA identified two principal concepts as underpinning ITIL thinkingR:
- Holistic service management — IT service managers:
- Assure the consideration of functional and non-functional requirements
- Ensure that services are appropriately tested before live operational use
- Assess the possible risks and impact on existing infrastructure caused by new or modified systems
- Define future service requirements
- Customer orientation — IT services are provided at a level of quality that allows permanent reliance on them. To
assure this quality, responsibility is assigned to individuals who:
- Consult the users and help them use the services in an optimal manner
- Collect and forward opinions and recommendations of users
- Resolve incidents
- Monitor the performance of the services delivered
- Manage change
BS15000 Standard in IT Service ManagementR
The BS15000 IT Service Management standard has two modules - BS15000-1:2002 and BS15000-2:2003. Part one is the specification for service management and defines what is required of an organization when delivering high quality, managed services to customers.
B.1.1. Management System
As a management system, BS 15000 not only specifies the requirements for service management processes, but it also specifies the requirements for management and implementation of service management capabilities within the context of business and customer requirements. Ownership and responsibility of the management system, and of each process, lies with senior-level management, who ensure that best practices in service management are adopted and sustained through the establishment of suitable policies, plans and objectives, provision of adequate resources, training and development, management of risks, and measurement, review, and control. For effective planning, operation and control of service management, BS 15000 requires documentation of service-management policies, plans, procedures, processes, agreements, and records [BSI 2002].
B.1.2. Planning and Implementation
For planning and implementing service management, the standard requires the Plan-Do-Check-Act (PDCA) methodology applied in a continuous loop:
- Plan service management (Plan)
- Implement service management and provide the services (Do)
- Monitor, measure and review (Check)
- Continuously improve (Act)
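The continuous PDCA loop can be sketched in code. This is a toy illustration of the control structure only (the lambdas and the "quality score" are invented for the example, not anything from the standard):

```python
# Hypothetical sketch of the Plan-Do-Check-Act cycle applied in a
# continuous loop: each pass plans, delivers, measures, then feeds
# the measured gap back into improvement.
def pdca_cycle(plan, do, check, act, state, iterations=3):
    for _ in range(iterations):
        target = plan(state)            # Plan service management
        achieved = do(state, target)    # Implement and provide the services
        gap = check(achieved, target)   # Monitor, measure and review
        state = act(state, gap)         # Continuous improvement
    return state

# Toy usage: each cycle closes half the gap to a target quality score of 100.
final = pdca_cycle(
    plan=lambda s: 100,
    do=lambda s, t: s,        # "delivery" here simply reports current state
    check=lambda a, t: t - a,
    act=lambda s, gap: s + gap // 2,
    state=60,
)
print(final)  # 60 -> 80 -> 90 -> 95
```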
While there is a focus on core service management processes, BS 15000 requires due diligence in planning and implementation using the PDCA methodology. This ensures ongoing control, greater efficiency, and opportunities for continuous improvement through coordinated integration and implementation of the service management processes [BSI 2002]. Organizations undergoing an audit are expected to show evidence of meeting these requirements for planning and implementing service management.
The standard also has requirements for planning and implementing new services and changes to existing services. Organizations are required to take into consideration cost as well as organizational, technical, and commercial impacts that could result from the delivery and management of new services or changes. Planning is required to cover roles and responsibilities, changes to existing services, contracts and agreements, necessary skills and training, processes, methods and tools, budgets, schedules, acceptance criteria, and expected outcomes from the new or changed services. A post-implementation review is required to compare and report actual outcomes against expectations.
B.1.3. Service Management Processes
Part 1 of the standard (BS 15000-1:2002) specifies the required standard that an organization’s service management processes should meet to manage and deliver IT services in conformance with the best practices of the BS 15000 series. BS 15000 requires management commitment in the form of process ownership.
For each service management process shown below, the standard specifies the objectives and controls that need to be implemented as part of an integrated approach to service management.
BS 15000 requires that the interfaces between processes are clearly defined, are well-understood, coordinated, integrated, and suitably documented [BSI 2002]. For many organizations process interfaces map onto organizational interfaces, a feature that can lead to the incorrect assumption that service management best practices require each process to be a separate organizational group. In reality the BS 15000 series makes no requirement for specific organizational form.
Proactive Variation on ITIL
Proactive, an Australian IT service management company, also distinguishes service delivery and service support sub-sets of ITIL. The Service Delivery functions are closely related to the annual planning cycle and ongoing review during the year, and therefore form a logical grouping. If high quality IT services are to be provided, then it is essential to know the criteria by which this quality will be judged. Service Level Management is of great value because the creation of service level agreements (SLAs) provides a mechanism for documenting customer requirements and defining performance targets. Once agreements have been negotiated, ongoing monitoring is needed to ensure that all parties to the agreements remain satisfied.
|
|
The Service Support elements are integral to service delivery. They are concerned with day-to-day operations, although each management area can also take a longer-term view when dealing with business-driven rather than IT-initiated considerations. The British Standards Institution (BSI) describes Service Support as providing a number of related types of process:
- The control processes of (Asset &) Configuration Management and Change Management
- The release process of Release Management
- The resolution processes of Incident Management and Problem Management
In addition, Service Support includes the Service Desk function, which in many organizations owns the Incident Management process.
|
|
Art of Service Framework
This framework is based upon a distinction between 'Customer' and 'User' so as to differentiate between those people (generally senior managers) who commission, pay for and own the IT Services (the Customers) and those people who use the services on a day-to-day basis (the Users).
The semantics are less important than the reason for differentiation. The primary point of contact for individuals using services is (or should be) a Service Desk (or in less sophisticated environments, a basic help desk). Therefore the user population is most at risk from an inadequate service support function.
Customers of service providers increasingly rely upon contracts to define the relationship of the service provider to the business (even in the case of in-house service provision) and use the contracts to formalize areas of performance that are frequently underpinned by Service Level Agreements (SLAs). The day-to-day impact of service provision (unless catastrophic), is largely ignored in favour of prearranged meetings to discuss deviations from contractual issues. Therefore the prime focus for customers is the Service Manager, who controls the Service Level Agreement and who is involved in contractual issues.
It is therefore important to distinguish the different, but related, needs of users and customers in the provision of services. Certainly, their goals may be at odds and need to be balanced; for example, users may demand high availability whereas customers look for value for money at different levels of availability. There are information flows that must be maintained and key process elements that must be defined for use by both parties; possibly the best example is configuration management. If configuration management is defined only from the perspective of the user, then the cost of introduction is not likely to be the principal issue (100% availability, predicated on extensive knowledge about all configuration items regardless of cost, is more likely!). On the other hand, if configuration management is designed solely from the perspective of the customer, then service availability will not be considered key, as the customer may not have the technical knowledge to understand the need to back up fragile service elements (which would require extensive knowledge about all configuration items), or may be unwilling to increase costs by doing so.
A Simple Model of IT Management
Reid Shay, in a recent publicationR, develops an IT management model based upon distinguishing business and IT "space". He then divides the IT space into "Tool" and "Management" categories.
SLM relationship
Day-to-day business functions comprise many individual business processes within the business space. These business processes make use of IT tools and, as such, need to be decomposed into distinct processes which will map to the appropriate tool sets used for their administration...
The reason for breaking down the operations into individual processes is to properly track the importance of the specific matching IT tools that manage or provide for each process.
A Simple Model of IT Management, p. 74
|
Processes and Tool Relationship
These relationships are many-to-many: most tools have the capability to handle multiple business processes. In order to understand the importance of an enabling tool, and thereby make an intelligent decision on service levels, its relationship with key business processes must be understood.
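The many-to-many mapping, and the way a tool's importance is derived from the processes that depend on it, can be sketched as follows. All names and weights are invented for illustration:

```python
# Hypothetical sketch of the many-to-many relationship between business
# processes and the IT tools that enable them; a tool's importance is
# derived from the business processes it supports.
TOOL_SUPPORTS = {                 # tool -> business processes it enables
    "email_gateway": ["order_entry", "customer_support"],
    "erp_system":    ["order_entry", "invoicing", "payroll"],
    "crm_suite":     ["customer_support"],
}
PROCESS_WEIGHT = {                # relative business importance (assumed)
    "order_entry": 10, "invoicing": 8, "payroll": 6, "customer_support": 7,
}

def tool_importance(tool: str) -> int:
    """Sum the weights of every business process the tool supports."""
    return sum(PROCESS_WEIGHT[p] for p in TOOL_SUPPORTS[tool])

print(tool_importance("erp_system"))  # 10 + 8 + 6 -> 24
```

A ranking of tools by this score is then one input to setting service levels for each tool.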
Instrumentation
In order to properly manage the toolsets, data on the health and status of the tools needs to be maintained, along with their potential impact on business operations from various kinds of service deterioration:
- Trouble Ticket: The help desk is the most common source for gathering business effects. A ticket is created to fulfill a request for service - either its initiation or its recovery.
- Agents and Probes: IT tool data is system, storage, application, or network based. Agents are software components that reside on the managed element and report data from within it. Probes are specific, stand-alone devices (hardware or software) that gather information about a target environment.
- Business Impact data: creates data on the impact of each tool on the affected business processes. The data can be gathered automatically or manually.
Management Data
Once the business, IT tool, and business impact datasets are combined, management data is created. It is this management data which provides the basis for the management of the overall IT infrastructure -
This combined management data is the basis used to manage the IT tool environment. A Simple Model of IT Management, p. 81
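As a rough illustration of how the instrumentation sources might be combined, the sketch below (all field names and values are invented) merges trouble-ticket, probe, and business-impact records into a single management-data view for one tool:

```python
# Invented sample data for the three instrumentation sources.
tickets = [
    {"tool": "payments-gateway", "issue": "timeouts", "opened": "09:14"},
]
probe_data = {
    "payments-gateway": {"status": "degraded", "latency_ms": 900},
    "batch-scheduler":  {"status": "ok", "latency_ms": 40},
}
business_impact = {
    "payments-gateway": ["order-entry", "billing"],
    "batch-scheduler":  ["billing"],
}

def management_data(tool):
    """Merge the three data sources into one management-data record."""
    return {
        "tool": tool,
        "health": probe_data.get(tool, {}),
        "open_tickets": [t for t in tickets if t["tool"] == tool],
        "affected_processes": business_impact.get(tool, []),
    }
```

The point of the merge is that no single source tells the whole story: the probe says the tool is degraded, the ticket confirms users feel it, and the impact data says which business processes are exposed.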
Management Information
Management data is refined to become management information. This information takes several forms:
- Event Information: the important information to track for an event is the time, source, object, and any available detailed description.
- Device Relationship: tells how things are connected. The logical connection among devices is often critical to solving problems.
- Prioritization Information: previously collected information on the impact of IT tools on business processes is combined with the relative importance of different business processes (including variations by time of day, week, or month, and by interaction) to create a priority for IT management action.
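The prioritization step can be illustrated with a toy scoring function. The importance rankings and the time-of-day multiplier below are invented assumptions for illustration, not part of Shay's model; a real scheme would come from the organization's own business-importance rankings:

```python
# Invented relative-importance weights for the business processes.
process_importance = {"order-entry": 5, "billing": 3, "customer-care": 2}

def priority(affected_processes, hour):
    """Combine tool impact (which processes are affected) with business
    importance and a time-of-day factor. Higher score = more urgent."""
    base = sum(process_importance.get(p, 1) for p in affected_processes)
    # Assumed rule: business hours (09:00-17:00) double the urgency.
    multiplier = 2 if 9 <= hour < 17 else 1
    return base * multiplier
```

Even this crude version captures the essential idea: the same tool failure warrants a different management response depending on which processes it touches and when it happens.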
Management Analysis
This is the step where all of the management information is consolidated and analyzed so that appropriate action can be taken. The two primary types of analysis are:
- Problem Analysis: The basic problem resolution process comprises:
- recognizing something is wrong
- identifying what is wrong
- isolating what is causing the problem
- determining what will fix the problem
- implementing the fix
- testing and verifying the result
- Change Analysis: managing changes is made up of a number of overlapping steps:
- plan
- design
- install
- configure
- test
- adjust for impact
The analysis step represents the culmination of the management process...
In an ideal world, all of the preceding steps in the management process have one by one provided the necessary information to allow for a straight-forward analysis. A Simple Model of IT Management, p. 87
Once the analysis is complete, the issue is resolved. The resulting actions should include checks that confirm the desired results have been achieved.
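The six problem-resolution steps listed above can be sketched as an ordered pipeline; the ticket structure and step names used here are an illustrative assumption, not part of the source model:

```python
# The six steps of the basic problem resolution process, in order.
STEPS = [
    "recognize", "identify", "isolate",
    "determine_fix", "implement_fix", "verify",
]

def advance(ticket):
    """Move a ticket to the next resolution step; once verification
    is the current step, the next advance marks it resolved."""
    i = STEPS.index(ticket["step"])
    if i + 1 < len(STEPS):
        ticket["step"] = STEPS[i + 1]
    else:
        ticket["resolved"] = True
    return ticket
```

The value of making the sequence explicit is that each step's output (what is wrong, what is causing it, what will fix it) becomes the input the next step needs, which is exactly the "ideal world" the quoted passage describes.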
Hewlett Packard Reference Model
Hewlett-Packard has developed its framework over more than a decade. The latest HP reference model is described in its white paper, The HP IT Service Management Reference Model. The model recognizes four primary functions interacting in a continuous loop to plan, design, implement, and operate an IT service. Supporting this cycle are quality assurance functions represented by Service Level, Configuration and Change Management. HP states that "the model provides a coherent representation of IT processes and a common language, making it useful in initiating a meaningful dialogue between all parties involved in IT process requirements and solutions."
The HP ITSM Reference Model's primary focus is on distributed environments. The model centers on datacenters (a primary HP business), but also addresses the lack-of-integration issues that are prevalent in existing, mainframe-centric process models.
The five primary components of the model areR:
Business-IT Alignment
The strategic processes contained in this quadrant involve aligning IT strategy with business goals and developing a service portfolio that provides excellent business value.
- IT business assessment - examines organizational markets for IT services, determines business needs, and then defines the business requirements that drive IT strategy and contribute to the corporate value chainN.
- IT strategy & architecture planning - evaluates the overall IT value proposition and, based on the findings of the business assessment, generates an IT strategy and IT architecture plan.
- Customer management - facilitates partnership between the IT service provider (including any outsourced services) and company LOB (lines-of-business) customers.
- Service planning - building on the outcomes of the IT strategy and architecture planning process, service planning seeks to ensure that new services are properly planned for and that the IT organization understands the risks associated with their delivery. Service planning also involves finding ways to maximize the ROI of new and existing services by leveraging them across multiple business units or customers.
Service Design and Management
Service design and management processes provide the detailed service information needed to design new services, manage the availability and quality of those services, and balance service quality with costs.
- Security management - defines, tracks, and controls the security of corporate information and servicesN. This process accounts for the implementation, control, and maintenance of the total security infrastructure. All services must adhere to strict corporate standards of information security.
- Continuity management - addresses the IT organization's ability to continue providing predetermined service levels to customers following a serious interruption to the businessN.
- Availability management - defines, tracks, and controls customer access to servicesN.
- Capacity management - defines, tracks, and controls service capacities to confirm that service workloads are ready to meet agreed-upon performance levelsN.
- Financial management - determines the cost of providing services and recovers these costs via charge allocation structuresN.
Service Development and Deployment
Service development and deployment processes allow you to build and test services and related infrastructure components, such as procedures, tools, hardware staging, software installation, application development, and training plans, according to service design specifications.
After a service and its components have been successfully built and tested, the service is deployed and integrated into a production environment, where it is tested again prior to final project signoff and release. These processes reduce service activation risks and minimize implementation costs.
- Service build & test - after a service specification is completed, the service build and test process develops and validates a functional version of a component, service function, or end-to-end service. As part of this process, the IT organization acquires or builds the necessary components, service functions (such as backup capability or Web functionality) and even end-to-end service solutions (such as SAP Financials). The process also enables you to test for adherence to security policies and guidelines as well as documenting instructions for replication and implementation of a production copy. Once assembled, the component, function, or end-to-end service is thoroughly tested. Service build and test interacts extensively with change management, configuration management, and release to production, as well as other processes in the model.
- Release to production - creates one or more production copies of a new or updated component, service function, or end-to-end service for a specific customer, based on a detailed production plan known as a master blueprint.
Service Operations
The processes identified under service operations work together to monitor, maintain, improve, and report on the IT services. These processes provide command and control capabilities, as well as continuous service improvement and support for the IT environment. They also help the organization maintain customer satisfaction by managing day-to-day IT customer service requests and confirming that service quality meets agreed-upon levels.
- Operations management - manages and performs the normal, day-to-day processing activities required for service delivery in accordance with agreed-upon service levelsN.
- Problem management - proactive focus on reducing the number of incidents in the production environment by addressing the root causes of closed incidents. Its activities, including ongoing trend analysis and error control, help ensure that the IT organization implements long-term solutionsN.
- Incident & service request management - timely restoration of service availability, minimizing service disruptions, and responding to customer needs. Reactive in nature, its activities focus on handling incidents in the infrastructure or those reported by users via efficient first-, second-, and third-level support service, as well as responding to service requests. A help desk that uses this process does more than simply handle service-related incidents - it also deals with requests for information and other types of administrative assistance. This process interacts frequently with change management and configuration management.
Service Delivery Assurance
The service delivery assurance process group resides in the center of the HP ITSM Reference Model because all other process groups revolve around this central hubN.
- Service-level management - defines, negotiates, monitors, reports, and controls customer-specific service levels within predefined standard service parameters. With a detailed service specification, the service-level management process seeks to determine measurable, attainable service-level objectives for potential customers - and enables organization commitment to meaningful SLAs. Both service planning and service-level management processes are dependent on the results of and interactions with other related IT processes.
- Change management - ensures that the IT organization uses standard methods and procedures for handling all production environment changes in order to minimize the impact of change-related problems on service quality. This process logs all significant changes to the enterprise environment, coordinates change-related work orders, prioritizes change requests, authorizes production changes, schedules resources, and assesses the risk and impact of all changes to the IT environment.
- Configuration management - centrally registers, tracks, and reports on each IT infrastructure component - known as configuration items (CI) - under configuration control. The process involves identifying CI attributes, CI status, and their relationships. This data is stored in a logical entity known as the Configuration Management Database (CMDB). Any other IT process that affects the infrastructure must interact with this processN.
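As a hedged sketch of these CMDB concepts, the following models configuration items with attributes, status, and dependency relationships, and derives the change-impact set that change management would consult. All CI names and field names are invented for illustration:

```python
# Minimal in-memory CMDB: each CI record holds attributes (type, status)
# and a set of relationships (the CIs it depends on).
cmdb = {}

def register_ci(name, ci_type, status="live"):
    cmdb[name] = {"type": ci_type, "status": status, "depends_on": set()}

def relate(ci, depends_on):
    """Record that one CI depends on another - the relationship data
    change management consults when assessing the impact of a change."""
    cmdb[ci]["depends_on"].add(depends_on)

def impacted_by(ci):
    """All CIs that depend, directly or transitively, on `ci`."""
    hit, frontier = set(), {ci}
    while frontier:
        frontier = {name for name, rec in cmdb.items()
                    if rec["depends_on"] & frontier} - hit
        hit |= frontier
    return hit

# Illustrative population (invented names).
register_ci("db-server", "hardware")
register_ci("orders-db", "database")
register_ci("order-entry-app", "application")
relate("orders-db", "db-server")
relate("order-entry-app", "orders-db")
```

The transitive lookup is the payoff of configuration control: a proposed change to the invented "db-server" above is immediately seen to threaten both the database and the application built on it.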
Microsoft Operations Framework
Microsoft has its own version of ITIL, which it delivers as the Microsoft Operations Framework (MOF).
"In contrast to the descriptive ITIL approach, the MOF approach is prescriptive, promoting continuous improvement of IT service management capabilities throughout the IT life cycle. IT organizations are ideally in a constant state of improvement. To assist in achieving this ongoing development, MOF provides prescriptive, process-driven tools and best practices through a growing number of specific service management functions. By combining MOF with Microsoft Solutions Framework (MSF),1 organizations can implement an end-to-end framework to manage their infrastructures—from planning and building through operations and support." Microsoft Technet web site
- Supporting Quadrant - Resolve incidents, service requests, and other end-user problems in a timely and effective way. Includes the SMFs required to identify, assign, diagnose, track, and resolve incidents, problems, and service requests within the bounds of SLAs.
- Changing Quadrant - Effectively and quickly introduce approved changes into the IT environment with minimal disruption of service. Includes the SMFs required to identify, review, approve, and incorporate change into a managed IT environment.
- Operating Quadrant - Execute day-to-day operations tasks, both manual and automated, in a highly predictable and reliable way. Includes the SMFs that help an IT organization achieve and maintain its service level commitments.
- Optimizing Quadrant - Drive changes that optimize cost, performance, capacity, and availability while delivering IT services. Includes the SMFs that help an IT organization review outages and incidents, examine cost structures, assess staff performance, conduct systems availability and performance analysis, and forecast system capacity needs.
Microsoft presents a brief comparison of MOF to ITIL:
Planning to Implement Service Management
ITIL:
- Business continuity management, partnerships and outsourcing, surviving change, and transformation of business practices through radical change
- Looks at IT in business terms as a means of improving services and reducing costs
- Includes cross-organizational integration with IT services and decision-making governance
MOF:
- Continuous Improvement Roadmap (CIR) applies business perspectives to IT as a strategic asset
- Helps companies assess current service management and form a Service Improvement Program (SIP) based on business value
- Changing Quadrant highlights best practices for planning and managing change
- MOF Team Model defines roles and responsibilities for a transparent decision-making process

Business Perspective
ITIL:
- Planning the steps required to implement or improve IT service provision
MOF:
- Changing Quadrant addresses implementation planning
- MSF provides project implementation guidance

Service Management
ITIL:
- The management of services to meet the customer’s requirements
- Includes performance management, service acquisition management, and service provision management
- Also contains the topic areas of Service Support and Service Delivery
MOF:
- Applies universal service management themes to the specific operational needs of the Microsoft platform
- Embodies the service management framework for Microsoft products
- Divides service management into four functional quadrants and 21 service management functions spanning the entire service life cycle

Service Support
ITIL:
- Service desk, incident management, problem management, configuration management, change management, release management, and the necessary interactions between these and other core IT service management disciplines
- Updates best practice to reflect recent changes in technology and business practices
MOF:
- Supporting Quadrant addresses service support with service desk, incident management, and problem management guidance
- Related service management functions encompass those listed in the ITIL definition of service support

Service Delivery
ITIL:
- Service level management, financial management for IT services, IT service continuity management, availability management, contingency planning, and capacity management
MOF:
- Addressed within the MOF Process Model and in the Optimizing Quadrant
- Adds security management, workforce management, and infrastructure engineering
- Expands service delivery into the Operating Quadrant to include guidance for system administration, security administration, service monitoring and control, directory services administration, network administration, storage management, and job scheduling

Security Management
ITIL:
- The process of security management within IT service management
- Focuses on implementing security requirements identified in the IT service level agreement (SLA)
- Does not address business aspects of security policy
MOF:
- Optimizing Quadrant addresses the areas defined in ITIL Security Management
- Expands security administration into the Operating Quadrant to address issues surrounding data access, data management and integrity, and user permissions

ICT Infrastructure Management
ITIL:
- Network service management, operations management, management of local processors, computer installation and acceptance, and systems management
MOF:
- Optimizing Quadrant incorporates infrastructure engineering guidance
- Windows Server System Reference Architecture (WSSRA) provides architectural guidance and blueprints, addresses dependencies between infrastructure components, and enables systems architects to design for operations
- Microsoft provides management and infrastructure solutions to deploy Microsoft products in adherence to WSSRA, MSF, and MOF principles

Applications Management
ITIL:
- The software development life cycle
- Provides details on business change, emphasizing clear requirement definitions and implementation to meet business needs
MOF:
- MSF includes project life cycle guidance on software development and deployment projects
- Changing Quadrant provides guidance through the change management, configuration management, and release management functions
Service Management Functions
Each of the SMFs within a particular quadrant shares a common service mission or goal. Many SMFs are based on ITIL. The notable exceptions are workforce management (in the Optimizing Quadrant) and all SMFs in the Operating Quadrant, which ITIL does not address.
These additional elements demonstrate that, at least from a framework perspective, MOF presents a more useful depiction or model of IT service operations than does ITIL:
- it contains additional elements which present a more complete rendering of the IT service environment;
- the quadrant concept reflects a basic acknowledgement of organizational maturity, with the "optimizing" processes encompassing the ITIL Service Delivery components - clearly requiring relatively mature organizations - and presupposing prior experience with the "changing" and "operating" SMFs;
- the collection and definition of milestones and/or reviews represents a refinement over the treatment of these processes/functions within ITIL;
- MOF presents a better rationalization of sub-processes and roles in related process areas such as (1) Change and Release, (2) Availability, Service Continuity and Service Level Management, and (3) Financial (budgeting) and Capacity.
However, the heavy reliance on process detail in the MOF SMF descriptions often comes at the expense of conceptual clarity. These process-oriented descriptions frequently reference key concepts (e.g., DSL, Capacity Plan) that are not explained as well as they are in ITIL: they are referenced (and defined in a Definitions section), but their importance and usage is all too often left unclear.
Changing Quadrant
Describes processes, responsibilities, reviews, and best practices that help organizations manage changes to their IT infrastructure. Through classification of change types, the appropriate assignment of authorization responsibilities, and a consistent change management and release process, organizations following MOF best practices reduce incompatible or conflicting changes and streamline their release efforts.
SMFs:
- Change Management - describes a consistent set of processes to initiate infrastructure changes, assess and document their potential impacts, approve their implementation, and schedule and review their deployment.
- Configuration Management - a key principle in effectively managing an IT infrastructure is to document its components and the relationships between them. The Configuration Management SMF provides the foundation for decision-making in the Changing Quadrant, negotiating Service Level Agreements, assessing IT capacity, and other critical processes.
- Release Management - an effective release management process creates a bridge between development or acquisition of new services and the IT organization responsible for operating them. The Release Management SMF coordinates efforts to deploy services and applications into a managed environment.
Milestones/Reviews:
- Change Initiation Review - the first formalized opportunity for operations and production stakeholders to review proposed changes to the IT infrastructure. Closely aligned with the Microsoft Solutions Framework planning activities, this review facilitates the alignment of proposed changes with current IT standards and policies.
- Release Readiness Review - the final check given to a developed application or service prior to deployment. IT stakeholders take this opportunity to verify that the release meets its stated objectives, fulfills the design criteria and requirements, and can be safely released into the production environment with low risk of failure or incompatibilities.

Operating Quadrant
The collection of processes and IT functions dedicated to the ongoing maintenance, monitoring, control, and protection of IT infrastructure assets. Efficient implementation of MOF best practices in this quadrant enables IT organizations to move beyond simple infrastructure maintenance, such as patch management or backup-and-restore, to proactive measures that help optimize for better performance.
SMFs:
- Directory Services Administration - provides processes and best practices for the routine management of the directory systems used to locate users, files, services, and servers. This service is crucial to effective use of a distributed infrastructure.
- Job Scheduling - handles the sequencing of various batch jobs and other workloads (printing, database, backups, and others) for optimal use of network resources.
- Network Administration - defines and delivers the processes and procedures required to operate basic network services, including Dynamic Host Configuration Protocol, Windows Internet Name Service, and Domain Name System, on a day-to-day basis.
- Security Administration - deals with the daily, routine application of security policies and best practices to maintain a secure operating environment.
- Service Monitoring and Control - observing the health of the operating environment is key to making rational decisions for maintenance, optimization, risk mitigation, and proposed changes. Service Monitoring and Control provides best practices for monitoring and resolving incidents and alerts in the production environment.
- Storage Management - the most crucial investment that an organization has in its infrastructure is the data stored in it. Storage Management is the set of practices dedicated to safe, secure storage of data, effective backup-and-restore policies, and efficient use of storage resources to optimize the business’s investment in physical storage components.
- System Administration - the “glue” that binds services together within the Operating Quadrant, System Administration is responsible for managing a variety of services, with varying levels of control. These include crucial services such as messaging, databases, operating systems, Internet, and telecommunications.
Milestones/Reviews:
- Operations Review - an ongoing Operations Review process gives IT managers the opportunity to review service management processes and their performance and capabilities. Assessments of various Service Operating Level Agreements at these reviews are used as the basis for negotiating Service Level Agreements with service customers. The reviews also provide a quality check on operating practices, to assure that daily activities are properly documented in the organization’s knowledge management system.

Supporting Quadrant
Activities and processes performed to resolve user and system-generated queries, issues, or problems are in the domain of the Supporting Quadrant. It contains those processes and practices required to fully support the efficient use of an IT infrastructure. Specific team role clusters from the MOF Team Model focus their activities on accomplishing the functions defined within the quadrant.
SMFs:
- Incident Management - a critical process that provides organizations with the ability to first detect incidents and then to target the correct support resources in order to resolve the incidents as quickly as possible.
- Problem Management - by implementing Problem Management processes at the same time as Incident Management processes, organizations can identify and resolve the root causes of any significant or recurring incidents, thus reducing the likelihood of recurrence.
- Service Desk - the Service Desk is the first point of contact for the company; its efficient and effective response to customers’ problems and concerns can do much to enhance the reputation of the company.
Milestones/Reviews:
- Service Level Agreement Review - provides IT and the customers it serves with the opportunity to examine current service level commitments. Often, service requirements evolve over time, or new IT capabilities may allow beneficial enhancements to a service. This regular review helps IT organizations stay well-aligned with the business.

Optimizing Quadrant
Encompasses processes and IT functions dedicated to planning and implementing enhancements to the IT environment through a continuous cycle of process improvement. As organizations become more mature and capable in their service management, SMFs in the Optimizing Quadrant assure tighter alignment of operations with business needs and longer-term business strategies. Recommendations spawned in the Optimizing Quadrant are generally reflected as changes in the IT infrastructure, and are instituted through the change management process.
SMFs:
- Availability Management - availability of IT services to users is one of the most critical of management functions. The Availability SMF describes processes and best practices to ensure that services achieve their service level agreements for availability.
- Capacity Management - the flow of information through an organization is dependent on many key performance factors. Capacity management works to optimize capacity and improve system performance through planning, sizing, and controlling network resources as efficiently as possible.
- Financial Management - defines IT service budgeting processes, but also provides guidance for service charge-backs, accounting, and decommissioning.
- Infrastructure Engineering - consistent standards across an IT organization improve interoperability, reduce risk of deployment failures, and facilitate governance. The Infrastructure Engineering SMF provides guidance for collecting, creating, and managing standards and policies for IT services and infrastructure.
- IT Service Continuity Management - major IT outages occur outside the realm of availability and incident management. IT Service Continuity provides best practices and guidance to support business continuity through the implementation of effective IT service recovery procedures.
- Security Management - defines and communicates the IT organization’s security plans, policies, guidelines, and the relevant regulations that mandate them. It works in concert with Security Administration, which implements these policies, to secure corporate information and assets by controlling access, confidentiality, and authorization.
- Service Level Management - IT services reflect a formalized commitment to provide negotiated levels of performance. Service Level Management provides a structured process for business users and IT service providers to discuss the service levels needed and assess their current performance.
- Workforce Management - skilled IT personnel are crucial to the evolution of an efficient IT organization. Their recruitment, training, readiness, compensation, and retention are discussed in this SMF.
Milestones/Reviews:
- SLA Review - assesses the effectiveness of IT operations in delivering services to meet negotiated performance metrics. This review is complementary to the Operations Review, and provides necessary input for creation and management of Service Level Agreements through the Service Level Management SMF.
- Change Initiation Review - the first formalized opportunity for operations and production stakeholders to review proposed changes to the IT infrastructure. Closely aligned with the Microsoft Solutions Framework planning activities, this review facilitates the alignment of proposed changes with current IT standards and policies.
Application Development Models
Projects are the way that most new work gets delivered. Best practice suggests the use of a structured project management methodology to guide the introduction of new systems, approaches, and processes - service improvement initiatives, ITIL best practices included. There are three main aspects to project management:
- Knowledge areas - groupings of concepts germane to projects. The Project Management Body of Knowledge (PMBOK) is widely recognized as the source of best practicesR.
- Project processes - projects can be managed using a common set of project management processes, regardless of the type of project. All projects should be defined and planned, and all projects should manage scope, risk, quality, status, etc.
- Lifecycle models - just as there are common project management processes to manage most projects, there are also common models that can provide guidance on how to define the project lifecycle. These common models are valuable since they save project teams the time associated with creating the project workplan from scratch each time.
Application development models are tailored versions of lifecycle models: the principles enunciated in PMBOK are combined into a lifecycle model with the goal of producing quality systems on time and on budget. A lifecycle model might be one of (or be combined or derived from) the following:
Lifecycle Model | Description | Comments
|
Waterfall | Uses milestones as transition and assessment points with each set of tasks being completed before the next phase begins. | The waterfall works best for projects where it is feasible to clearly delineate a fixed set of
unchanging project requirements at the start. Fixed transition points between phases
facilitate schedule tracking and assignment of responsibilities and accountability.
|
Spiral | The spiral is a risk-reduction oriented model that breaks a software project up into mini-projects, each addressing one or more major risks. The model focuses on the continual need to refine the requirements
and estimates for a project. | Early iterations of a project are often the cheapest, enabling the highest risks to be addressed at the lowest total cost, and, each iteration of the spiral can be tailored to suit the needs of the project. However, the many iterations can easily result in great complication requiring attentive and knowledgeable management for success.
|
Modified Waterfall | Uses the same phases as the pure waterfall, but is not done on a discontinuous basis. This enables the phases to overlap when needed | Strengths- More flexibility than the pure waterfall model.
- If there is personnel continuity between the phases, documentation can be substantially reduced.
- Implementation of easy areas do not need to wait for the hard ones.
Weaknesses- Milestones are more ambiguous than for the pure waterfall.
- Activities performed in parallel are subject to miscommunication and mistaken assumptions.
- Unforseen interdependencies can create problems
Risk reduction spirals can be added to the top of the waterfall to reduce risks prior to the waterfall phases. The waterfall can be further modified using options such as prototyping, JADs or CRC sessions or other methods of requirements gathering done in overlapping phases.
|
Evolutionary Prototyping | Uses multiple iterations of requirements gathering and analysis, design and prototype development. After each iteration, the result is analyzed by the customer. Their response creates the next level of requirements and defines the next iteration. | Strengths- Customers can see steady progress.
- This is useful when requirements are changing rapidly, when the customer is reluctant to commit to a set of requirements, or when no one fully understands the application area.
Weaknesses- It is impossible to know at the outset of the project how long it will take.
- There is no way to know the number of iterations that will be required.
|
Staged Delivery | Although the early phases cover the deliverables of the pure waterfall, the design is broken into deliverable stages for detailed design, coding, testing and deployment. | Strength- Can put useful functionality into the hands of customers earlier than if the product were delivered at the end of the project.
Weakness- Doesn't work well without careful planning at both management and technical levels.
|
Evolutionary Delivery | Straddles evolutionary prototyping and staged delivery. | Strength- Enables customers to refine the interface while the architectural structure remains as planned.
Weakness- Doesn't work well without careful planning at both management and technical levels.
|
Design-to-Schedule | Like staged delivery, design-to-schedule is a staged release model. However, the number of stages to be accomplished is not known at the outset of the project. | Strength- Produces date-driven functionality, ensuring there is a product at the critical date.
- Covers for highly suspect estimates.
Weakness- Won't be able to predict the full range of functionality.
|
Design-to-Tools | The capability goes into a product only if it is directly supported by existing software tools. If it isn't supported, it gets left out. Besides architectural and functional packages, these tools can be code and class libraries, code generators, rapid-development languages and any other software tools that dramatically reduce implementation time. | Strength- When time is a constraint, more total functionality may be implemented than would be possible building everything "from scratch".
Weakness- You lose a lot of control over the product.
- You may become "locked in" to a vendor. If it is for long-term functionality, vendor lock-in can become a weak link.
- May not be able to implement all features desired or in the manner desired.
|
Off-the-Shelf | Following requirements definition, analysis must be done to compare the package to the business, functional and architectural requirements. | Strength- Available immediately and usually at far lower cost.
Weakness- Will rarely satisfy all system requirements.
|
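The design-to-schedule idea above can be reduced to a short sketch: take features in priority order and include each one only while the fixed, date-driven schedule budget lasts. The feature names, priorities and effort estimates below are entirely hypothetical, used only to make the selection rule concrete.

```python
# Hypothetical sketch of design-to-schedule feature selection:
# features are taken in priority order and included only while
# the fixed schedule budget (in days) can accommodate them.

def plan_to_schedule(features, budget_days):
    """features: list of (name, priority, effort_days); lower priority number = more important."""
    selected = []
    remaining = budget_days
    for name, priority, effort in sorted(features, key=lambda f: f[1]):
        if effort <= remaining:
            selected.append(name)
            remaining -= effort
    return selected

features = [
    ("core ledger", 1, 20),
    ("reporting", 2, 15),
    ("custom themes", 3, 10),
    ("audit export", 2, 10),
]
print(plan_to_schedule(features, 40))  # → ['core ledger', 'reporting']
```

Whatever has been completed when the critical date arrives is the product, which is why the model "covers for highly suspect estimates" but cannot promise the full feature range.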
The initial artifacts of project management focus on defining the work and building a workplan for executing the project using one or more of the lifecycle models. Even if the organization has great project management processes in place, it will still need to select models for the lifecycle.
System Development Lifecycle (SDLC)
The original and most common lifecycle model is the System Development Lifecycle (SDLC). The model can take any one of the above forms: waterfall, spiral, and so on.
"The system development life cycle is a process, involving multiple stages, used to convert a management need into an application system, which is custom-developed or purchased or is a combination of both." IS Auditing Guideline - System Development Life Cycle (SDLC), Review Document G23, ISACA
|
|
The Systems Development Life Cycle (SDLC) was developed to better ensure that computer systems being delivered satisfied user requirements and were developed within the established budget and specified timelines. The SDLC is a methodology for designing and implementing systems in a methodical, logical, step-by-step approach.
The SDLC for an application system often depends on the chosen acquisition/development mode. Application systems could
be acquired/developed through various modes, including custom development using internal resources, custom development using fully or partly outsourced resources located onsite or offsite, vendor software packages implemented as-is with no customisation, and, vendor software packages customised to meet the specific requirements. At times, large complex applications may involve a combination of these options.
An organization may use specific SDLC methodologies and processes, either custom- or vendor-developed, and these generally
prescribe standard processes for different modes of acquisition with the facility to customize the process design for specific application systems. These may be supported by appropriate tools to manage the SDLC. In such cases, the SDLC would
depend on a methodology tool. Where an application system is developed instead of being purchased as a package, the SDLC would depend on the development methodology used, such as waterfall development, prototyping, rapid application development, CASE and object-oriented development.
|
System Development Lifecycle Phases
Stage | Description | Deliverables
|
Preliminary Investigation |
The purpose of this stage is to verify that a problem or deficiency really exists, or to pass judgment on the new requirement. This
phase is typically very short, usually not more than a day or two for a big project, and in
some instances it can be as little as two hours.
The end result, or deliverable, from the Preliminary Investigation phase is either a
willingness to proceed further, or the decision to 'call it quits'. There are three factors,
typically called constraints, which result in a 'go' or 'no-go' decision:
- Technical - The project can't be completed with the technology currently in
existence. This constraint is typified by Leonardo da Vinci's inability to build
a helicopter even though he is credited with designing one in the 15th
century. Technological constraints made the construction of the helicopter
impossible.
- Time - The project can be completed, but not in time to satisfy the user's
requirements. This is a frequent reason for the abandonment of the project
after the Preliminary Investigation phase.
- Budgetary - The project can be completed, and completed on time to satisfy
the user's requirements, but the cost is prohibitive.
|
- Information Service Request (ISR)
- Feasibility Study
- Time Accounting Number ISR Number/Project Proposal
- Contacts List & Time Accounting Tasks
- Scope Statement / Vision Statement
- Project Definition Work Sheet
- Flow Diagram
- Logical Context Diagram
- Information Request Work Sheet
- Project Proposal Work Sheet with Funding/Budget & Personnel Resources included. (Automation Plan Request)
- Project Initiation Worksheet
|
Systems Analysis |
Sometimes called the Data Gathering Phase; in this stage the suggestion,
deficiency or new requirement is studied in detail. Depending upon the size of the project being
undertaken, this phase could be as short as the Preliminary Investigation, or it could take
considerable time. This phase should be completed before any actual programming. At the end of this phase, the Requirements Statement should be in developmentN:
|
- Context diagram
- System Flow diagram or Grid Flow diagram
- Interview Preparation Work Sheet
- Requirements Definition Document
- Business Functional Process (BFP) Candidate List, Parking Lot list of Assumptions and Questions
- Functional Specification
- Business Functional Process Script
- Business Functional Process Logic Diagram
- Object Definition
- Occurrence Table
- Business Rules
- Business Process Map
- Project Schedule
- Signature of Sponsor
|
Systems Design | Most programs are designed by first determining the output of the program. The reasoning
here is that if you know what the output of the program should be, you can determine the
input needed to produce that output more easily. Once you know both the output from, and
the input to the program, you can then determine what processing needs to be performed
to convert the input to output. You will also be in a position to consider what information
needs to be saved, and in what sort of file.
While doing the Output and Input designs, more information will be available to add to
the Requirements Statement. It is also possible that a first screen design will take shape at
the end of these designs, and a sketch will be made of what the screen will look like. |
- System Flow diagram
- Physical diagram
- Entity Relationship Diagram
- Data Dictionary
- System Flow diagram/Specifications
- Quotes, updated Project Schedule.
- Design Approval
- Updated Physical Design
- Updated System Flows, Management Approval
- Requirements Traceability Matrix
|
Systems Development | Examination and re-examination of the Requirements Definition Statement
is needed to ensure that it is being followed to the letter. Any deviations would usually
have to be approved either by the project leader or by the customer. Changes are applied to the Requirements Traceability Matrix.
The Development phase is often split into two sections, that of Prototyping and Production
Readiness. Prototyping is the stage of the Development phase that
produces a pseudo-complete application, which for all intents and purposes appears to be
fully functional.
Developers use this stage to demo the application to the customer as another check that the
final software solution answers the problem posed. When they are given the OK from the
customer, the final version code is written into this shell to complete the phase. |
- Style Sheets, Images, Logos
- System Access Privilege Request (SAPR) forms and Request for Development space if Web application
- Technical Specifications, Updated Schedule
- Updated Entity Relationship Diagram (ERD)
- Technical specifications
- Program Source Code, Database Stored Procedures, Database Triggers, File Folders with Code, Before/After Tests
- Testing results/Test Plan
- User's Testing Plan
- Testing Plan results/System Test Plan
- Successful Load Test Results
- forms
- User Training Plan
- Infrastructure
- Updated User Testing Plan/Test Results
- User Signature on User Acceptance Document
- Operations Turnover Instructions, Run Book
- Managerial Sign-off
|
Systems Implementation | Any hardware that has been purchased will
be delivered and installed. Software, which was designed in the System Design Phase, and programmed in
Systems Development phase of the SDLC, will be installed on any PCs that require it. Anyone who will be
using the program will also be trained during this phase of the SDLC. During the Implementation phase, both the hardware and the software are tested. Although
the programmer will find and fix many problems, almost invariably, the user will uncover
problems that the developer has been unable to simulate. |
- Implementation Plan Including Go-Live Checklist
- Package deployment documentation
- Change Request
- Migrate Request/Developer, Verification of program in staging
- Client signature
- Migrate Request/Developer, Verification of program/system in staging
- updated ISR/ISR Ratings
- Build Book
|
Systems Maintenance | In this phase someone (usually the
client, but sometimes a third party such as an auditor) studies the implemented system to
ensure that it actually fulfills the Requirements Statement. Most important, the system
should have solved the problem or deficiency, or satisfied the desire that was identified in
the Investigation Phase.
The Maintenance portion of this
phase deals with any changes that need to be made to the system.
Changes are sometimes the result of the system not completely fulfilling its original
requirements, but they could also be the result of customer satisfaction. Sometimes the
customer is so happy with what they have got that they want more. Changes can also be
forced upon the system because of governmental regulations, such as changing tax laws,
while at other times changes come about due to alterations in the business rules of the
customer. |
- Support matrices
- Maintenance requirements
- Service level definitions
- Transitional requirements
|
This is the most basic SDLC process. There are many variations in the number and titling of stages. The following two adaptations represent key "tailorings" which have particular relevance in the context of IT service management improvement undertakings.
ITIL Application Lifecycle
|
The Application Lifecycle is a derivation of the System Development Lifecycle. The correspondence is detailed below.
SDLC - Application Lifecycle Model Comparison
SDLC | Application Lifecycle
| Preliminary Investigation | Requirements
| Systems Analysis | Requirements
| Systems Design | Design
| Systems Development | Build
| Systems Implementation | Deploy
| Systems Maintenance | Operate
| Iterate | Optimize
|
The Application Lifecycle shifts the continuum of the SDLC beyond system development into the application's operational sphere. These later stages (Deploy, Operate and Optimize) begin to impinge upon ITIL Service Support and Delivery processes - most prominently, Release and Change Management for Deployment, ICT Infrastructure Management for Operate and Application and Capacity Management for Optimize.
|
Requirements
The phase during which the development team works closely with key business decision-makers to determine organizational requirements for the application. Functionality, performance levels, and other characteristics of the application are stated. The requirements developed in this phase serve as a foundation for the remaining phases of the development process, and as the acceptance criteriaN.
Considerations
- Functional requirements - the things an application is intended to do, and can be expressed as services, tasks or functions the application is required to perform
- Non-functional requirements - used to define requirements and constraints on the IT system and serve as a basis for early system sizing and estimates of cost, and can support the assessment of the viability of the proposed IT system
- Usability requirements - ensure that the system meets the expectations of its Users with regard to its ease of use
- Change cases - specify expected future application functionalityN
- Testing requirements - testing requirements against developed criteria for acceptance
In terms of CMMI this phase covers both Requirements Definition and Requirements Management functions. The Requirements Management process takes the defined requirements and manages them over the life of the project. One of the key tools for doing this is a requirements traceability matrix.
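One minimal way to picture a requirements traceability matrix is as a mapping from each requirement to the design elements and test cases that trace back to it; a coverage gap is then simply a requirement with an empty entry. The requirement, design and test identifiers below are hypothetical.

```python
# A minimal, hypothetical requirements traceability matrix:
# each requirement ID maps to the design artifacts and test
# cases that trace back to it over the life of the project.
traceability = {
    "REQ-001": {"design": ["DES-010"], "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"design": ["DES-011"], "tests": []},
    "REQ-003": {"design": [], "tests": []},
}

def coverage_gaps(matrix):
    """Requirements with no design element or no test tracing to them."""
    return sorted(
        req for req, links in matrix.items()
        if not links["design"] or not links["tests"]
    )

print(coverage_gaps(traceability))  # → ['REQ-002', 'REQ-003']
```

In practice the matrix is held in a requirements management tool or spreadsheet, but the check is the same: every requirement should be reachable from at least one design element and one test.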
Design
This phase ensures that an application is conceived with proper functionality and giving appropriate acknowledgement to the need for management of the application. This phase takes the outputs from the requirements phase and turns them into the specification that will be used to build the application. A key element in this is tracking the requirements using a traceability matrix.
Considerations
- Ensure Proper Consideration for Non Functional requirements - giving non-functional requirements a level of importance similar to that for the functional requirements, and including them as a mandatory part of the design phaseN
- Risk-driven scheduling - Scheduling assigns higher-risk tasks a high priority and includes risk priorities that are assigned to meet customer requirements.
- Managing trade-offs - Proper balancing of the relationship among resources, the project schedule, and those features that need to be included in the application for the sake of quality
- Design management checklist - Testing the design against the high-level functional requirements for the organization and any special non-functional requirements that have been identified for the application
- Testing the requirements - Design actions need to comply with the functional requirements
- Change and Configuration Management - provide advice to the development team on how the application can move from development/test to the live production environment
This stage is comparable to the CMMI Engineering Phase - Technical Solution.
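The risk-driven scheduling consideration above amounts to ordering the work so that the riskiest tasks are attempted, and if necessary re-planned, first. A toy sketch follows; the task names and risk scores are hypothetical.

```python
# Hypothetical sketch of risk-driven scheduling: tasks with a
# higher risk score are scheduled earlier, so the riskiest work
# surfaces problems while there is still time to react.
tasks = [
    ("write user manual", 1),
    ("integrate legacy billing API", 9),
    ("build login screen", 3),
    ("migrate customer database", 7),
]

schedule = [name for name, risk in sorted(tasks, key=lambda t: -t[1])]
print(schedule)
```

A real schedule would also weigh dependencies and customer priorities, but the principle of front-loading risk is the same.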
Build
Once the design phase is completed, the application development team uses the designs to build and test the application. This phase continues to address the non-functional aspects of the design (responsiveness, availability, security) in order to reduce later re-work to accommodate these considerations.
Considerations
- Coding Conventions - Standardized structure and coding style of an application (preferably also amongst applications used by the organization) so that everyone can easily read, understand, and manage the application development process
- Development Tools - Rather than creating all the pieces of an application from scratch, developers can customize an existing template. They can also reuse custom components in multiple applications by creating their own templates.
- Embedded Application Instrumentation - Incorporating the application's instrumentation, including performance reporting, into the drivers and executables in a way that is efficient and easy to implement.
- Operability testing - As the application is built, it is tested to ensure that it meets all the stated requirements and features that the business has requestedN
- Assemble a Build Team - The Team which will move the code from the development to the production environment.
- Change and Configuration Management (CCM) - Provide advice to developers and the Build Team on how to build the hooks and facilities into an application so that it conforms to the required Change and Configuration Management standards in use
This process is covered in CMMI by the Engineering - Product Integration processN. This CMMI function, however, would also cover part of the following Deploy process.
Additionally, this function is beginning to overlap with aspects of ITIL Service Support - Release Management. Section 9.6.2, Designing, building and configuring a release, discusses elements in this phase - "Procedures should be planned and documented for building software Releases, reusing standard procedures where possible".
Deploy
Once assembled, the application needs to be moved into the organization's production environment. From this stage on there is overlap with Service Support functions, because the processes begin to touch the production environment, which ITIL considers the defining line between development and operations and hence between application management and service support processes. The following table highlights process similarities between the Application Lifecycle and Release/Change Management.
Application Lifecycle | Release/Change Management
|
5.5.2 Planning the deployment | Release Management - 9.6.1 Release planning
|
5.5.3 Approving the deployment | Change Management - 8.5.7 Change approval
|
5.5.4 Distributing applications | Release Management - 9.6.6 Distribution and installations
|
Considerations
- Planning the deployment - use standards and guidelines for deployment that are tailored to individual situations and organizations
- Approving the deployment - Obtaining approval from Change Management to implement the change to the production environment.
- Distribution and installations - Considerations include packaging, deployment, flexible software distribution targeting, deployment push functionality, pull technologies, patching, feedback and reporting and back-out provisioning.
- Pilot deployments - A pilot deployment is a controlled test of the system-wide deployment, using a small subset of the production system. The pilot project does not need to be a complete test of all system functionality, but it should test enough of the system to determine whether the chosen design will work well in the production environment.
- Deployment management checklist - The deployment of the application should be built and tested against the high-level manageability requirements for the organization and any special management requirements that have been identified for the application
Checklists conform to the CMMI Verification processes.
Operate
This phase covers the actions that support ongoing operation of the application. Failure to perform a simple task, such as monitoring the available disk space on a server, could result in the server running out of room and causing the application to fail.
An SLA documents the client's service level expectations. Typically it cites the availability and performance requirements of the business. Measurement of the application's performance during its operation provides data regarding stability and break/fix requirements, and provides management with data on overall quality.
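The disk-space example is easy to automate with the standard library; a minimal sketch follows. The 80% threshold is an assumed local policy, not anything prescribed by ITIL.

```python
import shutil

def disk_usage_alert(path, threshold=0.80):
    """Return an alert string when the filesystem holding `path`
    is fuller than `threshold` (a fraction), else None."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free
    fraction_used = usage.used / usage.total
    if fraction_used > threshold:
        return f"ALERT: {path} is {fraction_used:.0%} full"
    return None

# A routine operational check; in practice this would run on a
# schedule and feed the monitoring/event management system.
print(disk_usage_alert("/"))
```

The point is not the specific check but that routine operational tasks can, and should, be reduced to scheduled, automated measurements against agreed thresholds.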
Considerations
- Regular maintenance activities - Preventative maintenance and the avoidance of costly downtimeN
- Application state - restoring the application to a known and trusted stateN
- Benefits Realization - An assessment of the benefits provided by the application compared with those that it was designed to deliverN
- Operations management checklist - operational comparison against the high-level manageability requirements for the organization and any special management requirements that have been identified for the application
Checklists conform to the CMMI Verification processes.
Optimize (Baseline)
The use and performance of an application should be reviewed periodically to ascertain whether it continues to meet the business and technical requirements of the infrastructure. The review process can be initiated by:
- A designated review time - a fixed period of time has passed since the last review
- Application Faults - Problem Management has identified issues with the current version that require modifications
- Business changes - business requirements are changing and the use of the current application needs to be reviewed against these new requirements
- Infrastructure changes - the technical infrastructure that the application relies on has changed and the application needs re-tooling
Alternatively, when an application is identified as no longer being required, the assessment would generate a retirement proposal. This proposal should provide a road map of how the application will be removed from live service.
Microsoft Solution Framework
The Microsoft Solutions Framework is Microsoft's project management framework, created to support the release of its own product line. While it originated from Microsoft's application lifecycle model, it has evolved to combine the principles of other process models. Microsoft contends it may be applied to any project type as a "phase-based, milestone-driven, and iterative model".
MSF guidance for different project types focuses on managing the "people and process," as well as the technology elements that most projects encounter. The needs and practices of technology teams evolve constantly; to this end, the materials gathered into MSF also change continually, expanding to meet ever-growing needs. The model follows the development of a solution from its inception to full deployment. Each phase culminates in an
externally visible milestone.
MSF interacts with Microsoft Operations Framework (MOF) to provide a transition to the operational environment. Since so much of MOF is highly similar to ITIL this means the MSF provides a smooth transition to ITIL best practices as well.
Envisioning Phase
This phase focuses upon the creation of a common team vision, helping to ensure a common understanding of project goals and creating a motivational platform for both the team and the customer. Envisioning, by creating a
high-level view of the project’s goals and constraints, provides a venue for planning and helps create a more formal planning process that will take place
during the project’s planning phase.
The primary activities accomplished during envisioning are the formation of the core
team and the preparation and delivery of a vision/scope document.
The documentation of the project vision and the identification of the project scope are
distinct activities; both are required for a successful project. Vision is an unbounded
view of what a solution may be. Scope identifies the part(s) of the vision that can be
accomplished within the project constraints.
Risk management is a recurring process that continues throughout the project. During
the envisioning phase, the team prepares a risk document and presents the top risks
along with the vision/scope document. For more information, see the MSF Risk
Management Discipline white paper.
During the envisioning phase, business requirements must be identified and analyzed.
These are refined more rigorously during the planning phase.
The primary team role driving the envisioning phase is the product management role.
The stage finishes with an approved vision/scope. At this point,
the project team and the customer have agreed on the overall direction for the project, as
well as which features the solution will and will not include, and a general timetable for
delivery.
Deliverables
- Vision/Scope Document - defines a clear direction for the project team, sets expectations, and provides the criteria for designing and deploying the solution. There are four content elements in the Vision/Scope Document, which address the what, where, when, why, who, and how of the project: the problem statement (or statement of objectives), vision statement, user profiles, and solution concept.
- Risk Management Plan - ascertains the impact of the consequence by determining the likelihood of its occurrence and the severity of the outcome relative to established project objectives.
- Project Structure Document - defines how the project will be managed and supported, and the administrative structure for the project team going into the Planning Phase.
- Next Phase Estimate - documents what comes next
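The likelihood-and-severity assessment described for the Risk Management Plan is commonly reduced to a risk exposure score (probability times impact), which is how a vision/scope-stage top-risks list is usually ordered. The risks, probabilities and impact scores below are hypothetical.

```python
# Hypothetical risk ranking: exposure = probability x impact,
# used to order the top-risks list presented with vision/scope.
risks = [
    {"risk": "key vendor slips delivery", "probability": 0.4, "impact": 8},
    {"risk": "requirements churn",        "probability": 0.7, "impact": 5},
    {"risk": "test lab unavailable",      "probability": 0.2, "impact": 6},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]

top_risks = sorted(risks, key=lambda r: -r["exposure"])
for r in top_risks:
    print(f'{r["risk"]}: exposure {r["exposure"]:.1f}')
```

Each top risk would then carry a mitigation strategy, and the list is re-assessed at each milestone as the MSF risk management discipline describes.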
Planning Phase
During this phase the functional specifications, design processes, work plans, cost estimates, and schedules are all prepared for the various deliverables.
Early in the planning phase, the team analyzes and documents requirements in a matrix. Requirements fall into four broad requirement categories:
- business
- user
- operations, and
- system (those of the solution itself).
As the team moves on to design the solution and create the functional
specifications, traceability between requirements and
features needs to be maintainedN.
The design process gives the team a systematic way to work from abstract concepts
down to specific technical detail. It begins with an analysis of user profiles, which describe various types of users and their job functions (operations staff are users too). Much of this is often done during the envisioning phase.
These are broken into a series of usage scenarios, where a particular type of user is
attempting to complete a type of activity, such as front desk registration in a hotel or
administering user passwords for a system administrator. Finally, each usage scenario is
broken into a specific sequence of tasks, known as use cases, which the user performs to
complete that activity - often referred to as “story-boarding.”
There are three levels in the design process: conceptual, logical, and physical designs. Each level is completed and baselined in a staggered sequence and documented in functional specifications describing the detail of how each feature is to look and behave and the underlying architecture and design for the features.
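The profile, scenario and use-case decomposition described above can be modeled as a simple nested structure. The hotel front-desk example follows the text; the field names and task steps are my own illustration.

```python
from dataclasses import dataclass, field

# A minimal model of the MSF design decomposition: user profiles
# break into usage scenarios, which break into ordered task
# sequences (use cases) used for story-boarding.

@dataclass
class UseCase:
    name: str
    steps: list  # the ordered tasks the user performs

@dataclass
class UsageScenario:
    activity: str
    use_cases: list = field(default_factory=list)

@dataclass
class UserProfile:
    role: str
    scenarios: list = field(default_factory=list)

front_desk = UserProfile(
    role="front desk clerk",
    scenarios=[
        UsageScenario(
            activity="front desk registration",
            use_cases=[
                UseCase("check in guest",
                        ["look up reservation", "verify ID",
                         "assign room", "issue key"]),
            ],
        )
    ],
)

print(front_desk.scenarios[0].use_cases[0].steps[0])  # → look up reservation
```

Walking the structure top-down mirrors the story-boarding exercise: each leaf step becomes a frame the team can validate with users before design is baselined.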
The functional specification serves multiple purposes, such as:
- Instructions to developers on what to build
- Basis for estimating work.
- Agreement with customer on exactly what will be built
- Point of synchronization for the whole team.
Once the functional spec is baselined, detailed planning starts. Team leads
prepare plans for the deliverables that pertain to their role and participate in
team planning sessionsN.
All plans are synchronized and presented together as the master project plan. The number and types of subsidiary
plans included in the master project plan will vary depending on the scope and type of
project.
Team members representing each role generate time estimates and schedules for
deliverables. The various schedules are then synchronized and integrated into a master project schedule.
At the culmination of the planning phase, the project plans approved milestone, customers and team members have agreed in detail on what is to be delivered and when. The team re-assesses risk, updates priorities, and finalizes estimates for resources and schedule.
At this milestone, the project team and key project stakeholders
agree that interim milestones have been met, that due dates are realistic, that project
roles and responsibilities are well defined, and that mechanisms are in place for
addressing areas of project risk. The functional specifications, master project plan, and
master project schedule provide the basis for making future trade-off decisions.
After the team approves the specifications, plans, and schedules, the documents become
the project baseline. The baseline takes into account the various decisions that are
reached by consensus by applying the three project planning variables: resources,
schedule, and features. After the baseline is completed and approved, the team
transitions to the developing phase.
After the team defines a baseline, it is placed under change control. This does not mean
that all decisions reached in the planning phase are final. But it does mean that as work
progresses in the developing phase, the team should review and approve any suggested
changes to the baseline.
For organizations using MOF, the team submits a Request for Change (RFC) to IT
operations at this milestone.
Deliverables
- Conceptual Design Document - addresses what needs to be included in the product. While it should be non-technical, it should be detailed regarding the new functionality in the proposed solution, how the existing technology infrastructure will react to the introduction of this functionality, how the solution will interact with the user, and what is included in the performance criteria.
- Design Specification - describes how to implement the "what" defined in the Conceptual Design Document and includes two major sub-deliverables: the technical specification and the security plan. The technical specification includes four content elements: the logical and physical design, standards and guidelines, change control methodology, and the life cycle management plan.
- Test Lab Setup - serves to ensure that an appropriate isolated environment has been established to simulate and test the functionality encompassed by the proposed solution. The lab setup has been completed when everything that is required to conduct the isolated testing, as defined in the Conceptual Design Document and the Design Specification, is in place. This is critical because it is a prerequisite for the proof-of-concept and the pilot, which will be conducted later in the project.
- Master Project Schedule - combines all the schedules from the various teams. After Program Management has drafted the Conceptual Design and Design Specification, the team leads map the individual functional components to specific tasks and assign the tasks to the team members. Each team lead is responsible for providing a schedule that their teams can commit to meeting during the development process. The Master Project Schedule has six content elements: the task list, implementation schedule, test schedule, preliminary training estimates, logistics schedule, and marketing schedule.
- Master Project Plan - a collection of plans from the various roles. The Development lead maps out tasks based on the Conceptual Design and Design Specification and groups the tasks into major interim releases.
- Life Cycle Management Plan - considers the rapid evolution of individual and aggregate technologies, together with dynamic organizational factors. It provides a management framework that encompasses the entire cycle, from the strategic planning stage through the disposition of technology and back to the planning stage again. In addition, it encourages the concept of planning while building.
- Change Control Methodology - similar to the reasoning behind the Vision/Scope Document, the project team must determine the who, what, when, where, why, and how of proposed changes. The team must be able to assess the risk and impact of the change and should have a mechanism for tracking the changes that have been implemented. The controls introduced by the methodology help the team to effectively direct its change-related activities, avoiding costly errors while maintaining acceptable quality levels.
- Risk Assessment - A consolidated and rationalized assessment of the collected risks, including probability and severity assessments and possible mitigation strategies.
- Business Manager Approval - Sign-off on the above elements by the business executive assigned as champion of the initiative.
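To make the probability-and-severity scoring concrete, here is a minimal sketch of a consolidated risk register. All names and figures are illustrative assumptions, not part of MSF: each risk carries an estimated probability and a severity score, and exposure (probability × severity) ranks where mitigation effort should go first.

```python
# Hypothetical sketch of a consolidated risk register. Exposure is
# probability * severity, and the register is ranked so mitigation
# effort targets the highest-exposure risks first.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float   # estimated likelihood, 0.0-1.0
    severity: int        # impact score, e.g. 1 (minor) to 10 (critical)
    mitigation: str = ""

    @property
    def exposure(self) -> float:
        return self.probability * self.severity

def rank_risks(risks):
    """Return the risks ordered by exposure, highest first."""
    return sorted(risks, key=lambda r: r.exposure, reverse=True)

# Invented example entries:
register = [
    Risk("Vendor delivery slips", 0.4, 8, "Dual-source critical components"),
    Risk("Key developer leaves", 0.2, 6, "Cross-train team members"),
    Risk("Test lab unavailable", 0.7, 3, "Reserve backup environment"),
]
ranked = rank_risks(register)
```

Ranking by exposure rather than by probability alone keeps attention on low-likelihood, high-impact risks as well as frequent minor ones.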
Developing Phase
During the developing phase the team builds most of the solution
components (documentation as well as code). However, some development work may
continue into the stabilization phase in response to testing.
The developing phase involves more than code development and software developers.
The infrastructure is also developed during this phase and all roles are active in building
and testing deliverables.
The developing phase culminates in the scope complete milestone. At this milestone, all
features are complete and the solution is ready for external testing and stabilization.
This milestone is the opportunity for customers and users, operations and support
personnel, and key project stakeholders to evaluate the solution and identify any
remaining issues that must be addressed before the solution is released.
Deliverables
- Pilot Plan - Its purpose is to test the solution in a real environment. With this in mind, the pilot needs to include representatives from every affected user community and usage scenario, and to validate the implementation, training, and support plans and procedures that will be used in the production rollout.
- Training Plan - The probability of success of any infrastructure project is contingent upon the quality and appropriateness of the training provided to the respective users. While training is not an end product of the project, it is a critical component in determining the positive or negative impact of the solution.
- Capacity Plan - Capacity planning and the optimization of infrastructure components are two activities that must be approached cautiously and systematically because the conclusions drawn from these activities can dramatically impact the overall effectiveness of critical business processes.
- Business Continuation Plan - Describes how to recover only the elements of the technology that are essential for conducting business, minimizing downtime and loss of revenue. Disaster recovery encompasses business continuation and extends to the restoration of systems to their pre-failure state.
- Rollout Plan - A step-by-step strategy for effectively deploying the solution to the targeted users with minimal disruption to the organization’s day-to-day activities. It should address the technical design and implementation issues associated with the new technology and incorporate the training, security, procurement, and support plans and procedures discussed earlier.
Stabilizing Phase
The stabilizing phase conducts testing on a solution whose features are complete.
Testing during this phase emphasizes usage and operation under realistic environmental
conditions. The team focuses on triaging (prioritizing) and resolving bugs and on
preparing the solution for release.
Early in this phase it is common for testing to report bugs faster than
developers can fix them. There is no way to tell in advance how many bugs there will be
or how long it will take to fix them. There are, however, two statistical signposts, known
as bug convergence and zero-bug bounce, that help the team project when the solution
will reach stability. These signposts are described below.
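The two signposts can be illustrated with a small sketch. The daily bug counts below are invented, and the convergence rule (the first day from which fixes keep pace with reports) is one reasonable reading of the signpost, not MSF's formal definition:

```python
# Illustrative sketch of the two stabilizing-phase signposts, assuming
# daily counts of newly reported and newly fixed bugs as inputs.

def active_bug_counts(reported, fixed):
    """Running count of open bugs after each day."""
    counts, active = [], 0
    for r, f in zip(reported, fixed):
        active += r - f
        counts.append(active)
    return counts

def bug_convergence_day(reported, fixed):
    """First day from which the fix rate stays at or above the report rate."""
    for day in range(len(reported)):
        if all(f >= r for r, f in zip(reported[day:], fixed[day:])):
            return day
    return None

def zero_bug_bounce_day(reported, fixed):
    """First day the active (open) bug count reaches zero."""
    for day, active in enumerate(active_bug_counts(reported, fixed)):
        if active == 0:
            return day
    return None

# Invented daily counts for a seven-day window:
reported = [10, 8, 6, 4, 2, 1, 0]
fixed    = [ 2, 4, 6, 6, 6, 5, 2]
```

On this data, convergence occurs on day 2 (fixes keep pace with reports from then on) and the bounce on day 6, when the open-bug count first touches zero.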
MSF avoids the terms “alpha” and “beta” to describe the state of IT projects. These
terms are widely used in industry, but are interpreted in too many ways to be
meaningful. Teams can use these terms if desired, as long as they are defined clearly and
the definitions are understood by the team, customer, and stakeholders.
Once a build has been deemed stable enough to be a release candidate, the solution is
deployed to a pilot group.
The stabilizing phase culminates in the release readiness milestone. Once reviewed and
approved, the solution is ready for full deployment to the live production environment.
The release readiness milestone occurs at the point when the team has addressed all
outstanding issues and has released the solution or placed it in service. At the release
milestone, responsibility for ongoing management and support of the solution officially
transfers from the project team to the operations and support teams.
Deliverables
- Release notes
- Performance support elements
- Test results and testing tools
- Source code and executables
- Project documents
- Milestone review
Deploying Phase
During this phase, the team deploys the core technology and site components, stabilizes
the deployment, transitions the project to operations and support, and obtains final
customer approval of the project. After the deployment, the team conducts a project
review and a customer satisfaction survey.
Stabilizing activities may continue during this period as the project components are
transferred from a test environment to a production environment.
The deploying phase culminates in the deployment complete milestone. By this time, the
deployed solution should be providing the expected business value to the customer, and
the team should have effectively terminated the processes and activities it employed to
reach this goal.
The customer must agree that the team has met its objectives before the team can
declare the solution to be in production and close out the project. This requires a stable
solution, as well as clearly stated success criteria. For the solution to be considered
stable, appropriate operations and support systems must be in place.
Deliverables
- Operation and support information systems
- Procedures and processes - in the Deploying Phase, the project team needs to determine precisely how the projected “to-be” compares with the final solution. This information will then serve as a solid basis for future technology implementations.
- Knowledge base, reports, logbooks
- Documentation repository for all versions of documents, load sets, and code developed during the project
- Project close-out report
- Final versions of all project documents
- Customer/user satisfaction data
- Definition of next steps
Rational Unified Process (RUP)
The Rational Unified Process (RUP) is an iterative software development process (based upon a spiral project management model) created by the Rational Software Corporation (now a division of IBM). It describes best practices in software development intended to eliminate or reduce the impact of the identified root causes of software failure. Six such "best practices" were identified:
Develop Software Iteratively
It is no longer possible to sequentially define an entire problem, design the
entire solution, build the software, and then test the product at the end. Instead, an iterative approach is required, one that builds understanding of the problem through successive refinement. RUP employs an iterative approach that addresses the
highest-risk items at every stage in the lifecycle, thereby reducing
a project's risk profile. This iterative approach helps attack risk
through demonstrable progress: frequent, executable releases that enable
continuous end-user involvement and feedback. Because each iteration ends
with an executable release, the development team stays focused on
producing results. This approach better accommodates tactical changes in requirements, features, or schedule, and frequent status checks help ensure that the project
stays on schedule.
Manage Requirements
RUP describes how
to elicit, organize, and document required functionality and constraints;
track and document tradeoffs and decisions; and easily capture and
communicate business requirements. The notions of use cases and scenarios
prescribed by the process facilitate the capture of
functional requirements and ensure that they guide design,
implementation, and testing.
Visually Model Software
The process creates visual models (using the Unified Modeling
Language - UML) to capture the structure and behavior of architectures and components, permitting the developer to hide details and write code using
"graphical building blocks." These visual abstractions help communicate
different aspects of the design and the interactions among the design components.
Use Component-Based Architectures
The process focuses on early
development and baselining of a robust executable architecture, prior to
committing resources for full-scale development. It describes how to
design a resilient architecture that is flexible, accommodates change, is
intuitively understandable, and promotes more effective software reuse.
The Rational Unified Process supports component-based software
development, providing a systematic
approach to defining an architecture from new and existing components, which are assembled in a well-defined architecture, either by themselves or within a
component infrastructure such as the Internet, CORBA, and COM.
Verify Software Quality
Poor application performance and poor
reliability are factors which plague application development. Quality should be
reviewed with respect to the requirements based on reliability,
functionality, application performance and system performance. Quality
assessment is implicit in all RUP activities, using objective measurements and criteria.
Control Changes to Software
RUP describes how to control, track, and monitor
changes to enable successful iterative development. It also provides secure workspaces for developers, isolating each developer's workspace from the others and controlling changes
to software artifacts (e.g., models, code, documents).
RUP Process
The software lifecycle is broken into cycles, each cycle working on a new generation of the product. The Rational Unified Process divides one development cycle into four consecutive phases.
Inception Phase
During the inception phase the business case for the
system is established and the project scope determined. All external entities with which the system will interact (actors) are identified and
the nature of all interactions is defined at a high level (i.e.,
identifying all use cases and describing a few significant ones). Outcomes include:
- A vision document: a general vision of the project's core
requirements, key features, and main constraints.
- An initial use-case model (10%-20% complete).
- An initial project glossary (may optionally be partially expressed
as a domain model).
- An initial business case, which includes business context, success
criteria (revenue projection, market recognition, and so on), and
financial forecast.
- An initial risk assessment.
- A project plan, showing phases and iterations.
- A business model, if necessary.
- One or several prototypes.
Milestone: Lifecycle Objectives
At the end of the inception phase is the first major project milestone:
the Lifecycle Objectives Milestone. The evaluation criteria for the
inception phase are:
- Stakeholder concurrence on scope definition and cost/schedule
estimates.
- Requirements understanding as evidenced by the fidelity of the
primary use cases.
- Credibility of the cost/schedule estimates, priorities, risks, and
development process.
- Depth and breadth of any architectural prototype that was developed.
- Actual expenditures versus planned expenditures.
The project
may be canceled or considerably re-thought if it fails to pass this
milestone.
Elaboration Phase
Analysis of the problem domain, establishment of an architectural foundation, development of the project plan, and elimination of the highest risk elements of the project. Architectural decisions are made based upon the system's scope, major functionality and nonfunctional
requirements such as performance requirements.
At completion of this phase, the design is complete and a decision on whether to continue is made prior to incurring major expenditures. While the process
must always accommodate changes, the elaboration phase activities ensure
that the architecture, requirements and plans are stable enough, and the
risks are sufficiently mitigated to predictably determine the
cost and schedule for the completion of the development.
In this phase, an executable architecture prototype is developed, which addresses the critical use cases
identified in the inception phase and which typically expose the major
technical risks of the project. Outcomes include:
- A use-case model (at least 80% complete) - all use cases and actors
have been identified, and most use-case descriptions have been
developed.
- Supplementary requirements capturing the nonfunctional requirements
and any requirements that are not associated with a specific use case.
- A Software Architecture Description.
- An executable architectural prototype.
- A revised risk list and a revised business case.
- A development plan for the overall project, including the
coarse-grained project plan, showing iterations and evaluation criteria
for each iteration.
- An updated development case specifying the process to be used.
- A preliminary user manual (optional).
Milestone: Lifecycle Architecture
At the completion of this phase, the second major milestone, the Lifecycle Architecture Milestone, is reached. The detailed system objectives and scope are determined, as well as the architecture design and the resolution of major risks. The project may be aborted or reconfigured if it fails to pass this milestone.
Construction Phase
During this phase, all remaining components and application
features are developed and integrated into the product, and all features
are thoroughly tested. Many projects are large enough that parallel construction
activities may occur. These parallel activities can significantly accelerate the
availability of deployable releases at the cost of added complexity
in resource management and workflow synchronization.
A robust architecture
and an understandable plan are tightly linked: a critical quality of the architecture is its "constructability." Thus, RUP stresses the balanced development of the architecture and
the plan during this phase. Outcomes include:
- The software product integrated on the appropriate platforms.
- The user manuals.
- A description of the current release.
Milestone: Initial Operational Capability
The construction phase concludes with the Initial Operational Capability Milestone, which marks the readiness of the software, the sites, and the users for deployment. This release is
often called a "beta" release. Transition may have to be postponed by one
release if the project fails to reach this milestone.
Transition Phase
This phase transitions the software product into the production environment. Once the product is in the hands of the end
user, issues may arise that require a new release or patch, either to correct problems or to finish features that were postponed. The phase is entered when a baseline is mature enough to be
deployed in the end-user domain; this typically requires that some usable
subset of the system has been completed to an acceptable level of quality
and that user documentation is available. The transition phase focuses on the activities required to place
the software into the hands of the users, and normally includes
several iterations, including beta releases, general availability
releases, and bug-fix and enhancement releases. Considerable effort
may be expended in developing user-oriented documentation, training users,
supporting users in their initial product use, and reacting to user
feedback. At this point in the lifecycle, however, user feedback should be
confined primarily to product tuning, configuration, installation, and
usability issues. The phase can range from very simple to
extremely complex, depending on the type of product.
Milestone: Product Release
The Product Release Milestone completes this phase. In
some cases, this milestone may coincide with the end of the inception
phase for a subsequent cycle.
Iterations
Each phase in RUP can be further broken down
into iterations. An iteration is a complete development loop resulting in
a release (internal or external) of an executable product, a subset of the
final product under development, which grows incrementally from iteration
to iteration to become the final system.
Disciplines & Workflows
Nine "disciplines"N take place within an iteration across the four phases. Six are core engineering disciplines:
- Business modeling
- Requirements
- Analysis & Design
- Implementation
- Test
- Deployment
and three are core "supporting" workflows:
- Project Management
- Configuration and Change Management
- Environment
Business Modeling
One of the major problems with most business engineering efforts is that the software engineering and business engineering communities do not communicate properly with each other. As a result, the output from business engineering is not used properly as input to the software development effort, and vice versa. The Rational Unified Process addresses this by providing a common language and process for both communitiesN, as well as showing how to create and maintain direct traceability between business and software models.
Requirements
Requirements describe what the system should do and allow developers and customers to reach agreement on that description. To do this, required functionality and constraints are elicited, organized and documented including tradeoffs and decisions.
A Vision document is created, and stakeholder needs are elicited. Actors are identified, representing the users, and any other system that may interact with the system being developed. Use cases are identified, representing the behavior of the systemN. Each use case is described in detail - showing how the system interacts step by step with the actors and what the system does. The use cases function as a unifying thread throughout the system's development cycle. The same use-case model is used during requirements capture, analysis & design, and test.
Analysis and Design
The purpose of analysis and design is to show how the system will be realized in implementation. The system needs to:
- Perform - in a specific implementation environment - the tasks and functions specified in the use-case descriptions.
- Fulfill all its requirements.
- Be robust (easy to change if and when its functional requirements change).
Analysis and Design results in a design model and optionally an analysis model. The design model serves as an abstraction of the source code; that is, the design model acts as a 'blueprint' of how the source code is structured and written. The design model consists of design classes structured into design packages and design subsystems with well-defined interfaces, representing what will become components in the implementation. It also contains descriptions of how objects of these design classes collaborate to perform use cases.
The design activities are centered around the notion of architecture. The production and validation of this architecture is a primary focus of early design iterations. Architecture is represented by a number of architectural views. These views capture the major structural design decisions. In essence, architectural views are abstractions or simplifications of the entire design, in which important characteristics are made more visible by leaving details aside. The architecture is an important vehicle not only for developing a good design model, but also for increasing the quality of any model built during system development.
Implementation
- To define the organization of the code, in terms of implementation subsystems organized in layers.
- To implement classes and objects in terms of components (source files, binaries, executables, and others).
- To test the developed components as units.
- To integrate the results produced by individual implementers (or teams), into an executable system.
RUP describes how you reuse existing components, or implement new components with well defined responsibility, making the system easier to maintain, and increasing the possibilities to reuse.
Components are structured into Implementation Subsystems. Subsystems take the form of directories, with additional structural or management informationN.
Test
- To verify the interaction between objects.
- To verify the proper integration of all components of the software.
- To verify that all requirements have been correctly implemented.
- To identify defects and ensure they are addressed prior to the deployment of the software.
RUP proposes iterative testing throughout the project, permitting the detection of defects as early as possible. Tests are carried out along three quality dimensions:
- reliability,
- functionality,
- application performance and system performance.
For each of these quality dimensions, the process describes how you go through the test lifecycle of planning, design, implementation, execution and evaluation.
Strategies for when and how to automate testing are described. Test automation is especially important in an iterative approach, to allow regression testing at the end of each iteration as well as for each new version of the product.
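As a concrete illustration of automated regression testing at the end of an iteration, the following sketch compares the current build's outputs against a baseline recorded at the previous release. The pricing function and test cases are hypothetical:

```python
# Hypothetical sketch of an automated regression check: run the same test
# cases each iteration and flag any case whose result differs from the
# baseline recorded at the previous release.

def run_regression(test_cases, system_under_test, baseline):
    """Return the names of test cases whose output no longer matches the baseline."""
    regressions = []
    for name, test_input in test_cases.items():
        actual = system_under_test(test_input)
        if baseline.get(name) != actual:
            regressions.append(name)
    return regressions

# Toy "system under test": a pricing function whose behavior changed
# between releases (invented for illustration).
def price_v2(amount):
    return round(amount * 1.10, 2)   # new release applies a 10% uplift

baseline = {"small_order": 11.0, "large_order": 109.99}  # recorded from v1
cases = {"small_order": 10.0, "large_order": 100.0}

failed = run_regression(cases, price_v2, baseline)
```

Because the whole suite reruns mechanically, the cost of executing it at the end of every iteration stays flat even as the product grows.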
Deployment
The purpose of deployment is to successfully produce product releases and deliver the software to its end users. It covers a wide range of activities, including:
- Producing external releases of the software
- Packaging the software
- Distributing the software
- Installing the software
- Providing help and assistance to users
- In many cases, this also includes activities such as:
- Planning and conduct of beta tests
- Migration of existing software or data
- Formal acceptance
Although deployment activities are mostly centered around the transition phase, many of the activities need to be included in earlier phases to prepare for deployment at the end of the construction phase.
Project Management
Software Project Management is the art of balancing competing objectives, managing risk, and overcoming constraints to deliver, successfully, a product which meets the needs of both customers (the payers of bills) and the users.
This discipline focuses on the specific aspects of managing an iterative development process. Project management is facilitated by:
- A framework for managing software-intensive projects;
- Practical guidelines for planning, staffing, executing, and monitoring projects;
- A framework for managing risk.
Configuration and Change Management
Ensures that resultant artifacts are not in conflict due to some of the following kinds of problems:
- Simultaneous Update - When two or more people work separately on the same artifact, the last one to make changes might destroy the work of the others;
- Limited Notification - When a problem is fixed in artifacts shared by several developers, and some of them are not notified of the change.
- Multiple Versions - Most large programs are developed in evolutionary releases. One release could be in customer use, while another is in test, and the third is still in development. If problems are found in any one of the versions, fixes need to be propagated between them. Confusion can arise leading to costly fixes and re-work unless changes are carefully controlled and monitored.
Managing configurations provides guidelines for managing multiple variants of evolving software systems, tracking which versions are used in given software builds, performing builds of individual programs or entire releases according to user-defined version specifications, and enforcing site-specific development policies.
RUP describes how to manage parallel development and development done at multiple sites, and how to automate the build process. This is especially important in an iterative process, where builds may be needed as often as daily, something that would be impossible without powerful automation. RUP also describes how to keep an audit trail recording why, when, and by whom any artifact was changed.
This discipline also covers change request management, i.e. how to report defects, manage them through their lifecycle, and how to use defect data to track progress and trends.
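Change request management of the kind described above can be sketched as a small state machine. The state names and transition rules here are illustrative assumptions, not RUP's prescribed workflow:

```python
# Hypothetical sketch of a change-request lifecycle: each request moves
# through a fixed set of states, only the listed transitions are allowed,
# and the history gives an auditable trail of who changed what and when.

ALLOWED = {
    "submitted": {"assigned", "rejected"},
    "assigned":  {"resolved"},
    "resolved":  {"verified", "assigned"},   # reassigned if verification fails
    "verified":  {"closed"},
}

class ChangeRequest:
    def __init__(self, title):
        self.title = title
        self.state = "submitted"
        self.history = [("submitted", None)]

    def transition(self, new_state, actor):
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state
        self.history.append((new_state, actor))

# Walk one defect through its lifecycle:
cr = ChangeRequest("Login page times out")
cr.transition("assigned", "lead")
cr.transition("resolved", "dev")
cr.transition("verified", "tester")
cr.transition("closed", "lead")
```

The recorded history doubles as the defect data mentioned above: counting requests per state over time yields the progress and trend reports.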
Environment
The purpose of this discipline is to provide the software development organization with the software development environment - both processes and tools - needed to support the development team.
It focuses on the activities required to configure the process in the context of a project and on the activities needed to develop the guidelines that support a project. A step-by-step procedure describes how to implement a process in an organization.
Service Improvement Models
Improvement models describe processes for improving organizational capabilities. They may be general to the subject matter being improved (e.g., Six Sigma) or integrated with a reference model (e.g., CMMI). These service improvement models describe improvement processes and defined levels of organizational capability.
Six Sigma
Six Sigma is a data-driven, methodical program of continuous improvement focused on customers and their critical requirementsN. The ultimate goal is to eliminate defects and errors and the costs associated with poor quality. After defining which performance measures represent Critical to Customer (CTC) requirements, data are collected on the number of defects and then translated into a sigma numberN. Moving from 3 to 4 sigma is often classified as continuous improvement - 6 sigma is almost perfect quality.
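The translation of defect counts into a sigma number can be sketched as follows, using the customary 1.5-sigma long-term shift; the defect figures are invented for illustration:

```python
# Minimal sketch of translating defect data into a sigma number. Assumes
# defects and opportunities are already counted per the chosen CTC/CTQ
# definitions; applies the conventional 1.5-sigma long-term shift.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level corresponding to a DPMO figure."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Invented example: 387 defects across 10,000 units, 5 opportunities each.
d = dpmo(387, 10_000, 5)     # 7,740 DPMO
level = sigma_level(d)       # roughly 3.9 sigma
```

Under this convention, 6 sigma corresponds to about 3.4 DPMO, which is why it is described as almost perfect quality.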
Learning topics that enable the organization to use Six Sigma to its fullest capability include change management, teamwork, creativity, problem-solving, project management, statistics, process improvement and design of experiments. The project management component of this process is particularly important. There are five process steps to a Six Sigma Project:
- Define - Determines the scope and purpose of the project and includes a Project Charter, a process map of the problem to be investigated and an analysis of customers to determine the Voice of the Customer (VOC), resulting in Critical to Quality variables, or CTQs (sometimes CTC, Critical to Customer)
- Measure - The collection of information on the current situation. Baseline data on defects and possible causes are collected and plotted, and the sigma capability levels are calculated.
- Analyze - Determines the root causes of defects and explores and organizes potential causes
- Improve - The development of solutions that are implemented to remove the root causes and then measured and evaluated for desired results
- Control - Standardizes the improvement process to maintain the gains. The new standard practices are documented, and performance is monitored
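The Control step's performance monitoring is often done with control charts. Here is a minimal sketch that derives 3-sigma control limits from baseline data and flags out-of-control measurements; the sample values are hypothetical:

```python
# Hypothetical sketch for the DMAIC Control step: compute 3-sigma control
# limits from baseline measurements, then flag later samples that fall
# outside them so the gains from the Improve step are maintained.
from statistics import mean, stdev

def control_limits(baseline):
    """Lower and upper 3-sigma control limits from baseline data."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def out_of_control(samples, limits):
    """Return the samples that fall outside the control limits."""
    lcl, ucl = limits
    return [x for x in samples if x < lcl or x > ucl]

# Invented process data (e.g., cycle time in minutes):
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
limits = control_limits(baseline)
alerts = out_of_control([10.0, 10.3, 12.5, 9.7], limits)
```

A point outside the limits signals that the standardized process has drifted and the root-cause analysis of the earlier steps should be revisited.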
FCAPS (fault-management, configuration, accounting, performance, and security)
FCAPS is an acronym for a categorical model of the working objectives of network management. There are five levels, called the fault-management level (F), the configuration level (C), the accounting level (A), the performance level (P), and the security level (S)R.
- At the F level, network problems are found and corrected. Potential future problems are identified, and steps are taken to prevent them from occurring or recurring. In this way, the network is kept operational, and downtime is minimized.
- At the C level, network operation is monitored and controlled. Hardware and programming changes, including the addition of new equipment and programs, modification of existing systems, and removal of obsolete systems and programs, are coordinated. An inventory of equipment and programs is kept and updated regularly.
- The A level, which might also be called the allocation level, is devoted to distributing resources optimally and fairly among network subscribers. This makes the most effective use of the systems available, minimizing the cost of operation. This level is also responsible for ensuring that users are billed appropriately.
- The P level is involved with managing the overall performance of the network: throughput is maximized, bottlenecks are avoided, and potential problems are identified. A major part of the effort is to identify which improvements will yield the greatest overall performance enhancement.
- At the S level, the network is protected against hackers, unauthorized users, and physical or electronic sabotage. Confidentiality of user information is maintained where necessary or warranted. The security systems also allow network administrators to control what each individual authorized user can (and cannot) do with the system.
The following table summarizes the features of each functional area of FCAPS:
| Fault Management | Configuration Management | Accounting Management | Performance Management | Security Management |
|---|---|---|---|---|
| Fault detection, correction, isolation | Resource initialization | Track service & resource usage | Utilization & error rates | Selective resource access |
| Diagnostic test | Network provisioning | Cost for service | Consistent performance level | Enable NE functions |
| Network recovery | Backup & restore | Combine costs for multiple resources | Performance reports | Security events reporting |
| Error logging & handling | Copy configuration and software distribution | | Maintaining & examining historical logs | Security-related information distribution |
| Clear correlation | | | | Security audit trail log |
The framework has had some success in the network management sphere. It provides essential foresight and knowledge to optimize network performance through fault, performance, and configuration management. It addresses security, a major concern to IT and to service consumers with respect to data protection. Its object-oriented approach aids modularity, which in turn improves abstraction and reliability. The framework also enables the development of concurrent network applications on different OS platforms, where the cost of porting is less than the effort of new development.
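As a toy illustration of the five functional areas, the following sketch routes management events to an FCAPS area by keyword. The keyword rules are purely illustrative assumptions, not part of the FCAPS model:

```python
# Illustrative sketch of routing network-management events to the five
# FCAPS functional areas by keyword. The rules are invented examples.

FCAPS_RULES = {
    "fault":         ["link down", "alarm", "error", "outage"],
    "configuration": ["provision", "backup", "restore", "firmware"],
    "accounting":    ["usage", "billing", "quota"],
    "performance":   ["latency", "throughput", "utilization"],
    "security":      ["login failure", "unauthorized", "intrusion"],
}

def classify(event_text):
    """Return the first FCAPS area whose keywords match, else 'unclassified'."""
    text = event_text.lower()
    for area, keywords in FCAPS_RULES.items():
        if any(k in text for k in keywords):
            return area
    return "unclassified"

events = ["Link down on router r7", "User quota exceeded",
          "Unauthorized SSH attempt", "Throughput below SLA"]
labels = [classify(e) for e in events]
```

In a real management system this dispatch would be driven by structured alarm fields rather than free text, but the partition of responsibilities is the same.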
Capability Maturity Modeling - Integrated
A capability maturity model delineates the characteristics of a mature, capable process. It identifies the practices that are basic to implementing effective processes and distinguishes additional practices necessary for more robust, mature processes. Typically, a path through the various practices is recommended for achieving the higher levels of maturity that improve an organization's processes and operations. The Integrated version of the CMM (CMMI) was an attempt to define a more generic and widely applicable set of organizational practices that could be applied in many settings.
CMMI distinguishes four major process areas for an organization:
- Process Management - Contains the cross-project activities related to defining, planning, resourcing, deploying, implementing, monitoring, controlling, appraising, measuring, and improving processes.
Basic Functions
- Process FocusD - the organization's set of standard processes and the defined processes that are tailored from them. The organizational process assets are used to establish, maintain, implement, and improve the defined processes.
- Process DefinitionD - process asset library is a collection of items maintained by the organization for use by the people and projects of the organizationN.
- TrainingD - training to support the strategic business objectives and to meet the tactical training needs that are common across projects and support groups.
Advanced Functions
- Process PerformanceD - measure of the actual results achieved by following a process.
- Innovation & DeploymentD - enables the selection and deployment of improvements that can enhance the organization's ability to meet its quality and process-performance objectives.
- Project ManagementN - Covers the project management activities related to planning, monitoring, and controlling the project.
Basic Functions
- Project PlanningD - estimating the attributes of the work products and tasks, determining the resources needed, negotiating commitments, producing a schedule, and identifying and analyzing project risks. Iterating through these activities may be necessary to establish the project plan.
- Project Monitoring & ControlD - the basis for monitoring activities, communicating status, and taking corrective action.
- Supplier Agreement ManagementD - the acquisition of products and product components that are delivered to the project’s customer.
Advanced Functions
- Integrated Project Management for IPPDD - establish and manage the project and the involvement of relevant stakeholders according to an integrated and defined process that is tailored from a set of standard processes.
- Risk ManagementD - addresses issues that could endanger achievement of critical objectives.
- Integrated TeamingD - An integrated team understands its role in the structure of teams for the overall project.
- Integrated Supplier ManagementD - evaluating sources of products that might help satisfy project requirements, and using this information to select suppliers.
- Quantitative Project ManagementD - managing product performance using quantitative methods.
- Engineering - Covers the development and maintenance activities that are shared across engineering disciplines (e.g., systems engineering and software engineering).
Recursive Functions
- Requirements DevelopmentD - produce and analyze customer, product and product-component requirements.
- Requirements ManagementD - manage the requirements of the project's products and product components and identify inconsistencies between those requirements and the project's plan and work products.
- Technical SolutionD - design, develop and implement solutions to requirements.
- Product IntegrationD - assemble the product from the product components, ensure the integrated product functions properly, and deliver the product.
- VerificationD - ensure the selected work products meet their specified requirementsN.
- ValidationD - demonstrate that a product or product component fulfills its intended use when placed in its intended environmentN.
- Support - Covers the activities that support product development and maintenance. The Support process areas address processes that are used in the context of performing other processes. In general the Support process areas address processes that are targeted towards the project, and may address processes that apply more generally to the organization.
The CMMI reference model distinguishes how these processes interact with each other according to the organization's current maturity characteristics. There are five "maturity" levels through which an organization can increase its capabilities and each level is distinguished by an implementation and subsequent "institutionalization" phase. These "generic" processes measure the organization's:
- Commitment to Perform,
- Ability to Perform,
- Directing Implementation, and
- Verifying Implementation.
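These generic measures can be thought of as a checklist applied to every process area. The following is a minimal, hypothetical sketch of such an appraisal check — the feature names come from the text above, but the function, its signature, and the pass/fail scheme are purely illustrative, not part of any official CMMI appraisal method:

```python
# The four CMMI "common features" named in the text above.
COMMON_FEATURES = [
    "Commitment to Perform",
    "Ability to Perform",
    "Directing Implementation",
    "Verifying Implementation",
]

def appraise(process_area: str, evidence: dict) -> bool:
    """Illustrative check: a process area counts as institutionalized
    only when evidence exists for every common feature."""
    missing = [f for f in COMMON_FEATURES if not evidence.get(f, False)]
    return not missing
```

The point of the sketch is simply that institutionalization is conjunctive: a single missing feature (say, no evidence of verifying implementation) fails the whole process area, regardless of how strong the others are.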
Institutionalizing Processes
"Institutionalization" is an important dimension in CMMI. It implies that the breadth and depth of the implementation of the process, and the length of time the process has been in place, are appropriate to ensure that the process is ingrained in the way the work is performed.
"the CMMI is based on the premise that if processes are institutionalized, they will endure even when the circumstances around it are not optimal" (p. 123)
"If one were to select a single major contribution that the CMM and CMMI have brought to the field of process improvement it would be the notion of institutionalizationD."
Boris Mutafelija, Harvey Stromberg, Systematic Process Improvement Using ISO 9001:2000 and CMMI, Artech House, 2003, ISBN: 1-58053-487-2, p. 133
Level 1 - Performed Processes
At this maturity level the base practices of the process area are performed, formally or informally, to develop work products and provide services to achieve the specific goals of the process area.
Level 2 - Managed Processes
A critical distinction between a performed process and a managed process is the extent to which the process is managed: a managed process is executed in accordance with policy; employs skilled people with adequate resources to produce controlled outputs; involves relevant stakeholders; is monitored, controlled, and reviewed; and is evaluated for adherence to its process description. Management of the process is concerned with the institutionalization of the process area and the achievement of other specific objectives established for the process, such as cost, schedule, and quality objectives. A managed process achieves the objectives of the plan and is institutionalized for consistent performance.
The objectives for the process are determined based on an understanding of the project’s or organization’s particular needs. At this level of maturity objectives may be stated qualitatively and may be specific objectives for the individual process or defined for a broader scope (i.e., for a set of processes), with the individual processes contributing to achieving these objectives.
A managed process is institutionalized by:
- Establishing an Organizational Policy - define the organizational expectations for the process and make these expectations visible to those in the organization who are affected.
- Planning the Process - determine what is needed to perform the process and achieve the established objectives, to prepare a plan for performing the process, to prepare a process description, and to get agreement on the plan from relevant stakeholders.
- Providing Resources - ensure that the resources needed to perform the process, develop the work products, and provide the services of the process are available when needed.
- Assigning Responsibility - ensure that there is accountability throughout the life of the process for performing the process and achieving the specified results. The people assigned must have the appropriate authority to perform the assigned responsibilities.
- Training People - ensure that the people have the necessary skills and expertise to perform or support the process
- Managing Configurations - establish and maintain the integrity of the designated work products of the process (or their descriptions) throughout their useful life.
- Identifying and Involving Relevant Stakeholders - establish and maintain the expected involvement of stakeholders during the execution of the process.
- Monitoring and Controlling the Process - perform the direct day-to-day monitoring and controlling of the process.
- Objectively Evaluating Adherence - objectively evaluate adherence of the process against its process description, standards, and procedures, and address noncompliance.
- Reviewing Status with Higher Level Management - provide higher level management with appropriate visibility into the process by reviewing its activities, status, and results, and resolving issues.
Level 3 - Defined Processes
A defined process is a managed process that is tailored from the organization's set of standard processes according to the organization’s tailoring guidelines, and contributes work products, measures, and other process-improvement information to the organizational process assets.
The organization’s set of standard processes, which are the basis of the defined process, are established and improved over time. Standard processes describe the fundamental process elements that are expected in the defined processes. Standard processes also describe the relationships (e.g., the ordering and interfaces) between these process elements. The organization-level infrastructure to support current and future use of the organization's set of standard processes is established and improved over time.
The organizational process assets are artifacts that relate to describing, implementing, and improving processes. These artifacts are assets because they are developed or acquired to meet the business objectives of the organization, and they represent investments by the organization that are expected to provide current and future business value.
A critical distinction between a managed process and a defined process is the scope of application of the process descriptions, standards, and procedures. For a managed process, the process descriptions, standards, and procedures are applicable to a particular project, group, or organizational function. As a result, the managed processes for two projects within the same organization may be very different.
At the defined capability level, the organization is interested in deploying standard processes that are proven and that therefore take less time and money than continually writing and deploying new processes. Because the process descriptions, standards, and procedures are tailored from the organization's set of standard processes and related organizational process assets, defined processes are appropriately consistent across the organization. Another critical distinction is that a defined process is described in more detail and performed more rigorously than a managed process. This means that improvement information is easier to understand, analyze, and use. Finally, management of the defined process is based on the additional insight provided by an understanding of the interrelationships of the process activities and detailed measures of the process, its work products, and its services.
A defined process is institutionalized by:
- Establishing a process description - establish and maintain a description of the process that is tailored from the organization's set of standard processes to address the needs of a specific instantiation.
- Collecting improvement data - To collect information and artifacts derived from planning and performing the process.
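The mechanics of tailoring can be made concrete with a small sketch. Everything here is hypothetical — the process content, the set of tailorable attributes, and the `tailor` function are illustrative stand-ins for an organization's real standard process assets and tailoring guidelines:

```python
# A hypothetical organizational standard process, kept as a simple record.
STANDARD_PROCESS = {
    "name": "peer-review",
    "entry_criteria": "work product under configuration control",
    "roles": ["author", "moderator", "reviewer"],
    "min_reviewers": 2,
}

# The (hypothetical) tailoring guidelines: only these attributes may be changed.
TAILORABLE = {"min_reviewers", "roles"}

def tailor(standard: dict, overrides: dict) -> dict:
    """Derive a project's defined process from the standard,
    rejecting any override the tailoring guidelines do not permit."""
    illegal = set(overrides) - TAILORABLE
    if illegal:
        raise ValueError(f"tailoring not permitted for: {sorted(illegal)}")
    return {**standard, **overrides}
```

This captures the distinction the text draws: projects do not write processes from scratch, they derive them from the organizational standard, and the guidelines bound how far the derived process may diverge.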
Level 4 - Quantitatively Managed Processes
A quantitatively managed process is a defined process that is controlled using statistical and other quantitative techniques. Quantitative objectives for quality and process performance are established and used as criteria in managing the process. The quality and process performance are understood in statistical terms and are managed throughout the life of the process.
Quantitative management is performed on the overall set of processes that produces a product or provides a service. The sub-processes that are significant contributors to overall process performance are statistically managed. For these selected sub-processes, detailed measures of the process performance are collected and statistically analyzed. Special causes of process variation are identified and, where appropriate, the source of the special cause is addressed to prevent future occurrences. The quality and process performance measures are incorporated into the measurement repository to support future fact-based decision making.
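A minimal sketch of what "statistically managed" means in practice, assuming the conventional control-chart approach (mean plus or minus three standard deviations); the function names and the choice of measure are illustrative, not prescribed by CMMI:

```python
import statistics

def control_limits(samples: list, sigmas: float = 3.0) -> tuple:
    """Derive lower/upper control limits from historical
    sub-process measurements (e.g., defect density per build)."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return (mean - sigmas * sd, mean + sigmas * sd)

def special_causes(baseline: list, new_points: list) -> list:
    """Flag new measurements outside the limits — these are the
    candidates for special-cause analysis the text describes."""
    lo, hi = control_limits(baseline)
    return [x for x in new_points if x < lo or x > hi]
```

Points inside the limits are treated as common-cause variation inherent to the process; only points outside them trigger the root-cause investigation described above.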
A quantitatively managed process is institutionalized by:
- Establishing quantitative objectives for the process - determine and obtain agreement from relevant stakeholders about specific quantitative objectives for the process. These quantitative objectives can be expressed in terms of product quality, service quality, and process performance.
- Stabilizing sub-process performance - stabilize the performance of one or more sub-processes of the defined (capability level 3) process that are critical contributors to the overall performance using appropriate statistical and other quantitative techniques.
Level 5 - Optimizing Processes
An optimizing process is a quantitatively managed process that is changed and adapted to meet relevant current and projected business objectives. An optimizing process focuses on continually improving the process performance through both incremental and innovative technological improvements. Process improvements that would address root causes of process variation and measurably improve the processes are identified, evaluated, and deployed as appropriate. These improvements are selected based on a quantitative understanding of their expected contribution to achieving the process-improvement objectives versus the cost and impact to the organization. The process performance of the processes is continually improved.
Selected incremental and innovative technological process improvements are systematically managed and deployed into the organization and the effects of the deployed process improvements are measured and re-evaluated.
An optimizing process is institutionalized by:
- Ensuring continuous process improvement - select and systematically deploy process and technology improvements that contribute to meeting established quality and process-performance objectives
- Correct root causes of problems - analyze defects and other problems that were encountered, to correct the root causes of these types of defects and problems, and to prevent these defects and problems from occurring in the future.
Basic Interactions
It is desirable for the organization to achieve, and subsequently institutionalize, the processes at a lower maturity level before undertaking more mature processes. Most organizations aspire to achieve level 3 maturity, so CMMI distinguishes between "basic" and "advanced" process interactions. The basic interactions are described in the following schematic:
Advanced Interactions
At an advanced operational level new process interactions augment the basic ones:
Control Objectives for Information and Related Technology - COBIT
COBIT is based on established frameworks, such as CMM, ISO 9000, ITIL and ISO 17799. However, COBIT does not include process steps and tasks because, although it is oriented toward IT processes, it is a control and management framework rather than a process framework. COBIT focuses on what an enterprise needs to do, not how it needs to do it, and the target audience is senior business management, senior IT management and auditorsR.
"Due to its high level and broad coverage and because it is based on many existing practices, COBIT is often referred to as the ‘integrator’, bringing disparate practices under one umbrella and, just as important, helping to link these various IT practices to business requirements. (p. 10)
COBIT and ITIL are not mutually exclusive and can be combined to provide a powerful IT governance, control and best-practice framework in IT service management. Enterprises that want to put their ITIL program into the context of a wider control and governance framework should use COBIT (p. 7).
" Aligning COBIT, ITIL and ISO 17799 for Business Benefit, ISACA
Premise
CobIT notes that "Successful organizations ensure interdependence between their strategic planning and their IT activities." The alignment of the IT service provider with organizational vision, goals and objectives is, therefore, crucial to success. These goals and objectives provide organizational direction which indicates requisite enterprise activities, using the enterprise’s resources. The results of the enterprise activities are measured and reported on, providing input to the constant revision and maintenance of the controls, beginning the cycle again.
The underpinning concept of the COBIT Framework is that control in IT is approached by looking at information that is needed to support the business objectives or requirements, and by looking at information as being the result of the combined application of IT-related resources that need to be managed by IT processes. To satisfy business objectives, information needs to conform to certain criteria, which COBIT refers to as business requirements for information.
The COBIT framework helps align IT with the business by focusing on business information requirements and organizing IT resources. COBIT provides the framework and guidance to implement IT Governance. An organization depends on reliable and timely data and information. COBIT components provide a comprehensive framework for delivering value while managing risk and control over data and information.
Elements
Reworking this logical flow results in the following framework:
A - Business Strategy
To satisfy business objectives, information needs to conform to certain criteria, which COBIT refers to as business requirements for information.
- Service Quality and Cost - priority is directed at properly managing risks:
- The usability aspect of Quality translates into the Effectiveness information criterion.
- The delivery aspect of Quality was considered to overlap with the Availability aspect of the Security requirements, and also to some extent with Effectiveness and Efficiency.
- Cost is also considered covered by Efficiency.
- Fiduciary Requirements - includes best practice areas and audit requirementsN:
- Effectiveness and Efficiency of operations
- Reliability of InformationN
- Compliance with laws and regulations
- Security Requirements - world-wide agreement on three elements:
- Confidentiality
- Integrity
- Availability
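The mapping just described — from the three business requirement areas to the information criteria that cover them — can be encoded directly. The dictionary below is only an illustrative rendering of the text; the groupings follow the passage above, but the data structure and helper function are not part of COBIT itself:

```python
# Each business requirement area resolves to the COBIT information
# criteria that cover it, per the groupings described in the text.
REQUIREMENT_TO_CRITERIA = {
    "Quality": ["Effectiveness", "Efficiency", "Availability"],
    "Fiduciary": ["Effectiveness", "Efficiency", "Reliability", "Compliance"],
    "Security": ["Confidentiality", "Integrity", "Availability"],
}

def criteria_for(requirements: list) -> set:
    """Collect the distinct information criteria implied by
    a set of business requirement areas."""
    out = set()
    for r in requirements:
        out.update(REQUIREMENT_TO_CRITERIA.get(r, []))
    return out
```

Note the overlap the text calls out: Effectiveness, Efficiency and Availability each serve more than one requirement area, which is why COBIT resolves the three areas into seven distinct criteria rather than treating them independently.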
B - Information Criteria
how IT is organized to meet those requirements.
- Effectiveness - deals with information being relevant and pertinent to the business process as well as being delivered in a timely, correct, consistent and usable manner.
- Efficiency - concerns the provision of information through the optimal (most productive and economical) use of resources.
- Confidentiality - concerns the protection of sensitive information from unauthorized disclosure.
- Integrity - relates to the accuracy and completeness of information as well as to its validity in accordance with business values and expectations.
- Availability - relates to information being available when required by the business process now and in the future. It also concerns the safeguarding of necessary resources and associated capabilities.
- Compliance - deals with complying with those laws, regulations and contractual arrangements to which the business process is subject, i.e., externally imposed business criteria.
- Reliability of Information - relates to the provision of appropriate information for management to operate the entity and for management to exercise its financial and compliance reporting responsibilities.
C - IT Resources
a means to identify the resources required to execute processes.
- Data - are objects in their widest sense (i.e., external and internal, structured and non-structured, graphics, sound, etc.).
- Application Systems - are understood to be the sum of manual and programmed procedures.
- Technology - covers hardware, operating systems, database management systems, networking, multimedia, etc.
- Facilities - are all the resources to house and support information systems.
- People - include staff skills, awareness and productivity to plan, organize, acquire, deliver, support and monitor information systems and services.
D - IT Processes
what stakeholders expect from IT.
The model consists of 34 processes organized into four primary domains:
- Planning and Organization - covers strategy and tactics, and concerns the identification of the way IT can best contribute to the achievement of the business objectives. Furthermore, the realization of the strategic vision needs to be planned, communicated and managed for different perspectives. Finally, a proper organization as well as technological infrastructure must be put in place.
- Acquisition and Implementation - identification, development, acquisition and implementation of IT solutions.N
- Support and Delivery - actual delivery of required services, which range from traditional operations over security and continuity aspects to training. In order to deliver services, the necessary support processes must be set up.
- Monitoring - regular assessment of processes for their quality and compliance with control requirements. This domain thus addresses management's oversight of the organization's control process and independent assurance provided by internal and external audit or obtained from alternative sources.N
The CobIT model and its 34 processes are depicted in the framework below. To the left are the process descriptions; those considered "core processes" are itemized in white on a black backgroundN.
ISO 9001
The ISO 9000 series of standards is a set of documents dealing with quality systems that can be used for external quality assurance purposes. They specify quality system requirements for use where a contract between two parties requires the demonstration of a supplier's capability to design and supply a product according to defined quality specifications. The two parties could be an external client and a supplier, or both could be internal, e.g., marketing and engineering groups in a companyR.
ISO 9001:2000 is a standard that specifies criteria for a quality management system (QMS). A QMS incorporates those elements of an organization's management system that direct and control it with regard to quality. Such a system will need to be supported by top management, who will need to be able to demonstrate management commitment.
ISO 9001:2000 is the third revision of ISO 9001 since its inception in 1987. The new standard defines the requirements for a quality management system based on "the process model" and aimed at achieving customer satisfaction and continual improvement in performance. Effective document and record control must now address new areas of concern, including continual improvement and customer satisfaction, as well as cope with the accelerating changes in the available technology for information and knowledge management.
The standard was developed using a core set of eight quality management principles based upon a simple "Plan-Do-Check-ActN" methodology which act as a common foundation for all standards relating to quality management.R
Elements inherent in this process approach include:
- customer focus - organizations depend on their customers and should understand current and future customer needs, should meet customer requirements, and strive to exceed customer expectations.
- leadership - leaders establish unity of purpose and direction for the organization. They should create and maintain the internal environment in which people can become fully involved in achieving the organization's objectives
- people involvement - people at all levels are the essence of the organization and their full involvement enables the organization to leverage their skills and experience
- process focus - desired results are achieved more efficiently when activities and related resources are managed as a process
- system approach to management - identifying, understanding and managing interrelated processes as a system contributes to the organization's effectiveness and efficiency in meeting goals and objectives.
- continual improvement - continual improvement should be a permanent objective of the organization
- decision making based on empirical evidence - effective decisions are based upon the accumulation and interpretation of factual data.
- supplier relationship - organizations and their suppliers form an interdependent web in service delivery, which produces synergies in the creation of business value
The standard emphasizes the need for an organization to continually monitor its processes and systems. Many of the clauses in the standard reference self-monitoring and/or measurement as key elements. This emphasis aims for an integrated approach to business processes: instead of operating to a business plan on one hand and a quality management system on the other, the standard aims to integrate both of these functions into one systemR. An ISO 9001:2000 compliant QMS sees an organization as a set of interrelated processes, each of which must be planned to include:
- defined goals
- a defined set of interrelationships with other processes
- continual measurement, review and improvement.
The standard consists of the following sections:
4. Systemic Requirements
- Establish your quality system - the organization must identify and manage the family of processes needed to ensure conformity
- Document your quality system - documentation forms the basis for understanding the system, communicating its processes and requirements within the organization, describing it to other organizations, and determining the effectiveness of implementation
5. Management Requirements
- demonstrate commitment by conducting certain activities:
- communicate the importance of meeting customer, statutory and regulatory requirements,
- establish quality policy,
- ensure quality objectives are established
- provide resources,
- perform management reviews,
- ensure availability of resources
- ensure that customer requirements are determined and are met with the aim of enhancing customer satisfaction,
- ensure that the quality policy:
- is appropriate to the purpose of the organization,
- includes commitment to comply with requirements and continually improve the effectiveness of the quality management system,
- provides a framework for establishing and reviewing quality objectives,
- is communicated and understood within the organization,
- is reviewed for continuing suitability
- ensure that quality objectives are established at relevant functions and levels within the organization. Quality objectives should be measurable and consistent with quality policy,
- ensure that the quality system conforms to established process requirements and that changes to the quality system are planned, implemented and documented,
- ensure that responsibilities and authorities are defined and communicated within the organization
- appoint a Quality Manager who has responsibility for:
- ensuring processes needed for Quality management are established, implemented and maintained,
- reporting on the performance of the Quality Management System (QMS) and the need for improvement,
- promoting awareness of customer requirements
- ensure appropriate communications processes are established including communicating effectiveness of QMS
- review the QMS, at planned intervals to ensure continuing suitability, adequacy and effectiveness
6. Resource Requirements
- the organization should determine and provide necessary resources to implement and maintain QMS and to enhance customer satisfaction by meeting customer requirementsN,
- personnel performing work affecting product quality shall be competent on the basis of appropriate education, training, skills and experienceN.
- the organization should:
- determine personnel competencies needed,
- provide necessary training,
- evaluate effectiveness of actions taken,
- ensure staff are aware of the relevance and importance of their activities and contributions,
- maintain records of education, training, skills and experience
- the organization should determine, provide and maintain an infrastructure to achieve conformity to product requirements,
- the organization should determine and manage the work environment needed to achieve conformity to product requirements
7. Realization Requirements
- the organization should plan and develop the processes needed for product realization. Realization planning should maintain consistency with the other processes in the QMS. This should include:
- quality objectives and requirements,
- processes, documents and resources,
- verification, validation, monitoring, inspection and testing specific to the product and its acceptance,
- evidentiary material to prove compliance against requirements (including regulatory)
- the organization should determine customer requirements including:
- delivery and post-delivery activities,
- requirements not stated by the customer,
- statutory and regulatory requirements.
- the organization should review the requirements prior to commitment to supply the product ensuring that:
- requirements are defined,
- issues are resolved,
- commitments are achievable.
- the organization should determine and implement necessary communications with customers,
- Outputs of Design and Development should be provided in a form that enable verification against design and development inputs and should be approved prior to product release.
- at suitable stages, systematic reviews of Design and Development are performed in accordance with plans,
- the Design and Development should be verified to ensure that the outputs meet the input requirements. Records should be maintained,
- the Design and Development should be validated to ensure that products are capable of meeting their defined requirements,
- changes to Design and Development should be identified and records maintained. Changes should be reviewed, verified and validated before implementation. The review of Design and Development changes should include evaluation of the effect of the changes on constituent parts and products already delivered,
- the organization should ensure that purchased products conform to purchasing requirements. The organization should select suppliers in accordance with requirements. Criteria for selection, evaluation should be established and records of the results of the evaluations (and subsequent actions) maintained. Purchasing information should describe the product to be purchased including:
- requirements for approval of product, processes and equipment,
- requirements for personnel qualifications,
- QMS requirements
- the organization should establish and implement inspections and other activities necessary for ensuring purchased products meet specified purchasing requirements,
- the organization should plan and carry out production and service provision under controlled conditions which include:
- information describing the product,
- work instructions,
- suitable equipment,
- implementation and/or availability and use of monitoring and measuring tools,
- delivery and post-delivery activities.
- the organization should validate any processes for production and service provision where the resulting output cannot be verified by monitoring or measurement,
- where appropriate, the organization should trace the product through any stages/milestones during the realization process(es). Where traceability is a requirement, the organization should control and record all unique outputs of the product.
- the organization should ensure the conformity of the product during all intermediate processes and ensure its overall integrity to the intended destination,
- the organization should determine, implement and review the monitoring and measurement to be undertaken to produce evidence of conformity
8. Remedial Requirements
- the organization should plan and implement monitoring, measurement and analysis and improvement processes as needed to:
- demonstrate product conformity,
- ensure conformity with QMS,
- continually improve effectiveness of QMS.
This should include determination of applicable methods.
- the organization should monitor information relating to customer perception as to whether the organization has met customer requirementsN,
- the organization should conduct internal audits at planned intervals to determine the success of the QMS:
- that it conforms to ISO standards and the requirements of the QMS as set by the organization,
- is effectively implemented and maintained.
- the organization should apply suitable methods for monitoring and measuring the QMS. Where planned results are not met, corrective action should be initiated
- the organization should monitor and measure the characteristics of the product to verify that its requirements have been met. Evidence of conformity with acceptance criteria should be maintained, including appropriate authorizations for release. Products should not be released until all planned arrangements have been satisfied or signed off by an appropriate authority,
- the organization should ensure that nonconforming products are identified and controlled to prevent unintended effects. Controls and related authorities for dealing with nonconforming products should be defined in documented procedures. Nonconformance should initiate appropriate actions including:
- actions to eliminate detected nonconformance,
- authorizing its use or acceptance under concession by a relevant authority, and, where applicable, by the customer,
- actions to preclude its original intended use or application.
All actions and their reasons should be documented. When nonconforming products are corrected they should be re-verified to demonstrate conformity. When nonconforming products are detected after delivery or use has started, the organization should take actions appropriate to mitigate or eliminate potentially deleterious effects from the nonconformity.
- the organization should determine, collect and analyze information to demonstrate the suitability and effectiveness of the QMS and to identify where improvements are possible and cost efficient,
- the organization should continually improve the effectiveness of the QMS through use of quality policies, objectives, audit results, data analysis, corrective and preventive actions and management reviews. The organization should take actions to eliminate the causes of nonconformities in order to prevent their recurrence. The organization should determine actions to eliminate the causes of potential nonconformities, which should be documented in procedures.
One of the basic tenets of ISO 9001:2000 is continuous improvement by critical self-evaluation. The output from the self-evaluation is fed into a planning stage to determine actions needed to improve the system. Following the planning and consultation comes the action phase where the proposed changes are implemented. Then the cycle starts again by checking that the changes are effective and meaningful by self-evaluation.
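The cycle described above (check, plan, act, then check again) can be sketched as a simple loop. The function names, record layout and numeric threshold below are illustrative assumptions, not part of the standard:

```python
# A minimal sketch of the ISO 9001:2000 continuous-improvement cycle
# (self-evaluate -> plan -> implement, repeated). All names and the
# numeric values are illustrative assumptions, not from the standard.

def self_evaluate(metrics):
    """Check: flag any metric that misses its target."""
    return [name for name, (actual, target) in metrics.items() if actual < target]

def plan_actions(findings):
    """Plan: turn each finding into a proposed corrective action."""
    return [f"raise performance of '{f}'" for f in findings]

def implement(actions, metrics):
    """Act: apply the planned changes (here, a token improvement)."""
    for action in actions:
        name = action.split("'")[1]
        actual, target = metrics[name]
        metrics[name] = (actual + 0.05, target)  # stand-in for a real change

def improvement_cycle(metrics, iterations=3):
    for _ in range(iterations):
        findings = self_evaluate(metrics)  # critical self-evaluation
        if not findings:
            break
        actions = plan_actions(findings)   # planning stage
        implement(actions, metrics)        # action phase
    return metrics

metrics = {"on-time delivery": (0.90, 0.95), "defect-free rate": (0.97, 0.95)}
improvement_cycle(metrics)
```

The loop terminates either when self-evaluation finds nothing to improve or after a fixed number of cycles; in a real QMS the cycle is, of course, open-ended.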
Governance
A framework can contribute to the creation of a business model of an IT enterprise, or of constituent part(s) of that organization. When it does this, it describes the relationships amongst the functions and processes that together constitute the total functioning of the organization being described. When the organization under consideration forms part of a larger business enterprise, it determines its strategic and tactical operations through "alignment" with corporate goals and objectives. A key element in achieving this alignment is a governance framework.
"We define IT governance as specifying the decision right and accountability framework to encourage desirable behavior in using IT."
Peter Weill, Jeanne W Ross, IT Governance, Harvard Business School Press, 2004, ISBN: 1-59139-253-5, p. 2
A governance framework is, therefore, a specific view of the organization designed to identify systematically who makes, and who contributes to, decisions.
All enterprises have IT governance. Those with effective governance actively designed a set of IT governance mechanisms (committees, budgeting processes, approvals, and so on) that encourage behavior consistent with the organization's mission, strategy, values, norms and culture...
Without a cohesive IT governance design, enterprises must rely on their CIOs to ameliorate problems through tactical solutions rather than position IT as a strategic asset.
Peter Weill, Jeanne W Ross, IT Governance, Harvard Business School Press, 2004, ISBN: 1-59139-253-5, p. 2-3
Weill and Ross, in their book IT Governance, looked at some 300 private and public sector organizations to assess best-practice structures and practices for IT governance. They produced a grid to perform this assessment, outlining the kinds of decisions required by the organization (distinguishing, for each kind, between providing input and making the decision) versus "archetypes" describing the decision arrangements found in their researchN.
While the authors point out that the best governance "archetype" really depends on the culture, goals and objectives of the organization (eg. the need for flexibility due to market conditions versus a stable industry, a private versus public sector focus, etc), they outline seven characteristics of top governance performersR:
- More managers in leadership positions could describe IT governance in the organization,
- Top governance performers achieved greater senior management awareness of governance simply by engaging more often and more effectively,
- More direct involvement of the senior leaders in IT governance,
- Clearer business objectives for IT investment,
- More differentiated business strategies,
- Fewer renegade and more formally approved exceptions to standards,
- Fewer changes in governance arrangement from year to year.
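The Weill/Ross grid can be thought of as a two-dimensional lookup: for each decision domain, record which archetype provides input and which holds the decision right. The domain and archetype names below follow the book; the sample assignments are purely illustrative, not a recommendation:

```python
# Sketch of a Weill/Ross-style governance arrangement matrix.
# Decision domains and archetypes follow the book; the sample
# assignments below are illustrative only.

DOMAINS = ["IT principles", "IT architecture", "IT infrastructure strategies",
           "Business application needs", "IT investment"]
ARCHETYPES = ["Business monarchy", "IT monarchy", "Feudal",
              "Federal", "IT duopoly", "Anarchy"]

# For each domain: (who provides input, who holds the decision right).
governance = {
    "IT principles":                ("Federal", "IT duopoly"),
    "IT architecture":              ("Federal", "IT monarchy"),
    "IT infrastructure strategies": ("Federal", "IT monarchy"),
    "Business application needs":   ("Federal", "Federal"),
    "IT investment":                ("Federal", "Business monarchy"),
}

def decision_right(domain):
    """Return the archetype holding the decision right for a domain."""
    return governance[domain][1]
```

Filling in such a grid forces the explicit design of decision rights that the authors found distinguishes top performers from organizations relying on ad hoc arrangements.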
Fusing ITSM and Governance
IT Governance serves a different purpose from that of IT Service Management. "IT Governance is often perceived as defining the “what” the IT organization should achieve and ITSM as defining the “how” the organization will achieve it"R. The management of IT is shifting in response to the need for better strategic alignmentR. Peterson distinguished between the two as “Whereas the domain of IT Management focuses on the efficient and effective supply of IT services and products, and the management of IT operations, IT Governance faces the dual demand of (1) contributing to present business operations and performance, and (2) transforming and positioning IT for meeting future business challenges”R. This relationship is depicted on the right.
One of the IT Governance goals is to align with the business objectives defined by the Enterprise Governance. These high-level organizational goals and objectives are used as input to derive the goals, objectives and performance metrics needed to manage IT effectively. At the same time, auditing processes are put in place in order to measure and analyze the performance of the organization. Conceptually, the process can be seen as an “IT results chain”R. ITSM, people, processes and technologies manage and control the IT services and the IT infrastructure according to the objectives received from Enterprise Governance. Another IT results chain is designed to link ITSM with the services and infrastructure.
Concurrent with these changes, the IT infrastructure is moving towards a centralized, highly adaptive utility modelN. The future of IT infrastructure is geared towards a new computing model under which infrastructure is shared among customers and dynamically optimized to achieve efficient use of resources and minimize associated costs.
Multiple Views
There are some key elements which come out of the discussion of these frameworks:
- alignment is the key goal and can be represented as a boundary between the IT Provider and the business community. Moreover, services have impacts and effects which differ between the Consumer of the service and the Payer (ie. business unit management) - another boundary. Service Level Management, Customer Relationship Management and the Service Desk all operate at this boundary.
- ITIL Capacity Management highlights the difference amongst three primary views of the organization:
- Business
- Services
- Resources
This distinction is important in other ITIL disciplines as well. These viewpoints can be usefully employed to distinguish Change Management forums (ie. Federal, Provincial, Local changes), performance monitoring (component, service and end-to-end service monitoring), financial management (budgetary, service and component cost tracking) and SLM (SLAs, OLAs, service catalogues).
- ITSM disciplines can co-exist at different levels of maturity in an organization, with mixed results. Groupings of activities, both amongst and within respective areas, require more or less maturity (as measured by CMM). While acknowledged as important (and the subject of many white papers), a definitive examination of this need has yet to be concluded.
- Moreover, achieving IT Service Delivery at a realistic level (Level 3, Defined, for many organizations) requires additional maturity in many non-IT areas - such as process management, strategic alignment procedures, communications policies, benchmarking and performance management.
To be useful a framework should capture a logical theme. It cannot capture too many themes without becoming overly complicated and confusing. The framework presented here is based upon these considerations.
There is a concept embedded in ITIL Capacity Management which should have a much wider resonance. It suggests that the management of an organization's capacity can be viewed at three distinct levels. An article by Humayun Beg in Smart Decision for Technology Leaders relabels these three ITIL perspectives:
ITIL Term | Description | Humayun Beg Term | Description
Business | responsible for ensuring that the future business requirements for IT Services are considered, planned and implemented in a timely fashion. These future requirements come from business plans outlining new services, improvements and growth in existing services, development plans, etc. | Strategic | done when decisions to expand or contract the infrastructure are made due to expected changes in demand by the business
Service | management of the performance of the live, operational IT Services used by the Customers. It is responsible for ensuring that the performance of all services, as detailed in the targets in the SLAs and SLRs, is monitored and measured, and that the collected data is recorded, analyzed and reported. | Tactical | done when new services are added into the infrastructure
Resource | the management of the individual components of the IT Infrastructure. It is responsible for ensuring that all components within the IT Infrastructure that have finite resource are monitored and measured, and that the collected data is recorded, analyzed and reported. | Operational | done by real-time monitoring and adjustments to capacity as needed
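Under the relabelling above, a management activity can be routed to the appropriate forum by the view it belongs to. A minimal sketch; the activity examples are hypothetical:

```python
# Sketch mapping the three ITIL Capacity Management views to their
# relabelled management levels; the activity examples are illustrative.

VIEW_LEVELS = {
    "Business": "Strategic",    # expand/contract decisions driven by demand
    "Service":  "Tactical",     # adding new services to the infrastructure
    "Resource": "Operational",  # real-time monitoring and adjustment
}

# Hypothetical examples of routing activities to the right forum.
ACTIVITY_VIEW = {
    "annual capacity plan review": "Business",
    "SLA performance reporting":   "Service",
    "CPU utilization alerting":    "Resource",
}

def level_for(activity):
    """Return the management level at which an activity is handled."""
    return VIEW_LEVELS[ACTIVITY_VIEW[activity]]
```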
You must identify the correct echelon to which to pitch ITIL. For example, if IT operates in the background of an organization, then it is highly unlikely that the Corporate Strategy Level will have any interest in ITIL. In this case, the Corporate Strategy level would probably have the view that IT should look after its own shop. On the other hand, in an organization that sees IT as a business advantage, the Corporate Strategic Level would almost certainly be interested in ITIL and its deliverables.
Malcolm Frye, Selling ITIL - Building a Case for Pursuing ITIL Best Practices in your Organization, January 5, 2004
In short, the concept of views is a useful methodological tool for describing ITIL processes and activities across the entire array of best practice categories. These three views have a wider theoretical history.
Principles are “general rules or maxims which have repeatedly proven successful when followed.” Using established principles can provide a basic guideline for action, based on the success of other organizations (ie., adopting best practices). Years ago the Canadian Federal Government used a "Federal Blueprint for Action" to describe a set of principles within a systemic framework.
The schema highlights five key architectural features of an action plan for the delivery of services. Looking at the organization from each of these views permits a systematic analysis of process needs, one which distinguishes many of the primary features of a process:
- The business view highlights the high level businesses which support the organization.
- The work view details the services and physical activities performed throughout the organization in support of its businesses. They are subject to organizational priorities over time and hence should be determined by strategic and operational planning designed to fulfill organizational mandates and objectives in a strategic and coherent sense.
- The information view presents the intelligence required to support the work and business views of the organization. It sits at the juxtaposition of the other four views because it supports each equally. Without information there is no progress, no quality improvement, no continuing research and analysis. The absence of good information may suggest a strategic thrust to collect it in order to augment the organization's capabilities in the future.
- The application view establishes the software applications necessary to support the information requirements.
- Finally, the Technology View pinpoints the basic hardware architecture to be employed in achieving the objectives described by the other views.
Adapting these views to an IT service provider creates interesting and useful parallels with the three views used to describe Capacity Management. The business view encompasses the "Business-IT Alignment" quadrant in the HP model and is composed of the functions of IT Business Assessment, IT Strategy Development, Service Planning and Customer Management. Within the COBIT framework this would include many of the elements under "Planning and Organization".
The service or tactical view encompasses the Work view and the degree of conformance between this and the business view does much to determine the degree to which the IT Division is "aligned" with the business view. This view would include:
- an IT strategic plan
- the Availability Plan,
- the Capacity Plan,
- the Service Level Plan,
- Continuity Plans
- Financial Plan & IT budgets
"Information is the 'glue' that holds an organization structure together. Information can be used to better integrate process activities both within a process and across multiple processes."
Thomas H Davenport, Process Innovation, Harvard Business School Press, 1993, ISBN: 0-87584-366-2, p. 75
The information view describes the source and format in which information is available to manage the other views. This view would include:
- the Configuration Management Database (CMDB)
- the Capacity database (CDB)
- the Availability Management database (AMDB)
- the Performance Database
- the Incident Management database
- the Problem Management database
- Change calendar
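These databases act as the integrating "glue" Davenport describes: records in one database reference configuration items held in another. A minimal sketch, with hypothetical record layouts and identifiers:

```python
# Minimal sketch of information as "glue": an incident record
# references a CI held in the CMDB, allowing queries that cross
# database boundaries. Record layouts and IDs are hypothetical.

from dataclasses import dataclass

@dataclass
class ConfigurationItem:
    ci_id: str
    name: str
    ci_type: str

@dataclass
class IncidentRecord:
    incident_id: str
    ci_id: str      # reference into the CMDB
    summary: str

cmdb = {"CI-001": ConfigurationItem("CI-001", "mail-server-01", "server")}
incidents = [IncidentRecord("INC-100", "CI-001", "mail delivery delayed")]

def incidents_for_ci(ci_id):
    """Cross-database query: incidents affecting a given CI."""
    return [i for i in incidents if i.ci_id == ci_id]
```

The same CI reference could equally link capacity, availability or change records, which is what allows the views to be managed coherently rather than in isolation.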
The application and technology views together describe the "resources" which complete the picture. They represent the traditional Configuration Items (CIs) which comprise the infrastructure.
Plotting the three views against the ITIL disciplines highlights some interesting variants, some of which have been addressed by the other previously noted frameworks:
The idea that an organization can be looked at in different ways is far from new. In fact, it is the basic premise behind the Zachman Framework. Two key ideas are illustrated in the Zachman Framework:
- There is a set of architectural representations, produced over the course of building a complex engineering product, representing the different perspectives of the different participants.
- The same product can be described, for different purposes, in different ways, resulting in different types of descriptions