As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Cloud computing is being adopted by commercial, government, and Department of Defense organizations as a way to reduce the operational cost of information technology because resources are scalable and billed on a usage basis as opposed to acquired and maintained. However, for a software architect, cloud computing means that elements of a system or solution may reside outside the organization; therefore, systems must be designed and architected to account for lack of full control over important quality attributes. This half-day presentation starts by briefly defining cloud computing, service models, deployment models, drivers, and barriers for cloud computing adoption and the importance of architecting for the cloud. It then focuses on quality attributes that are critical for the cloud consumer, such as security, interoperability, scalability, monitorability, and availability. The focus then turns to the cloud provider and critical quality attributes such as multitenancy, availability, scalability, and performance. Finally, it concludes with a discussion of the present and future of cloud computing.
This half-day tutorial is designed for a broad range of stakeholders—including managers, architects, and developers—interested in adopting DevOps and continuous delivery principles and practices with a particular emphasis on challenges in large-scale, complex environments in industry, Department of Defense organizations, and other government agencies. The tutorial begins with an overview of continuous delivery practices followed by a discussion of DevOps security pitfalls and recommendations. Next, we will focus on designing software to enable continuous delivery and secure, resilient operations as well as engineering the infrastructure and tooling environments to enable continuous monitoring. We will close with an overview of approaches for successfully structuring development teams to enable rapid, confident deployment.
Agile software development methods are now well established in industry. How does the program office overseeing agile programs recognize good practice, and what new roles should we be prepared for? This tutorial will address those and other questions in the context of one of the most common frameworks we see in the Department of Defense setting: the Scaled Agile Framework (SAFe), from Scaled Agile, Inc. Senior staff at the Carnegie Mellon Software Engineering Institute who are supporting major programs will share their insights—including those from interactions with SAFe-implementing contractors—about how agile works, from the program office’s perspective. This tutorial will provide attendees with authentic information about SAFe and agile methods in government settings. Interactive sessions will give participants a chance to challenge common assertions made about agile methods and explore ways to manage risk in highly regulated settings.
This tutorial describes common programming errors and how these errors are exploited by attackers to perform remote-execution denial-of-service attacks and steal sensitive information. Web-based, automotive, and mobile device vulnerabilities will all be described. Understanding how common programming errors are exploited helps attendees to “think like an attacker,” anticipate attacks that may result from architecture or design flaws, and evaluate the effectiveness of mitigation strategies. Strategies to mitigate these attacks—including software engineering, secure development, and secure coding practices—are described.
Before adopting service-oriented architecture (SOA) as a development and operational paradigm, an organization needs to gain a realistic understanding of its potentials and pitfalls. This introductory tutorial begins with a review of SOA implications for an organization and introduces the three basic components of service-oriented systems: services, service consumers, and service infrastructure. It then outlines the basic operations of service discovery, composition, and invocation and introduces common technologies. Web Services is presented in some detail as one approach for implementing SOA, with a description of the basic supporting technologies. The tutorial also addresses SOA development challenges from three perspectives: the service developer, the application developer, and the infrastructure developer. As SOA concepts are revealed, the potentials of cost-efficiency, agility, adaptability, and leverage of legacy investments will become clear. Common misconceptions about SOA are presented, such as the belief that SOA can be implemented “out of the box.”
Are you confident in the accuracy of your program cost estimate? Will your estimate withstand the scrutiny of an independent cost review? Would you sleep better knowing that your experts’ judgments were properly calibrated prior to the estimate? If so, QUELCE is for you!
Cost overruns in Major Defense Acquisition Programs (MDAPs) are common, and studies have implicated poor cost estimation as a contributor. MDAPs typically require several submissions to achieve independent cost-estimate approval, resulting in delays of 3 to 6 months or more. The GAO has reported that cost growth in the DoD R&D portfolio amounted to $32 billion in the past 5 years. Factors associated with poor cost estimates include
In this tutorial, we teach the steps of a novel cost estimation method called Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE). QUELCE synthesizes several future-scenario techniques into an estimation method that quantifies domain-specific uncertainties, allows subjective inputs by experts, visually depicts relationships among sources of uncertainty, plugs into the front end of existing cost models, and naturally produces a rich basis of estimate. Although QUELCE digests greater program execution information than traditional estimation tools, it leverages techniques to limit the explosion of complex, interacting, and cascading program change drivers for a more tractable cost estimate.
The need to capture, query, and manage data in massive-scale repositories has become pervasive in DoD and other government agencies in diverse mission areas ranging from command and control to health care and business systems. This tutorial drills into the buzzwords "scalable" and "NoSQL."
Participants will learn
The SERA method is a model-based approach for analyzing complex security risks in software-reliant systems and systems of systems across the lifecycle and supply chain. Security risk analysis can be employed to reduce design weaknesses in software-reliant systems. During the acquisition and development of software-reliant systems, the focus is primarily on meeting functional requirements within cost and schedule constraints, often deferring security to later lifecycle activities. Addressing design weaknesses as soon as possible is especially important because these weaknesses are not corrected easily after a system has been deployed. The SERA method provides systems engineers with a structure to connect desired system functionality with the underlying software to evaluate the sufficiency of requirements for software security.
The technical debt metaphor acknowledges that development teams sometimes accept compromises in a system in one dimension (for example, modularity) to meet an urgent demand in some other dimension (for example, a deadline), and that such compromises incur a “debt.” If not properly managed, the interest on this debt may continue to accrue, severely hampering system stability and quality and impacting the team’s ability to deliver enhancements at a pace that satisfies business needs.
Although unmanaged debt can have disastrous results, strategically managed debt can help businesses and organizations take advantage of time-sensitive opportunities, fulfill market needs, and acquire stakeholder feedback. Because architecture has such leverage within the overall development lifecycle, strategic management of architectural debt is of primary importance.
During this session, we will discuss the technical debt metaphor and learn about techniques for measuring and communicating technical debt. We’ll compare strategies and share practices to help make these choices. We will conclude by raising awareness of efforts to move beyond the metaphor and provide software engineers a foundation for managing tradeoffs based on models of their economic impacts.
A surprisingly large number of testing types can be, and are being, used during the development and operation of software-reliant systems. We have identified nearly 200 general types of testing, and there are many additional types that are application-domain specific. While most testers, test managers, and other testing stakeholders are quite knowledgeable about a relatively small number of testing types, many people know very little about most of them or are unaware that others even exist.
Classifying testing types into a taxonomy that groups similar types together makes them easier to understand. One way to organize them is by the questions they answer: specifically, types of testing can be categorized by the five Ws and two Hs: what, when, why, who, where, how, and how well. Understanding the different types is important because different types of testing tend to uncover different types of defects, and projects need multiple testing types to achieve sufficiently low levels of residual defects. Not all of the types are relevant on all projects, and a complete taxonomy can be very useful for discovering the ones that are appropriate and ensuring that no relevant type is accidentally overlooked. Such a taxonomy can also be a useful way to organize and prioritize your study of testing.
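The five-Ws-and-two-Hs grouping could be sketched as a simple lookup structure; the example testing types listed under each question below are illustrative placeholders chosen for this sketch, not the tutorial's actual taxonomy content:

```python
# Illustrative sketch of a 5W/2H testing-type taxonomy.
# The types under each question are hypothetical examples only.
taxonomy = {
    "what":     ["functional testing", "interface testing"],
    "when":     ["regression testing", "acceptance testing"],
    "why":      ["risk-based testing", "compliance testing"],
    "who":      ["developer testing", "independent verification"],
    "where":    ["lab testing", "field testing"],
    "how":      ["manual testing", "automated testing"],
    "how well": ["performance testing", "security testing"],
}

def types_answering(question: str) -> list[str]:
    """Return the example testing types grouped under a question."""
    return taxonomy.get(question.lower(), [])

# A project team could walk the seven questions to check coverage.
for question, types in taxonomy.items():
    print(f"{question}: {', '.join(types)}")
```

Walking all seven questions in this way mirrors the coverage check the tutorial describes: each question prompts the team to confirm that no relevant category of testing has been overlooked.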
This tutorial introduces our taxonomy of testing types, clarifies the grand scope of testing, and enables you to better select the appropriate types of testing to perform.
The development and acquisition of modern weapons systems is an increasingly complex undertaking. Software-intensive systems, in particular, pose many challenges that developers and program managers must overcome. Operational Test and Evaluation’s role is the independent evaluation and validation of a system’s operational effectiveness, suitability, and survivability. This role is independent from, but not divorced from, developmental testing.
The systems engineering design and verification processes for software-intensive systems require approaches that address complexity and manage risk for the developer, the purchaser, and the user in a broadly integrated fashion. Mr. Duma’s keynote will address the importance of early user involvement in a software program’s lifecycle; the application of scientific test processes, tools, and techniques to evaluate systems in complex environments; the value of automated software testing throughout a program’s lifecycle; and the imperative that cybersecurity be “baked in” from the start during the software design process.
Many government organizations are embarking on the development of enterprise-wide IT systems that will integrate and modernize a set of capabilities that even today are still being provided by a set of “siloed” legacy systems. The use of COTS business software and open standards should support better capabilities that can be integrated across the enterprise, while enhancing sustainability and reducing maintenance costs. Those who are contemplating, or even actively developing, such systems are grappling with a recurring set of issues that are independent of the system’s application domain. This presentation discusses five of these issues, describing the advantages and disadvantages of different ways of approaching them:
If you’ve heard consultants talking about the virtues of agile software development, you’ve probably heard their complaints about the “heavyweight” processes that have dominated the industry and all the excess documentation they require. What you don’t often hear is that it’s not really the amount of documentation that weighs you down, but rather the long wait for feedback about what you’re building that’s the source of the problem. In this presentation, you will learn about the sound principles and engineering-minded tradeoffs that occur when agile methods are applied successfully.
In an era of sequestration and austerity, the federal government is seeking strategies for software reuse that will allow it to move away from stove-piped development toward open, reusable architectures. In this 30-minute talk, we’ll discuss up-and-coming cloud technologies used by members of the intelligence community along with some of the challenges and opportunities they present.
Soldiers and frontline personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision making, and mission planning. These resource-constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data are offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This session presents the tactical cloudlet concept and an implementation targeted at promoting survivability of mobile systems. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that lead to enhanced situational awareness and decision making at the edge.
The Carnegie Mellon Software Engineering Institute (SEI) is researching the definition of complexity to determine what characteristics of avionics systems can be measured to help evaluate whether a system is capable of being certified as safe. The Federal Aviation Administration (FAA) has asked the SEI to identify appropriate definitions of complexity for this purpose, then to identify possible measures and effects of complexity on aircraft safety. We are analyzing how complexity negatively affects avionics systems and aircraft safety so that we can focus on a small number of measures most important to the FAA. In this participatory session, we use what we have learned and help you learn
After this session, you will understand the breadth of meanings of the term complexity and determine for yourself which meanings to include in your complexity-reduction effort. You will also understand what makes a good complexity measurement and how you might change or adapt the results that the SEI is considering for the FAA to your organization. Finally, you will learn where in your program complexity can be reduced or managed, using the kinds of data collected for the measurements.
Open Systems Architecture (OSA), an approach that integrates business and technical practices to create systems with interoperable and reusable components, has outstanding potential for creating resilient and adaptable systems, but the associated challenges make OSA one of the most ambitious endeavors in software architecture today. This panel discussion will focus on the progress made so far, the remaining challenges, and strategies for addressing those challenges.
Panel members will speak about OSA from several perspectives, including technical engineering, policy, contracting, and science and technology research. Participants will discuss their experiences with the practical trials of OSA and offer multiple perspectives—which might challenge one another—related to the technical, organizational, and business aspects of making it a reality.
Audience members from many different backgrounds will benefit from this discussion. OSA is a growing area of interest for the Department of Defense (DoD) as important DoD stakeholders recognize its significant potential. Federal workers who attend this panel will take away an understanding of where things really stand with OSA: How much is hype and how much is reality? General practitioners will also benefit from the lessons learned from the OSA adoption push, such as how software architecture can support reconfigurability, recomposability, and other -ilities.
OSA is a promising and important undertaking that deserves a broad, realistic treatment of what has been accomplished so far, how much of the underpinning is technical (especially architectural) versus organizational or business related, and how far we really have to go before its potential becomes reality.
Social media, a type of open source information, has exploded in recent years. Our adversaries, particularly ISIS, routinely use social media to recruit, threaten, and advertise their actions. Social media is also used by people who innocently share information relevant to U.S. personnel. The need for improved U.S. capability to analyze social media is widely recognized, but analyzing social media streams to provide tactical warnings is challenging for several reasons:
The Advanced Mobile Systems Initiative at the Carnegie Mellon Software Engineering Institute has developed the Edge Analytics system that ingests open-source social-media data streams and identifies significant events and emerging trends in time to inform and influence operations. We will discuss the architecture and implementation of Edge Analytics; present findings from analyzing Twitter data related to the 2012 attack on the U.S. Diplomatic Mission in Benghazi; discuss field trials with the Department of Defense, National Guard, and first responder communities; and demonstrate the system.
All organizations face challenges in changing their culture and adopting DevOps philosophies. This is especially true in many federal government agencies. Through well-intentioned policies and procedures, many agencies have created siloed environments where change is slow and difficult. Finishing the last leg of large-scale software development project acquisitions can be particularly challenging and expensive. Barriers often impede getting hardware and software systems fully tested, transitioned, and running in production on schedule. Through our experience as a passionately DevOps-focused software development group within the Carnegie Mellon Software Engineering Institute—a federally funded research and development center that is creating, delivering, and transitioning cutting-edge software solutions to government organizations—we have struggled with and overcome challenges in helping the government adopt DevOps principles. Learn how we have conquered these challenges and helped shift our government stakeholders’ thinking by coaching and initiating DevOps in their operational and development environments.
Despite tremendous technological advances in handheld and wearable devices, warfighters often cannot get information they need when they need it. Causes include continued reliance on paper reports, one-way information flow, and lack of network bandwidth and handheld devices to access information. More bandwidth and new devices can improve reporting and increase the volume of information, and the Department of Defense (DoD) is increasingly interested in having soldiers carry handheld mobile computing devices to support their mission needs, but these advances will also create information overload.
This presentation will discuss and demonstrate the ISE system built by the Advanced Mobile Systems Initiative at the Carnegie Mellon Software Engineering Institute. The goal of ISE is information superiority via group-context-aware mobile applications that support integration of contextual information from individual soldiers, nearby soldiers operating as a unit, and the enterprise. This information can then be used to enhance the precision of information provided to warfighters. Specific innovations of this work include consideration of a wide range of contextual information, including the dynamics of unit operations, to achieve a common mission goal.
The presentation will cover several experiments and field trials performed using ISE, including
Investigations into potential causes of Unintended Acceleration (UA) for Toyota vehicles have made news several times in the past few years. Some blame has been placed on floor mats and sticky throttle pedals. But a jury trial verdict found that defects in Toyota's Electronic Throttle Control System (ETCS) software and safety architecture caused a fatal mishap. This verdict was based in part on a wide variety of computer hardware and software issues. This talk will outline key events in the still-ongoing Toyota UA story and pull together the technical issues that have been discovered by NASA and other experts. The results paint a picture that should inform not only future designers of safety-critical software for automobiles but also all computer-based system designers.
Distributed Adaptive Real-Time (DART) systems are cyber-physical systems (CPS) with physically separated nodes that communicate and coordinate to achieve their goals and that self-adapt to their environment to improve likelihood of success. DART systems promise to revolutionize several areas of civilian and Department of Defense interest, such as robotics, transportation, energy, and health care. However, to fully realize this potential, the software controlling DART systems must be engineered to have high assurance and certified to operate safely and effectively.
Achieving this goal is challenging—and infeasible with current testing-based approaches—due to complexity resulting from concurrency and coordination, environment uncertainty, and unpredictable system evolution caused by (self-)adaptation. In this talk, we present a sound engineering approach based on judicious use of domain-specific languages with precise semantics, rigorous analysis, and design constraints that leads to assured behavior of DART systems. Our approach uses a synergistic combination of analyses from different scientific domains. It is designed to assure, in a scalable manner, critical timing, functional, and probabilistic requirements for systems with uncertain environments and coordination. We have implemented our approach in a workbench and evaluated it on a model problem. As part of this research, we have combined architecture-based analysis with state-of-the-art verification algorithms for real-time schedulability of mixed-criticality systems, software model checking, and statistical model checking, along with proactive self-adaptation and middleware technology. The result is an evidence-based approach for producing high-assurance DART software involving multiple layers of the CPS stack. We conclude with open problems and directions for future work.
Agile projects can have issues with quality just as waterfall-oriented projects do. This talk will focus on how the government can experience true agility with quality. I will discuss the challenges that we faced and how we successfully overcame them.
Most agile projects have sprints of less than 4 weeks, and monthly status reports are of limited value to track progress. In addition, many agile projects accumulate technical debt with every sprint. In many organizations, up to 80% of maintenance spending is for bug fixes.
Our agile teams consistently deliver substantially defect-free software on budget and on schedule. Our developers collect and report size, time, defects, and tasks precisely and accurately. The teams use Earned Value Management at the individual and team levels with the ability to detect as little as one day of schedule slip weekly.
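The weekly slip detection described above rests on standard earned-value arithmetic. The following is a minimal sketch with hypothetical numbers (the function and variable names are mine, not the teams' actual tooling): a task's earned value is compared against its planned value to yield schedule variance and a schedule performance index.

```python
# Minimal earned-value schedule check (illustrative numbers only).
def schedule_metrics(planned_value: float, earned_value: float):
    """Return (schedule variance, schedule performance index).

    A negative SV or an SPI below 1.0 indicates the work is behind plan.
    """
    sv = earned_value - planned_value
    spi = earned_value / planned_value
    return sv, spi

# Weekly check in planned task-hours: 40 h planned, 32 h of planned work done.
sv, spi = schedule_metrics(planned_value=40.0, earned_value=32.0)
print(f"SV = {sv:+.1f} h, SPI = {spi:.2f}")  # SV = -8.0 h, SPI = 0.80
```

Tracking at the individual-developer level with daily task granularity is what makes a one-day slip visible within a week: a single unfinished day of planned work immediately shows up as negative schedule variance.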
Our teams avoid technical debt by not relying on testing alone for defect removal. The teams conduct team inspections and personal reviews for early defect removal and put the highest quality code into test. Individual developers strive for more than 80% of the components they develop to have zero defects in integration, system, and acceptance test, thereby minimizing, and in many cases totally eliminating, technical debt. We provide data from our projects to illustrate the use of these practices.
What this means to customers is a dramatically reduced number of security incidents attributable to poor quality software code. In addition to improved security, customers benefit from significantly reduced software operations and maintenance costs. Instead of investing time and money to fix bugs in the production software, customers can reallocate that spending to new features and enhancements.
The government and its contractors should make a commitment to quality from the boardroom on down. Quality should be the number one goal for every project. This means empowering software developers and teams with the proper skills and training needed to minimize the number of defects in their software and deliver products with fewer vulnerabilities the first time around. With quality at this level, the government can reasonably require contractors to provide a warranty against defects in production use.
The term technical debt describes an aspect of the tradeoff between short-term and long-term value in the software development cycle. For example, mistaking a heavy focus on rapid delivery of business features for agility may result in decreased focus on quality and architecture. Hence, the results of the tradeoffs may accumulate as technical debt. An ongoing focus on managing technical debt is critical to the development of high-quality systems that meet their customers’ needs in a timely manner. Left unmanaged, technical debt causes projects to face significant technical and financial problems, leading to increased maintenance, operation, and evolution costs. Agile practices of refactoring, test-driven development, and software craftsmanship are often mistakenly deemed sufficient to manage technical debt. For mission-critical, large-scale systems, there is more to consider with respect to technical debt; risks of accumulating debt are greater, practices (such as refactoring) start to break down, and technical debt becomes harder to find and fix because it is not as visible. In this session, we will explore common fallacies about technical debt and possible actions that development teams can take to better manage it.
Many Department of Defense programs today have a multitude of metrics data being reported by their contractors as well as those collected and tracked by the program office. However, how do you effectively aggregate and report the data at the program manager level (or higher) to get a complete picture of the health of a program? Often the data is reported too late to be useful or even actionable. This presentation will show a method of using a program dashboard representation to aggregate the data being reported as well as methods to provide some insight into schedule risk based on certain types of data. This approach has been implemented in various Air Force programs and is not limited to programs in development but can be implemented within sustainment efforts as well.
For systems of systems (SoS), severe integration and operational problems can arise due to inconsistencies, ambiguities, and gaps in how the architectures address the quality attributes (nonfunctional requirements such as availability, predictability, and security). The problems are exacerbated in contexts where major system and software elements of the SoS are developed concurrently and independently. The Carnegie Mellon Software Engineering Institute (SEI) has developed an approach—called the Mission Thread Workshop (MTW)—for eliciting quality attribute considerations as augmentations to end-to-end mission threads early in the architecture development process and for evaluating SoS, constituent system, and software architectures against these mission threads to identify architecture risks. These mission threads can be used throughout a program’s lifecycle.
The SEI has applied the MTW on a variety of SoS architectures in Department of Defense (DoD) organizations, and this talk will present the MTW in the context of a DoD mission-critical SoS example. The example includes derivation of system- and software-specific scenarios to drive a System and Software Architecture Evaluation of a constituent legacy system in the SoS. It also includes lessons learned from real-world application of the methods. At the end of this session, attendees will understand
Software-intensive acquisition programs continue to experience recurring cost, schedule, and quality issues despite long awareness of these problems, and their persistence indicates they are more difficult to resolve than one might think. As a result, since the 1990s the Carnegie Mellon Software Engineering Institute (SEI) has performed Independent Technical Assessments (ITAs) of mid- to large-sized software-intensive acquisition programs that have experienced problems, conducting interviews and reviewing documents to produce findings and recommendations for corrective action.
To better understand the persistent nature of the problems they encountered, SEI researchers analyzed data collected from 13 unclassified ITAs conducted over five years in a variety of systems. This analysis revealed that while almost all programs face both technical and programmatic issues, the most significant software-related challenges that Department of Defense (DoD) programs face are due to management and oversight concerns. This presentation reviews these “top 10” findings and compares results with prior DoD analyses to examine trends over time.
To explore these problems further, the SEI also looked at those underlying program dynamics that recur across acquisition programs to help identify root causes. Many of the behaviors contributing to the problems could be explained by the presence of “misaligned incentives” (e.g., trading off long-term value for short-term payoff or undermining group objectives to get individual gains) that drive decision making and create poor program outcomes. This presentation explains a set of recurring dynamics that drive the key high-level findings of the ITA analysis and provides qualitative models of each adverse behavior.
As one of the DoD's two R&D FFRDCs, the SEI conducts a research program spanning areas including software development, vulnerability discovery, digital forensics, malware analysis, embedded systems, formal methods, cyber training, and risk management. R&D projects are awarded using an internal competitive process that takes into account intellectual merit, potential government mission impact, collaborators, and potential to transition to practice.
The primary theme of our projects is "capabilities with confidence," generally provided by software. A particular emphasis is being placed on quantifiable evidence in support of assurance—not just in the security sense, but in the sense of assurance for acquisition, performance, testing, and sustainment. In this talk I will briefly introduce SEI as part of Carnegie Mellon University, discuss how the complexity of and dependence on today's software systems drive the need for tools and methods for greater assurance and security, and discuss several multi-year projects that have had significant ongoing impacts on our government clients and the larger community.
...Congress and others have expressed concern over the Department of Defense’s ability to develop and deploy major IT acquisition programs. This year’s National Defense Authorization Act highlights this issue, noting that gaps include lack of support for business process reengineering, for lowering costs of customization of commercial software, for lowering maintenance costs, for open architectures, for engagement with management schools and small businesses, and for the conversion of legacy software to modern systems. In this talk, Mr. Seraphin will summarize congressional concerns that such gaps in science and technology activities related to IT acquisition of business systems, if left unaddressed, could severely hamper the DoD’s ability to field a modern and efficient IT enterprise that meets the current and future needs of the Department.
Most software systems have some “defects” that are identified by users. Some of these are truly defects in that the requirements were not properly implemented; some are caused by changes made to other systems; still others are requests for enhancement to improve the users’ experience. All of these are recorded as defects and generally stored in a database so that they can be worked off in a series of incrementally delivered updates. For most systems, it is not financially feasible to fix all of the concerns in the near term, and indeed some issues may never be addressed. The government program office has an obligation to choose wisely among a set of competing defects to be implemented, especially in a financially constrained environment. This presentation presents a defect-prioritization method based on a risk priority number. This method will help program offices establish priorities for updating systems.
Many legacy systems were built decades ago using the technologies available at the time and have been operating successfully for many years. But they suffer from being built from components that are becoming obsolete, high licensing costs for COTS components, awkward user interfaces, and business processes that evolved based on expediency rather than optimality. In addition, new software engineers familiar with current technology are unfamiliar with the domain; documentation is scarce and outdated; the business rules are likely to be embedded in the code, which is written in an obsolete language using obsolete data structures; and the cadre of aging domain experts maintaining it is unfamiliar with newer technologies.
There are a number of optional large-grained approaches to modernizing a legacy system. We propose a rational way of using system architectural concepts to develop architectural options, create a scorecard, apply the scorecard, and present the results with recommendations to decision makers. The approach includes the following steps:
The presentation will describe how this approach was applied to a large-scale IT modernization effort.
The Lead System Integrator (LSI) approach to building and integrating large, complex systems has resulted in the failure of numerous high-visibility programs, leading Congress to pass legislation limiting its use and giving renewed interest to the idea of government acting as its own systems integrator, or Government as the Integrator (GATI).
The use of GATI promises a number of benefits over the use of an LSI, including government control of the design of the system and software architectures, better visibility into program status and progress, and the development of technical expertise within the government acquisition workforce. However, the steady growth of interest in GATI has come with its own set of issues, most notably the results of downsizing and loss of technical expertise within the defense acquisition workforce over the past 20 years—and GATI efforts have not always gone well.
This presentation identifies many of the factors that determine whether GATI is more likely to be successful in certain domains and circumstances. It then covers the issues that can impede the successful use of GATI and offers specific guidance that has been used in GATI contexts to help with contractual vehicles and language, architectural approaches to facilitate GATI, and managing the technical staffing issues that challenge most GATI efforts. Different organizational implementation approaches and their advantages and disadvantages are presented and analyzed.
Software-intensive systems present several challenges to the contracting officer (CO) and contracting officer representative (COR). These include what questions to ask at each stage of the contracting process (i.e., Initial Concept Study and contract execution) and how to interpret and evaluate the contractors’ answers. Currently, guidance is scattered among numerous references, directives, and instructions; the COR has to sift through this documentation to find the relevant information. The COR Desk Guide Wiki solves this problem through the creation of a single point of reference, consisting of a curated set of Department of Defense and local documents, templates, and checklists to aid the COR in addition to the general benefits of information sharing and collaboration provided by a Wiki. Built on SharePoint 2013, the initial prototype—though incomplete—demonstrates the potential of this approach. The Wiki can be accessed remotely, through the Carnegie Mellon Software Engineering Institute’s External Collaboration Environment, or it can be imported into an existing SharePoint 2013 instance and be up and running in less than a day.
Have you ever worked on a software project that didn’t result in what the users ultimately wanted? Stakeholders, especially end users, often have requirements in mind that they aren’t aware of. Uncovering them can be quite challenging and involves a way of thinking not found in more traditional elicitation approaches. It requires probing interviews and expanded use of context information to break through the confines of what the requirements engineer typically achieves with a specification-driven process. It requires a method that transforms stakeholders’ tacit knowledge into explicit statements so that insightful and innovative requirements can emerge.
The Elicitation of Unstated Requirements at Scale (EURS) research team at the Carnegie Mellon Software Engineering Institute developed and validated a method for determining the unstated needs of the varied stakeholders typical of today’s large, diverse programs (e.g., sociotechnical ecosystems). This method, called KJ+, is scalable to address the needs of multiple categories of stakeholders; usable by a diverse, noncollocated team performing requirements analysis; and results in a more complete set of requirements as the basis for subsequent system design, implementation, and continued sustainment.
This participatory session will include presentations and short exercises. The presentations cover the KJ method as initially practiced 20 years ago, as well as extensions that allow KJ to be used in a virtual environment (KJ+). The results of a KJ+ case study conducted in 2014 will also be presented. Two brief exercises will be conducted to give participants an opportunity to exercise their interviewing and affinitization skills.
In spite of many great testing “how-to” books, the people involved with system and software testing (such as testers, requirements engineers, system/software architects, system/software engineers, technical leaders, managers, and customers) continually make many different types of testing-related mistakes. These commonly occurring human errors can be thought of as system and software testing pitfalls, and when projects unwittingly fall into them, these pitfalls make testing less effective at uncovering defects, make people less productive at performing testing, and harm project morale. Donald Firesmith has created a repository of 167 of these testing anti-patterns, analyzed and organized them into a taxonomy consisting of 23 categories, and documented each pitfall in terms of its name, description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, recommendations for avoiding it and mitigating its harm, and related pitfalls. This presentation builds on Firesmith’s book by the same name, which documented 92 pitfalls in 14 categories.
This session describes how the interagency Joint Fire Science Program (JFSP) developed and assessed the Interagency Fuel Treatment Decision Support System (IFTDSS) to meet the needs of the wildland fire community for fuel-treatment planning for wildland fire. The past decade saw a dramatic proliferation of software systems intended to help fire and fuels managers. These systems were created by many developers and funded by a variety of sources. The systems were developed without any central control or vision and deployed without a governance process for transitioning developmental or research-grade software applications to operationally ready, supported applications. While this resulted in an increase in problem-solving capability, it also resulted in a fuels management environment with numerous stand-alone tools, system and data access problems, inconsistent fuels management planning, minimal and fragmented security, and ad hoc training. It also resulted in a frustrated fire and fuels management community.
To address this self-described “software chaos,” JFSP worked extensively with users to incorporate a set of existing tools into the IFTDSS using a services-based approach. Prior to the deployment decision, JFSP continued their user outreach by conducting an independent assessment of IFTDSS focused on four key areas: alignment with enterprise architecture guidance, impact of the SOA approach on software development by the community, usability by both novice and expert users, and impact on training and knowledge management. The assessment concluded that IFTDSS could be a major step toward meeting the wildland fire community’s strategic goals if fielded as part of a cohesive governance strategy.
This case study tells the story of the development of a critical IT system within a department of the U.S. federal government. This study focuses on the successes and challenges resulting from applying Agile and Lean methods in a government software development environment. The study is based on interviews, observations, documentation, program guidance, and examination of work products. The case study is written so that other government entities can benefit from the implementation experiences.
The role of software in critical missions continues to expand. As new technologies evolve that depend on the flexibility of software, attack surfaces increase and new vulnerabilities emerge. This talk explores the expanding landscape of vulnerabilities that accompanies the increasing reliance on software and examines key steps to help mitigate the increased risk. Topics include the development of appropriate requirements for the mission, system, and software; development and testing practices for increasing confidence in software assurance; and evaluation approaches for existing systems. The talk will conclude with a view of emerging approaches to further improve the delivery and sustainment of mission critical software.
Intellectual property (IP) rights are an important part of almost every acquisition strategy, but planning for and managing IP rights have special concerns for software and system acquisition. Your program must develop an IP strategy early in the lifecycle to ensure that the proper rights are available throughout the entire lifecycle. Understanding why IP rights are important throughout the lifecycle is vital to determining which IP rights to include in the acquisition. You then need strategies to ensure that your program includes the proper language in its acquisition documents and that the program and its contractors take the proper steps during execution to ensure compliance with the required IP rights.
The process of gathering and summarizing operational data to produce clear and interpretable displays of metrics can sometimes resemble the process of making sausage, especially if you have no tools to help you. This presentation will provide a brief demonstration of tools created by staff at the Carnegie Mellon Software Engineering Institute that help scan, analyze, and prepare data to be used on a weekly metrics report for one of our customers. The tools, and the process we’ve devised for using them, allow us to produce a weekly metrics report on behalf of a government program office. The data come directly from the contractor’s database, and the metrics report allows the government to have information derived by a neutral third party to help trigger productive conversations during status meetings. These government-owned metrics supplement what the contractor provides (rather than being redundant) and provide an independent basis for analyzing trends and patterns highlighted by the contractor-reported metrics.
Acquiring systems that provide the necessary capability and functionality to warfighters is at the heart of all system acquisition efforts. But what significantly impacts much of the development, integration, operation, and maintenance costs and risks are the nonfunctional drivers of the system, also called quality attributes. Examples of quality attributes include availability, security, openness, maintainability, reusability, performance, testability, and usability. Many quality attributes are embodied in the system and software architectures, and the supporting architectural approaches must be analyzed and traded off against each other to successfully achieve the mission and business drivers of the system. All too often, systems’ acquisition strategies and associated artifacts (e.g., RFI, RFP, SOO, SOW) do not adequately address the architecture and quality attribute drivers for the system. As a result, the program office has little control of or visibility into the architecture and design solutions provided by the contractor. When architecture and design issues eventually arise, it is often late in the lifecycle (e.g., during integration, operations, or maintenance), resulting in costly rework. Due diligence must be paid to the software architecture and quality attributes as early in the lifecycle as possible.
This presentation will describe approaches that the Carnegie Mellon Software Engineering Institute has used with program offices to adopt software architecture and quality attribute practices in acquisition contexts to give program offices better specificity, visibility, and management of the software architecture and the quality attributes of the system. The talk will also discuss lessons learned in the application of the approaches.
When the DOD-VA Interagency Program Office (IPO) needed to make architecture tradeoffs for their Integrated Electronic Health Record (iEHR) architecture, they asked the Army Telemedicine and Advanced Technology Research Center (TATRC) and the SEI to evaluate the suitability of NoSQL technology for this big data application. The SEI created the Lightweight Evaluation and Prototyping for Big Data (LEAP4BD) method and worked with TATRC's Advanced Concepts Team to perform a technology evaluation.
This talk discusses why prototyping is necessary for evaluating big data technology and how the LEAP4BD method provides a systematic framework for technology evaluation, and it presents a sample of the results we delivered to the IPO.
Most discussions of agile software development in the past focused on team management concepts and the implications of the Agile Manifesto for a single (small) team. The focus now includes scaling these concepts for a variety of applications. The context in which you employ agile methods drives important choices in how you work. Published frameworks and commercial training available in the market offer a variety of solutions for scaling agile. This talk addresses what is meant by scaling, contextual drivers for implementation choices, and the frameworks available for use today.