Abstract

Decomposition is a dominant design strategy because it enables complex problems to be broken up into loosely coupled modules that are easier to manage and can be designed in parallel. However, contrary to widely held expectations, we show that complexity can increase substantially when natural system modules are fully decoupled from one another to support parallel design. Drawing on detailed empirical evidence from a NASA space robotics field experiment, we explain how new information is introduced into the design space through three complexity-addition mechanisms of the decomposition process: interface creation, functional allocation, and second-order effects. These findings have important implications for how modules are selected early in the design process and how future decomposition approaches should be developed. Although it is well known that complex systems are rarely fully decomposable and that the decoupling process necessitates additional design work, the literature is predominantly focused on reordering, clustering, and/or grouping-based approaches to define module boundaries within a fixed system representation. Consequently, these approaches are unable to account for the (often significant) new information that is added to the design space through the decomposition process. We contend that the observed mechanisms of complexity growth need to be better accounted for during the module selection process in order to avoid unexpected downstream costs. With this work, we lay a foundation for valuing these complexity-induced impacts to performance, schedule, and cost earlier in the decomposition process.

1 Introduction

The complexity of engineered systems is consistently increasing [1]. While this is a testament to advancing technology, complexity has been blamed for many of the challenges facing modern design organizations, including cost and schedule overruns [2–8], and reliability issues post-production [9–11]. Decomposition is a dominant strategy for managing complexity because it enables problems to be broken up into more manageable chunks or modules [12–21]. Although modularity is often framed as a solution to managing system complexity, this paper will document and characterize its previously unarticulated dark side: archetypical sources of complexity growth that certain modes of decomposition can introduce to the problem space.

The process of decomposition has several purported advantages. First, from a schedule perspective, decomposition enables independent tasks to be executed in parallel, reducing design iteration and overall schedule length [22–24]. Second, from a knowledge perspective, it reduces the scope of multidisciplinary knowledge needed to solve the problem, making each module more tractable for designers and disciplinary domain experts [25–29]. Third, from an innovation perspective, decomposition that focuses on modularity and commonality can enable creative solving efforts to focus on particular high-value subproblems [30,31] or ones that benefit from mass customization opportunities [32–36]. At the same time, empirical works have cautioned of the potential downside of over-modularization [37–39], particularly when the system representation misses key interactions [40,41], and there are a large number of relevant objectives to consider [42,43].

Nonetheless, the promise of effective decomposition has inspired an extensive literature on methods and tools to support system architects and designers. These broadly fall into two categories: those that focus on grouping functions [37,44–48] and those that focus on grouping design variables or components [39,49–51]. These two approaches tend to lead to different module assignments [52], since they consider decomposition from different perspectives. In the functional view, the focus is on enabling changeability [31,39,53,54] and mass customization [55], which requires a streamlined mapping of functions to form. In the component view, the focus is on streamlining manufacturing through parallel work. More recently, several scholars have begun developing tools to integrate the functional and product-based objectives [56,57].

This paper focuses on elaborating and characterizing a fundamental issue that the above-described focus has created. During the process of decomposing a complex problem, once an optimal grouping scheme has been identified, the next practical step is to fully decouple any remaining dependencies in the system. This is a necessary step to realize the benefits of decomposition that obligates further design work. Moreover, depending on the problem, this process can be quite involved, leading to the creation of additional design artifacts and associated design choices that are essential to preserve the overall functionality of the system. Since the current literature on decomposition processes focuses on reordering, clustering, and/or grouping operations within a fixed system representation, it misses the effects associated with this follow-on step. As a result, although decomposition aims to reduce complexity, when the full scope of the decomposition process is considered, we observe that the act of decomposing a physically integrated system into loosely coupled modules tends to introduce additional complexity in the problem space. We explain how removing even small out-of-module dependencies in the solution space can introduce sometimes substantial additional design information into the problem space, thus increasing corresponding design complexity. We demonstrate that these effects can be quite large and unevenly distributed, depending on the dependency's placement and role within the system, and suggest that decomposition processes need to consider them more explicitly.

We document how the process of decomposition introduces complexity to the problem space through a detailed case study. Specifically, we compare the native problem specification of an autonomous robotic manipulator developed by NASA to be deployed on the International Space Station (ISS) [58], to the specification for a two-module decomposition of the same problem. We capture the complexity growth through a mixed-methods approach, with three complementary research thrusts. First, we perform a visual inspection by translating the design information into a Design Structure Matrix (DSM) format and providing a basic count of system elements. Second, we adopt a quantitative perspective and utilize two modern measurement approaches [59,60] to quantify the complexity growth when comparing the native problem to its two-module decomposition equivalent. Finally, drawing on detailed design notes of the decomposition process, we employ inductive qualitative methods to explain the observed growth in problem complexity in terms of how design decisions needed to be made to support decomposition. Combined, these findings enable a rich description of the sources of observed complexity growth due to the decomposition process. Our research suggests that frequent difficulties in system development and integration (e.g., schedule and cost overruns, underperformance) could be attributed to the unaccounted-for characteristics of the decomposition process that we illustrate. Hence, our work establishes the linkage between the theoretical notion of partial decomposability [12] and the decomposition mechanism fundamental to engineering design.

2 Related Work

2.1 Decomposition as a Strategy.

As engineered systems become increasingly complex, there is an ongoing need to integrate knowledge across a wide range of disciplines, while simultaneously leveraging, and being constrained by, a heritage of sophisticated technical systems and artifacts. Within this context, decomposition is an architecting strategy for managing complexity [12], which involves dividing a complex problem, task, or system into smaller pieces and addressing them independently [15,17,19]. Decomposition serves multiple purposes that play a critical role in the successful execution of an engineered system design process: enabling tractability by leveraging specialized organizational knowledge across boundaries, enabling efficiency through parallel work, and ensuring value generation over the lifecycle of the artifact, including by facilitating technology upgrade or mass customization. We elaborate on these functions below.

Decomposition overcomes fundamental limitations of human information processing [61]. Since no individual can practically process and interpret the amount of information and specialized knowledge required to design or develop a complex integrated system, the act of decomposition serves to break the problem into a set of (nearly) decoupled, individually tractable modules [12,17,62]. Intelligently selected modules have the property of being tightly coupled internally while being loosely coupled to other modules in the system. This allows work on individual modules to proceed independently and in parallel [17,23,24], which can reduce project schedules [22,23]. Additionally, by decomposing during the architecting process, designers pick which features to lock in—through design rules [17] or by imposing constraints that de-constrain [63]—so that design effort can focus on the most important subtasks (and not others) [38]. That enables creativity and effort to be focused through specialization [40,64,65], but it also constrains future design trajectories.

There is extensive research across different disciplines on the nature, merits, and implications of various design decomposition strategies in terms of system lifecycle performance [24,66]. Organization science literature has focused on the “fit” between the technical and organizational architectures [67,68]. This viewpoint focuses on the dimensions of institutional coordination across interdependent tasks [26,27]. The core principle is that there should be a match (or “mirroring”) between the structure of technical dependencies and communication channels. Though current theory does not define the direction of causality, a lack of mirroring is most problematic when a technical dependency is unsupported by communication channels [30,68–71]. The correspondence between the organizational and technical architectures can be measured [72] and used to guide both technical decomposition and organizational design. It has been demonstrated that if the firm is unsure of the “right” structure, it is better to under-modularize than to over-modularize [38]. This is because of a tradeoff between the opportunity to adopt module-level innovation and the risks of critical dependencies [70]. In the design literature, decomposition focuses on the problem and is inherent in most engineering design methods [73], whether it is at the conceptual design level or at the parametric level. The Pahl and Beitz systematic design method [74], for example, prescribes hierarchical decomposition of the function structure as a core strategy for conceptual design. Within the multidisciplinary design and optimization (MDO) literature, the complexity of finding an optimal system-level solution is reduced by decomposing the parametric design space [75–77]. MDO techniques, including concurrent subspace optimization, collaborative optimization, bilevel integrated system synthesis, and analytical target cascading, use different forms of problem decomposition and coordination between the subproblems [78].

The systems engineering literature is increasingly focused on the interaction of the system and its uncertain operating environment. It has focused on enumerating the advantages and disadvantages of different decomposition strategies, under different conditions. Individual studies have tended to focus on particular “-ilities”—including modularity [17,30], commonality [33,35], platforming [24,79], and changeability [66,80]—and their impact on lifecycle performance. The common theme is an inherent tradeoff between systems that achieve optimal performance in a static sense and systems that are robust to future environmental uncertainties. Not surprisingly, the general consensus is that integral point solutions cost less and perform better under nominal operating conditions, while modular systems perform well under a wider range of potential futures [81].

2.2 General Module Identification for Decomposition.

The design structure matrix (DSM), an N×N matrix whose identical rows and columns correspond to system elements, is a popular tool for abstracting system architectures [82]. In a DSM, diagonal cells portray the elements, and off-diagonal marks signify a dependency between two elements. The concept has gained widespread use to represent systems at various levels of complexity, ranging from handheld screwdrivers [83] and software directories [67] to aerospace systems [49].

The DSM notation is flexible in terms of communicating different viewpoints of system design. When representing an organization, elements can correspond to people, teams, or business units [23]. When representing an engineered system, depending on the level of abstraction, elements could map to system components or subsystems [84,85]. Similarly, elements can correspond to tasks when representing a process [22].

Since Eppinger and Ulrich popularized DSMs in the systems engineering and design community [37,86], substantial effort has been dedicated to creating DSM-based algorithms and tools to support the design process and guide decomposition decisions [82]. The specific tools vary by subject; for example, process-based DSMs emphasize shortest-path methods [22], whereas product DSMs tend to focus on clustering into modules [39]. In this study, we focus on problem DSMs, and specifically the relationships among elements of problem descriptions and system requirements documents, for which clustering algorithms are most salient.

Figure 1 illustrates the clustering-based decomposition process in cartoon form, which can be summarized as follows. Given a problem formulation, the elements are first organized in a DSM format capturing the interdependencies (on the left), and then the elements of the DSM are reordered through the clustering approach (on the right). This can be achieved either through heuristics (by hand) or through a mathematical algorithm. In the case of an algorithm, the objective function usually involves maximizing the “diagonalness” of the DSM or, in other words, seeks to reorder the elements to maximize the intra-module dependencies while minimizing inter-module links. Depending on the algorithm, modules can be established after the reordering, or the algorithm can explore the optimal number of modules internally [56,57,87,88].
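
To make this objective concrete, the following is a minimal sketch, in Python, of the clustering formulation described above. The toy DSM, the restriction to two non-empty modules, and the brute-force search are our illustrative assumptions, not a published algorithm.

```python
# Minimal sketch of clustering-based decomposition over a fixed DSM:
# score a module assignment by intra-module dependencies (rewarded)
# versus inter-module dependencies (penalized), then search exhaustively.
import itertools
import numpy as np

def clustering_score(dsm: np.ndarray, labels: tuple) -> int:
    """Count intra-module dependencies minus inter-module dependencies."""
    n = len(labels)
    score = 0
    for i in range(n):
        for j in range(n):
            if i != j and dsm[i, j]:
                score += 1 if labels[i] == labels[j] else -1
    return score

def best_two_module_split(dsm: np.ndarray) -> tuple:
    """Brute-force the best assignment of elements to two non-empty modules."""
    n = dsm.shape[0]
    candidates = [labels for labels in itertools.product((0, 1), repeat=n)
                  if len(set(labels)) == 2]  # both modules must be used
    return max(candidates, key=lambda labels: clustering_score(dsm, labels))

# Toy DSM with two natural clusters {0, 1, 2} and {3, 4}, plus one
# out-of-module dependency (element 3 depends on 2) that the clustering
# cannot remove -- it would later need to be decoupled.
dsm = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 0, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]])
print(best_two_module_split(dsm))  # e.g., (0, 0, 0, 1, 1)
```

Note that, as the paper argues, the search operates on a fixed set of elements: the leftover off-diagonal dependency survives the reordering and is simply left for later.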

Fig. 1
Demonstration of the clustering-based decomposition of a “fixed” representation approach that is predominantly held across the literature. C represents complexity, and s.t. is the abbreviation for “such that.”

From this perspective, the decomposition process ends when suggested modules are drawn around diagonal clusters, as highlighted with bold lines on the right side of the figure. Nominally, because the introduction of structure makes it easier for designers to conceptualize and realize a system design, the new system is deemed less complex than the original [89–94].

2.3 The Limitation of Existing Methods to Represent the Nearly Decomposable Nature of Complex Systems.

While the aforementioned DSM-based decomposition approaches have received significant attention in the literature and the associated algorithms are effective at optimizing for their stated objectives, they are not well equipped to respond to the often multidimensional and contradictory objectives faced in complex system design [95–97]. Of primary relevance to this paper, the formulation of decomposition as a grouping and/or clustering exercise misses a critical piece of what decoupling a nearly decomposable complex system involves [12,98]. Once the modules are selected, additional design decisions are needed to fully decouple them, which is a necessary element of achieving the promised advantages.

In most cases, optimal clustering still retains a few off-diagonal dependencies that need to be either “torn” [18,56] or, in practice, managed with an interface. This constitutes a substantial design activity, which is sometimes acknowledged but, we contend, not adequately addressed in current methods. In Fig. 2, we elaborate on Fig. 1 with an illustration that proposes a more complete understanding of the decomposition process, one that looks further than reordering a fixed set of system elements.

Fig. 2
Proposed view of decomposition. This view goes beyond a fixed representation of the system and makes explicit the need to consider the additional design work associated with fully decoupling modules.

Importantly, not all off-diagonal “Xs” are equal when it comes to interface creation. The right-hand side of Fig. 2 illustrates two alternative strategies for addressing an out-of-module “X.” In the first strategy, component A shares dependencies with components in each of the other modules; therefore, as part of the decomposition process, the function of A is allocated across the three modules as A1, A2, and A3. The second strategy is captured by the creation of a new interface module J, which manages the interaction between G (in module 2) and F (in module 3). These types of choices manifest in real systems as well, with varying impacts on the overall design. For example, imposing the design rule that all systems respect a prescribed voltage/amperage limit may have little impact on the rest of the design space [17]. On the other hand, decoupling two dynamically interacting components requires many more dimensions to be standardized.
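
As a small illustration of why this step goes beyond reordering, the following sketch (toy data, borrowing the element names G, F, and J from Fig. 2) replaces a direct inter-module dependency with a new interface element. Note that the DSM must grow by one row and column, a change that a fixed-size representation cannot express.

```python
# Illustrative sketch: tear a direct G<->F dependency and route it
# through a newly created interface element J, growing the DSM by one.
import numpy as np

labels = ["G", "F"]
dsm = np.array([[0, 1],
                [1, 0]])   # direct out-of-module dependency G <-> F

def insert_interface(dsm, labels, a, b, name):
    """Remove the a<->b dependency and reconnect both elements through
    a new interface element appended to the DSM."""
    i, j = labels.index(a), labels.index(b)
    out = np.pad(dsm, ((0, 1), (0, 1)))   # add a row/column for the interface
    out[i, j] = out[j, i] = 0             # tear the direct dependency...
    k = len(labels)
    out[i, k] = out[k, i] = 1             # ...and couple each element
    out[j, k] = out[k, j] = 1             # to the interface instead
    return out, labels + [name]

dsm2, labels2 = insert_interface(dsm, labels, "G", "F", "J")
print(labels2)   # ['G', 'F', 'J']
print(dsm2)      # 3x3: G and F now couple only through J
```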

The primary gap this study aims to fill is to provide a new understanding of the process through which decomposing can introduce new information into the system. Understanding the extent of this growth and the mechanisms through which it happens is important to mitigate the associated costs (e.g., in performance, development time, and complexity) during the module identification process. The overall goal of this work is illustrated in Fig. 2. Although the module planning process tends to yield an apparent reduction in complexity, the decoupling process brings back additional complexity at a level that is often greater than the initial reduction. Therefore, the initial planning should also consider the effects of the later decoupling process to support better module selection. While this notion is quite intuitive to practitioners, it has not been characterized in a systematic way, which is a precursor to adopting it in any of the modeling frameworks introduced previously. Therefore, this paper focuses on empirically characterizing the decoupling step, so that those implications can be incorporated into the initial module choice.

3 Research Approach

3.1 Research Setting: NASA Robotic Arm Challenge Series.

To understand and characterize how the process of decomposing affects the problem space, we needed to identify a research setting where the same reference problem had been fully specified in multiple ways. In most contexts, engineers arrive at one problem architecture and write the specification to reflect that instantiation of the problem, making it nearly impossible to find a suitable counterfactual. For this research, we leveraged a unique field experiment [58], wherein a reference problem—in this case, an autonomous robotic manipulator—was decomposed in multiple alternative ways. This represents a unique opportunity to study the process of decomposition because the decomposition strategy was varied, while the problem context and associated functional capability were held constant.

In this paper, we focus our analysis on two alternative decompositions of a reference problem. The high-level functions of the reference problem involve positioning a gripper within a workspace, attaching it to a handrail on the International Space Station, and being able to execute payload orienting functions as specified. In the first decomposition (D1), a single problem specification was written for the overall reference problem. D1 documentation includes requirements specifying all high-level functions and interfaces to the free-flyer that controls and powers the manipulator being designed, and the rules for interacting with the handrail. Solvers are left to break up the problem however they see fit to solve it. In the second decomposition (D2), a two-module decomposition is predefined. Separate specifications were written for the manipulator arm (D2M1) and the gripping mechanism (D2M2). Together, D2M1 and D2M2 satisfy the same high-level functions as D1. However, because they are intended to be developed independently by separate solvers, additional design decisions were made to remove (or fix) potential inter-module dependencies. As shown in the middle panel of Fig. 3, solvers of D2M1 need to know the volume and positioning constraints of D2M2 even if they will not design the gripping aspect.

Fig. 3
Excerpt from Problem Specifications: (1) D1 overview showing the free-flyer interface and handrail offset; (2) D2M1 overview adding a gripper keep out zone to the representation; (3) D2M2 overview showing only the interface to the manipulator and volume

At a high level, this is a straightforward decomposition. In fact, it is a typical choice in industrial robotics, since positioners are relatively generic and there is an opportunity to customize and vary grippers to suit context-specific needs without affecting the rest of the system. Therefore, this seemed to be a conservative choice for examining complexity growth due to decomposition. In other words, if we observe substantial information and/or complexity growth here, it will only be more significant in other, more exotic decompositions.

3.2 Data Sources: Specifications and Design Notes.

In order to perform an apples-to-apples comparison between decomposed and undecomposed versions of the same problem, we needed equivalent documentation for both problems. We leveraged problem specifications and associated design notes for this purpose. Because of how the challenge series was run, we had published problem specifications for each of D1, D2M1, and D2M2. In addition, since the D2 modules were pre-decoupled with the intention to later recombine them, we also reviewed detailed design notes documenting the intentions for how they would be recombined. Figure 4 shows excerpts from the source requirement documents. The D1 specification is 10 pages long, the D2M1 specification is 16 pages, and the D2M2 specification is 10 pages.

Fig. 4
Example of DSM generation from design information in D2M2 Problem Description document: (a) coded design information and (b) DSM representation

In order to compare the information content of these documents, we coded the specifications for each problem formulation line by line to extract design information and their interactions. Since problem descriptions are written in narrative form and intended to be human-readable, they often include repetition of information to make the reading easier. Therefore, our first step required the written documents to be transformed into a format more suitable for systematic analysis. This coding process identified unique, explicit pieces of information required for solving each of the design problems and the dependencies among them. Figure 4(a) provides an example of how a narrative requirement was rewritten in terms of explicit within-module dependencies. This format of information element and dependency naturally lends itself to being encoded in a DSM. In this example, the first panel shows an excerpt of the D2M2 Problem Description that describes how the positioner should be able to “unpack” itself. The text contains four distinct, related pieces of design information. In this case, the first element, “unpack,” depends on the information contained in the subsequent three. In DSM format, this is encoded as three feedback dependencies.
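
The sketch below illustrates this encoding step under stated assumptions: since the full excerpt from Fig. 4 is not reproduced here, the three pieces of information that “unpack” depends on are given hypothetical names.

```python
# Minimal sketch of translating coded design information into a DSM:
# each unique piece of information becomes a row/column, and each
# dependency becomes an off-diagonal mark. Element names are hypothetical.
import numpy as np

elements = ["unpack", "stowed configuration",
            "deployment sequence", "payload bay volume"]
depends_on = {"unpack": ["stowed configuration",
                         "deployment sequence",
                         "payload bay volume"]}

idx = {e: i for i, e in enumerate(elements)}
dsm = np.zeros((len(elements), len(elements)), dtype=int)
for element, deps in depends_on.items():
    for dep in deps:
        # "unpack" depends on information defined in later rows, which
        # this coding convention records as feedback marks in the DSM.
        dsm[idx[element], idx[dep]] = 1

print(dsm)  # first row carries the three dependency marks
```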

To generate D2, the DSMs for D2M1 and D2M2 needed to be combined following the logic encoded in A2. A2 is an explicit extraction of design choices made during team decomposition meetings. For example, in an effort to leave the design space as open as possible, the full power budget was given to each module in the problem description. However, since the power budget is a hard limit at the system level, A2 recorded a rule that only combinations of D2M1 and D2M2 that met the power budget when combined would be valid. Similarly, when an environmental constraint, like the internal pressure of the ISS, was levied on both subproblems, it was only recorded once when they were combined as D2. We believe that this is a fair comparison because D1 and D2 intend to represent the same problem space [99]; however, it is important to acknowledge that the act of writing requirements on a decomposed system necessitates some consideration of the solution space. This was limited to the design of physical interface artifacts necessary to enable full decoupling, which is a key piece of what we are characterizing in this study.
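
The following is a hedged sketch of this recombination logic, with hypothetical element names and budget values. It captures the two rules described above: shared environmental constraints are recorded once, and budget-type limits are enforced on the combined system rather than on each module.

```python
# Sketch of the A2-style recombination: merge two module element->dependency
# maps, deduplicating shared environmental elements, and check budget-type
# constraints only at the system level. All names/values are illustrative.
def combine_modules(m1: dict, m2: dict, shared: set) -> dict:
    """Union two module maps, recording shared elements only once."""
    combined = {**m1}
    for element, deps in m2.items():
        if element in shared and element in combined:
            combined[element] = sorted(set(combined[element]) | set(deps))
        else:
            combined[element] = deps
    return combined

def meets_power_budget(draw_m1_w: float, draw_m2_w: float,
                       budget_w: float) -> bool:
    """Each module saw the full budget; only combinations within it are valid."""
    return draw_m1_w + draw_m2_w <= budget_w

d2m1 = {"ISS pressure": [], "positioner motion": ["ISS pressure"]}
d2m2 = {"ISS pressure": [], "gripper motion": ["ISS pressure"]}
d2 = combine_modules(d2m1, d2m2, shared={"ISS pressure"})
print(list(d2))                              # pressure recorded once
print(meets_power_budget(30.0, 25.0, 50.0))  # combined draw exceeds budget -> False
```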

3.3 Quantitative Approaches to Measure Complexity.

We aim to measure and quantify the complexity of the native problem D1 and its two-module decomposition D2, so that we can numerically document the complexity growth induced by the process of decomposition. There are various competing measurement approaches to achieve this goal [59,100–106]. Nevertheless, the literature supports an understanding of complexity that is based on the number of parts, interconnections, and the architectural structure of the system [107]. Therefore, for illustration purposes, we implement two of the most popular measures [59,60] that are representative of the predominant perspectives in the literature.

The first is the structural complexity [59], which considers the components, interfaces, and their topology, along with their relative difficulties (or complexities). While the structural complexity measure was initially formulated to analyze instantiated systems, it has since been applied to a wider range of applications including problem decomposition [108]. Representation of element complexities in this approach also allows us to explore the sensitivity of our insights with respect to the assumptions regarding the relative complexity of system elements. The measure C is defined in Eq. (1) and is composed of three main terms that correspond to the following. The first term, the summation over αi, represents the sum of complexities of the individual components in the architecture. The second term, inside the brackets, represents the quantity of the interfaces Aij, multiplied by the complexity of the interfaces βij. Finally, the third term represents the topological complexity of the system, which is calculated by dividing the energy of the architecture E(A), or the sum of the absolute values of its eigenvalues, by the number of elements n in the architecture. For a more detailed description of how the measure is implemented, please see Refs. [59,108,109].
$$C = \sum_{i=1}^{n} \alpha_i + \left[ \sum_{i=1}^{n} \sum_{\substack{j=1 \\ j \neq i}}^{n} \beta_{ij} A_{ij} \right] \frac{E(A)}{n}, \qquad E(A) = \sum_{i=1}^{n} \lvert \lambda_i(A) \rvert \tag{1}$$
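
For concreteness, a direct implementation of Eq. (1) is sketched below; the three-element DSM and the unit weights are illustrative assumptions.

```python
# Sketch of Eq. (1) under the stated definitions: component complexities
# alpha_i, interface complexities beta_ij, binary adjacency A, and graph
# energy E(A) = sum of absolute eigenvalues of A, normalized by n.
import numpy as np

def structural_complexity(alpha: np.ndarray, beta: np.ndarray,
                          A: np.ndarray) -> float:
    n = len(alpha)
    component_term = alpha.sum()                 # sum of alpha_i
    interface_term = (beta * A).sum()            # sum of beta_ij * A_ij
    energy = np.abs(np.linalg.eigvals(A)).sum()  # E(A)
    return component_term + interface_term * energy / n

# Upper-bound style assumption from Sec. 4.2: all weights equal to 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
alpha = np.ones(3)
beta = np.ones((3, 3))
print(structural_complexity(alpha, beta, A))
```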
The second measure we implement is the widely cited coupling complexity measure [60]. This measure is fundamentally based on graph theory and was proposed with the purpose of comparing alternative systems in terms of their relative complexities, given the interdependencies within their design problems and requirements documentation. While the coupling complexity measure does not facilitate weighting the relative complexity of elements, and is therefore not suitable for sensitivity analysis, it provides a well-accepted interconnectedness-based view of the system, which is a proxy for the mechanical, electrical, and computational elements that need to be managed during design. Furthermore, both methods are conveniently compatible with the DSM representation, thus allowing for elaborate analysis given the data at our disposal. In the coupling complexity (CC) approach [60], the DSM is treated as an adjacency matrix and converted into an undirected graph, where the components of the DSM are represented as the nodes and the interfaces are represented as directionless edges connecting the nodes. CC is defined in Eq. (2), where Li represents the number of levels (sets of nodes having the same depth) in a system decomposition, Mij the number of set sizes of a given length (sizes for each level of nodes), and Sn the set size.
(2)

3.4 Qualitative Theory Building About Complexity Growth.

While quantitative measures provide important insight into the extent of complexity growth due to decomposition, they are less able to explain why it is occurring. Therefore, we augmented the quantitative analysis with a deep qualitative dive into the mechanisms that generated the observed growth [110–112]. Specifically, we reexamined the problem descriptions to understand the nature of the new information. As will be shown when the qualitative results are presented in Sec. 4.3, we coded the information based on its physical or informational character and whether it was a design choice or an implication of a previous choice. Following standard qualitative analysis techniques [113], we iteratively examined the resulting codes to extract general features of the process. The resultant features became the mechanisms we proposed to explain the observed growth.

4 Findings

We present two main results. First, the observation that decomposition significantly increased the scope of information required to specify D2 compared to D1, even in this conservative case. Second, a mechanistic explanation of why this happened here and how it might apply to decomposition more generally.

We arrive at these insights through a mixed-methods approach, with three distinct yet complementary analyses that are elaborated in dedicated subsections. In Sec. 4.1, we provide a visual inspection of the complexity growth by translating the raw data into design structure matrices and discussing the change in the count of the system’s elements given their specific roles within the system. While this basic count provides an intuitive understanding, it does not necessarily constitute a proxy of system complexity, given that it does not capture the interrelationships between the elements. Therefore, in Sec. 4.2, we provide a complexity quantification by implementing the structural and the coupling complexity measures, which are representative of the current state of the design literature in terms of estimation of design complexity. We present the quantification results along with a bounding analysis to explore the extent of the observed complexity growth. While Secs. 4.1 and 4.2 document the extent of complexity growth, they do not explain why or how the process of decomposition introduces additional complexity to the problem space. Thus, in Sec. 4.3, we employ qualitative inductive theory building to extract further meaning from the design documentation and articulate three key mechanisms of information addition that occurred through the D1-to-D2 decomposition process: interface creation, functional allocation, and second-order effects.

4.1 A Visual Inspection of the Complexity Growth.

In order to establish a baseline understanding of how D1 compares to D2, we depict the DSM representations of D1 and D2 in Fig. 5. For both D1 and D2, we ordered the DSM elements based on the type of design information (functional, performance, etc.) and rearranged them to allow tightly coupled elements to be placed next to each other. This reordering retains the information regarding the decomposition decisions and allows us to keep track of the information growth. Each element corresponds to a piece of design information specified in the requirements document. For example, one element corresponds to the angular operating range for “yaw” (i.e., 0–90 deg). Since that requirement only has meaning with reference to a reference coordinate system, it has an off-diagonal link to the global coordinate system defined for all motion requirements. A consistent level of design representation was used relative to the problem specification documents. Figure 5 visualizes the different kinds of information embodied in the problem statement. Red captures global system definitions, for example, what being “attached to a handrail” means in useful engineering design terms. In this case, this involves including the concept of a “fixed attachment” that is defined as being able to resist slipping or twisting when subjected to a specific moment load without deformation of a handrail. Blue captures the functional descriptions, which describe what the artifact needs to be able to accomplish. This is where the clean separation of the two modules is most apparent. Purple includes internal interfaces, like the connector plate, which is a necessary part of defining two independent modules. Green highlights the external definitions, like the pressure environment inside the space station, or the form factor of the handrail that the system must attach to. These external definitions are invariant to decomposition.

Fig. 5
DSMs for D1 and D2 with rulers to represent how the problem elements evolve, along with a count of elements. Interconnections within the area of the internal and external interface are highlighted in gray.

To understand how the process of decomposing leads to complexity growth, we followed the visual comparison enabled by the ruler lines in Fig. 5. This figure makes apparent that the growth in the size of the DSM is uneven across the types of requirements. Specifically, the size of the external definitions is consistent, as expected; internal interfaces between D2M1 and D2M2 only exist in D2, by definition; the functional definitions see an expansion, and even D2M1 is larger than D1 on its own; and the global system definitions also see an expansion. Differences in these categories are useful for understanding how the process of decomposing introduces new information and often corresponding complexity. Figure 5 also summarizes the change in size from D1 to D2 within each color band. We use these changes to focus the subsequent qualitative analysis of generating mechanisms.

Table 1

Functional allocation of original motion functions

Function in D1              Functional decomposition description      Assigned to D2M1   Assigned to D2M2
Communicate with Astrobee   Send and receive motion commands                  X
Attach                      (A-1) Unstow and move to the handrail             X
                            (A-2) Grip the handrail                                              X
Stow                        (S-1) Release the handrail                                           X
                            (S-2) Move away from handrail and stow            X
Pan                         Move free-flyer side to side                      X
Tilt                        Move free-flyer up and down                       X

4.2 Quantification of the Extent to Which Decomposing Adds Complexity.

In order to document the extent of complexity growth that can result through the decomposition process, we adopt the structural and coupling complexity measures introduced in Eqs. (1) and (2) and apply them to the problem DSMs for D1, D2, D2M1, and D2M2. Although D1 and D2 nominally capture the same problem, as noted earlier, D2 necessarily includes more detail associated with the decomposition. As a result, to make a fair comparison between the two DSM representations, it is necessary to consider the relative component and dependency weights. However, the coupling complexity does not allow for the representation of weights. Therefore, we use the structural complexity measure defined in Eq. (1) for the bounding analysis and report the unweighted coupling complexity values next to it.

We conducted the bounding analysis based on a retracing of the decomposition process. During the decomposition process, depending on their characteristics, some D1 requirements were directly assigned to the modules of D2, while others were used to derive new requirements that were allocated across the two modules, as is standard in the practice of systems engineering [79–81]. While the process is standard, assigning weights is not. For example, allocating a D1 manual release requirement to D2M1 and D2M2 is not simply a matter of splitting the same function in half and assigning one half to each. Both modules accept a requirement to support manual release, and it is not clear that either is “easier” than the original. Since it is non-trivial for experts to assign relative weights for the complexity of a decomposed requirement, we chose to perform a bounding analysis based on this parent-child relationship. The bounding cases also resonate with some of the alternative assumptions a practitioner could make when assessing the relative complexity of these architectures. We elaborate on these assumptions below.

For the upper bound, we consider one of the simplest and perhaps most naïve assumptions one could explore: all the components and the interfaces for D1, D2, D2M1, and D2M2 are of equal complexity. In other words, each new child inherits the parent’s complexity. Revisiting Eq. (1), we would expect this assumption to exacerbate the complexity growth induced by the decomposition process, since its terms suggest an understanding of complexity that is proportional to the count and individual complexities of the system’s elements. Therefore, we adopt this case as the upper bound of our sensitivity analysis.

For the lower bound, we consider the relative differences in component and interface complexity by establishing a linkage between the parent requirements in D1 and the derived requirements in D2. We make the following assumptions. For directly assigned requirements, we assume that they are all equally complex, carrying the same unity weighting used in the upper bound. For each child requirement, we assume that its relative complexity is inversely proportional to the number of child requirements generated by a given parent. That is, if a D1 requirement was derived into four D2 requirements, we assign an individual component complexity score of 0.25 to each of the children. In the case of directly assigned requirements (i.e., where the same requirement is adopted by both children, as is common for environmental requirements), we assign the complexity of the parent requirement to the child. Assignment of interface complexities is a bit more challenging, as it is determined by a pair of interacting elements and the specifics of how they interact. Since we are interested in establishing a lower bound for our sensitivity analysis, we assign the lowest complexity score among the two interfacing components as their interface complexity. For example, if two interfacing components have relative component complexities of 0.2 and 0.5, we assign their interface a relative complexity value of 0.2. The assumptions of our bounding analysis are visualized in Fig. 6, using the sample DSM modularized from Fig. 2.
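
The sketch below operationalizes these weighting assumptions with illustrative parent-child data; the resulting weight vectors can be plugged into the structural complexity implementation sketched in Sec. 3.3.

```python
# Sketch of the bounding-analysis weights: child components weighted by
# 1/(number of siblings from the same parent); lower-bound interface
# weights take the minimum of the two interfacing components; the upper
# bound sets everything to 1. Parent-child data here is illustrative.
import numpy as np

def child_weights(parent_of: list) -> np.ndarray:
    """Weight each derived requirement by 1 / (children of its parent)."""
    counts = {p: parent_of.count(p) for p in set(parent_of)}
    return np.array([1.0 / counts[p] for p in parent_of])

def interface_weights(alpha: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Lower bound: interface complexity = min of the two element weights."""
    return np.minimum.outer(alpha, alpha) * A

# Four D2 requirements: two derived from parent "attach", two adopted as-is.
parent_of = ["attach", "attach", "pressure", "power"]
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
alpha_lower = child_weights(parent_of)          # [0.5, 0.5, 1.0, 1.0]
beta_lower = interface_weights(alpha_lower, A)
alpha_upper, beta_upper = np.ones(4), np.ones((4, 4))
print(alpha_lower)
print(beta_lower[0, 1])  # 0.5: interface between the two "attach" children
```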

Fig. 6
Illustration of the assumptions made in the bounding analysis using the modularized DSM image from Fig. 2: (a) upper bound—all components and interface complexities are assumed identical and equal to 1 and (b) the lower bound—components are weighted based on the parent-child relationship. Interface complexities are assumed to be equal to the minimum of two interfacing elements

While one could pursue other approaches to assign interface complexities, such as using the maximum of the pair instead, the resulting value would lie somewhere in between our upper and lower bounds. Therefore, rather than suggesting a precise measurement of complexity, our bounding analysis explores the range of complexity values D2, D2M1, and D2M2 could take with respect to the native formulation of the problem D1. Based on these assumptions, we modify the weights in the problem DSMs and compute structural complexity following Eq. (1). We compute the coupling complexity following Eq. (2) assuming all weights are equal. We present the results in Fig. 7, where boundaries of rectangles correspond to the structural complexity of the upper and lower cases of our bounding analysis, the stars represent the value of the coupling complexity, and the horizontal black line represents the complexity of D1, which we use as the baseline.

Fig. 7
Bounding analysis of structural complexity growth compared to D1

Several interesting observations arise from this analysis; however, the most crucial one is that decomposition can induce complexity growth, which contradicts a fundamental assumption of the literature. The complexity of D2 is always larger than that of the native problem D1, and this difference is quite significant, ranging from 70.9% to 325.5% for structural complexity, and 378.2% for coupling complexity. The wide range between the lower and upper bounds suggests that consideration of the element complexities has a strong impact on how the resulting problem is evaluated. While we expect the exact amount to vary with respect to the selected measurement method, the observed complexity growth is considerable, even under the exaggerated lower bound assumptions. Moreover, this points to the fact that heuristics-driven decomposition approaches that are frequently used in practice, such as the one that we illustrate, could unintentionally result in a more complex formulation (D2) than the original problem (D1).

Second, even the resultant modules do not necessarily decrease in complexity. The positioning mechanism D2M1 is more complex than the native problem D1 regardless of weighting, with a 169.3% difference in the upper bound and 20.4% in the lower bound for structural complexity, and 232.2% for coupling complexity. In the case of D2M2, the result depends on the weighting. The upper bound is 46.1% more complex, and the lower bound is 56.0% less complex than the native problem for structural complexity; however, the coupling complexity still evaluates D2M2 as 36.3% more complex. The lower bound for D2M2 is the only case where we observe a reduction in complexity due to the decomposition of D1. Regardless, the resulting modules are not always less complex than the original formulation, further emphasizing the need to better understand the relationship between decomposition and problem complexity. In the sections that follow, we explain why this is happening.

4.3 Qualitative Theory Building: How the Process of Decomposing Induces Complexity.

Although the decomposition of a manipulator (D1) into a gripper and positioner (D2) is a typical one, fully decoupling the design problems required a relatively large number of design choices to be made at the outset regarding how the two modules will interact. This section describes the results of our qualitative analysis which explain three generic mechanisms that link those necessary design choices to the observed growth in complexity.

4.3.1 Mechanism 1: Interface Creation.

The first mechanism focuses on the need to create physical interfaces between modules. The associated design choices correspond to the purple elements in Fig. 5. Many of them are quite familiar to practicing systems engineers and designers. For example, in order to ensure that M1 and M2 can be physically integrated once complete, a common connector plate must be designed in advance. This defines the form of the physical interface between the two modules, including the material and bolt pattern. In addition, these design choices also influence how loads will be transferred between the two modules when the system is integrated. Similarly, designers make choices about how power and electrical signals are transferred between modules. In this case, a standard electrical connector was included in the connector plate, with additional definitions around current, voltage, and pin configurations.

Figure 8(a) is a drawing of the physical interface that was created for the D2 decomposition. This relatively simple interface introduced five new elements of design information into the D2 DSM representation, as discussed in Fig. 5. The summary table in Fig. 8(b) shows these new elements of interface information categorized using an existing taxonomy for module interactions [18]. Given the logic of the D2 decomposition—breaking the robotic manipulator into two electromechanical modules, a positioner and a gripping end effector—M1 and M2 are physically connected, transfer loads, and exchange data. Accordingly, the additional design information can be traced in the expected interface categories: spatial, energy, and information.

Fig. 8
Connector plate that defines the physical interface between D2 modules and additional design information: (a) drawing of new connector plate and (b) additional interface design information

Conceptually, this is an instance of “tearing” the DSM [56]. Tearing is the process of removing less critical off-diagonal dependencies to enable modules to be fully isolated from one another. While tearing was introduced in the early DSM-based architecting literature, it has not propagated into current decomposition algorithms. This is likely because it is only feasible to simply “remove” an off-diagonal dependency in very rare cases. More often, these module-to-module dependencies are replaced with a shared dependency on an existing global standard [17] or a decomposition-specific interface between modules. For either case to occur, a new interface element is inevitably created and introduced into the DSM of the decomposed system, and this occurrence is currently not captured by the existing clustering and reordering algorithms.

4.3.2 Mechanism 2: Functional Allocation.

The second mechanism focuses on the need to allocate system-level functions across modules. Although axiomatic design idealizes a one-to-one mapping of function to form [44], doing so often introduces substantial inefficiencies in mass and volume [114]. In this problem context, and in many other space applications, volume is fixed or highly constrained and minimal mass is strongly desired; thus, system-level functions generally need to be allocated across the modules. Table 1 summarizes the mapping of motion functions from D1 to D2. Where possible, whole functions were assigned to D2 modules; however, in the case of Attach and Stow, parts of the original function needed to be derived into child functions and allocated across M1 and M2. Allocation of these functions to formal elements [115] required them to be decomposed, i.e., identifying which parts, or subfunctions, of the original function can be accomplished by each module, and then assigning each subfunction to a particular module. This act of allocating the two original functions, Attach and Stow, is responsible for the growth in the functional description of the problem, represented by the blue band of the DSM in Fig. 5. These allocations also induced ripple effects that reach beyond the functional description.

To illustrate this functional allocation mechanism, consider the Attach function in Table 1. In D1, Attach means that the manipulator must unpack itself from the payload bay, move to the handrail, and attach to it. In D2, the functional allocation resulted in M1 being responsible for unpacking and moving to the handrail and M2 performing the gripping. However, to fully specify those functions such that independent designers could design systems that meet each and still be guaranteed to be integrable, there is a need to make choices about how M1 and M2 will eventually interact. This goes beyond what we typically think of as an interface, but it nonetheless introduced 12 interfacing subfunctions (shown in Fig. 9).

Fig. 9
DSM of the additional functions generated through the decomposition of the “Attach” function

This significant increase in the number of DSM elements was generated as a result of an effort to simplify the dynamic coupling between the two modules. To elaborate, removing the off-diagonal X and isolating “position” to M1 and “attach” to M2 required several (functional) design decisions to be made. We chose to introduce the rule that only a single module may be actively moving at a time, which necessarily committed module designers to a certain mode of operations. Without this selection, D2 would have had to contain an interface for an active multidegree-of-freedom system moving through space and transferring loads. In other words, this would lead to a dynamic and considerably more complicated interface between the two modules, since it would introduce coupled motion and load sharing for all motion operations. While the commitment to motion control strongly decoupled the operational space for D2, it necessitated the introduction of new functions in D2M1 and D2M2 to enable sequential operations. For instance, a communication need emerged so that the motion operations could be coordinated between the two modules. For D2M1, that meant being able to send motion commands to, and receive confirmations of completed motion from, D2M2. Similarly, for D2M2, that meant being able to process motion commands received from D2M1 and send confirmations of completed motion back.
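
As a toy illustration of this sequential-motion rule (the step names are ours, not drawn from the specifications), the sketch below alternates the actively moving module through a hypothetical Attach sequence, with each step gated on a confirmation from the other module.

```python
# Toy sketch of the "one module moves at a time" coordination rule:
# the sequence alternates the active module, and the passive module
# holds position until it receives a confirmation of completed motion.
from enum import Enum, auto

class Active(Enum):
    M1 = auto()  # positioner moving
    M2 = auto()  # gripper moving

def attach_sequence():
    """Walk a hypothetical Attach operation through alternating steps."""
    steps = [
        (Active.M1, "unstow and move to handrail"),
        (Active.M2, "open gripper"),
        (Active.M1, "position gripper around handrail"),
        (Active.M2, "grip handrail"),
    ]
    for active, action in steps:
        # In the decomposed problem, this hand-off is itself a new
        # function: send command, execute, confirm completion.
        print(f"{active.name}: {action} -> confirm complete")

attach_sequence()
```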

Additionally, a decision about the D2M2 operating concept led to the introduction of another function. A design decision was made for D2M2 to be electrically powered, primarily because many commercially available end effectors are electrically powered, so this decision renders a significant portion of the known solution space feasible and reduces uncertainties and costs. Consequently, this necessitated a D2M1 function to “provide electrical energy” to D2M2. Figure 9 illustrates how these additional 12 functions appeared when the Attach function was decomposed and allocated to the D2 modules.

Returning to our argument regarding complexity growth, a relevant question is whether each of these 12 functions in D2 is equivalently difficult when compared to the original “Attach” function in D1 from which they were derived, or whether their cumulative impact on the system is comparable. In the complexity quantification performed in Sec. 4.2, we showed that even if we make the most conservative assumption that each of these allocated requirements deserves a weight of 1/12 of the original, growth in complexity is still observed due to their interactions.

It is also worth emphasizing that, whatever the weights, the standard fixed-size DSM representation masks this phenomenon completely, since it does not facilitate the addition of new row/column elements. As we have just shown in the DSM of the expansion of the original “Attach” function, accurate representation of decomposed functions can require significant added information in the form of new row/column elements for the newly defined subfunctions. Moreover, the impact of this decomposition path is not limited to the expansion from one function to 12. In the following section, we will show that this initial expansion of the DSM, based on active functional allocation choices, is accompanied by secondary effects that further increase both the size and the interconnectedness of the DSM.

4.3.3 Mechanism 3: Secondary Effects.

In this section, we describe an additional mechanism of complexity growth that may be less intuitive: secondary effects that emerge through the coordination of coupled needs. These are in some sense similar to interface definitions, but they show up as distinct, interconnected elements in a DSM representation, usually to facilitate the decoupling. These decisions are necessary because they enable cohesion among the modules after a functional allocation has been defined.

A fundamental notion in systems architecture is that “all complex systems are entities of larger systems” [114]. Our case study was no exception. Since the native problem D1 was itself part of a larger system, it was subject to externally imposed constraints. Therefore, decomposition decisions had to be made about how to decouple the modules with respect to these constraints. This was necessary to achieve the goal of module independence while ensuring that the integrated solution, D2, still met all of the original problem constraints.

In the original problem, the initial state of the system is packed in the payload bay of the free-flyer. So D2 had to fit into this volume, and the modules D2M1 and D2M2 had to be decoupled such that they also support this initial “stowed configuration.” Available volume is a key design parameter and a limited spatial resource for electromechanical solutions that can accomplish the motion subfunctions allocated to M1 and M2, since there is an inherent physical footprint to systems that convert electrical energy to physical movement, e.g., motors. So, in this case, the “stowed volume” became a shared resource that needed to be partitioned and distributed between the two modules. By revisiting the decomposition of the original “Attach” function described in Fig. 9, we can observe how the stowed volume was broken up in order to further decouple D2M1 and D2M2. Since M1 was responsible for moving out of the payload bay (the “Deploy” subfunction), it needed to know the volume M2 would occupy when integrated and packed within the free-flyer payload bay. This led to the introduction of a volume limit on M2. The definition of this volume is shown in Fig. 10(a), appearing as a critical design constraint in the M2 problem description document (“Packed Volume”) and as an interface definition in the M1 problem description.

Fig. 10
Example of how information regarding secondary effects was introduced to D2. On the left (a) “Packed Volume”, in the middle (b) “Open Volume”, and on the right (c) “Dynamic Volume”.

It is important to note that extensive design effort was committed to defining the “Packed Volume.” Since one of the overarching goals of the D2 decomposition was to prevent over-constraining the design space and enable as many feasible solutions as possible for both modules, an assessment of the physical footprints of the potential positioner and gripper solutions was required. Using active degrees of freedom as a proxy for the relative physical size of M1 and M2, and considering commercially available potential solutions, a specific maximum bound for the “Packed Volume” was selected. It is also worth noting here that this was an instance (among many) where there was a need to consider specific instantiations of solutions in order to fully decouple the functional allocations. So, while the initial process of functional allocation remained in the “problem space,” as recommended by best architectural design practices [114,115], the secondary process of coordinating the modules often required forays into the “solution space.”

We would like to emphasize that not all external constraints necessitate or generate this kind of secondary growth in complexity. The additional design information we observed in the case of the “Packed Volume” is a result of considering the original payload bay volume constraint in conjunction with the functional allocation across the modules. For example, if the control logic functions (typically implemented in software) were allocated separately from those that generate motion in physical space (typically electromechanical components), the software module would have no physical footprint, which would obviate the need to partition the shared “Stowed Volume” resource. Thus, we argue that it is not the external system-level constraint that drives the complexity growth, but rather its coupling with functional allocation decisions. This also highlights the fact that these “secondary” effects are realized subsequent to, and as a consequence of, initial modularization choices.

To illustrate this point further, let us revisit the second and third panels in Fig. 8, which portray how a specific functional allocation decision resulted in secondary growth due to the coordination needs of the decomposed problem, D2. Recall that the original “Attach” function was partitioned into a sequential set of independent motion operations, alternating between the positioner’s (M1) and the gripper’s (M2) motions. In order to support this motion control paradigm, M1 needed more information about M2’s precise location in space to ensure that the integrated system would not unintentionally make contact with itself or anything in its environment while carrying out its positioning functions. Specifically, M1 needed to know the maximum volume M2 could occupy while opening up after being packed in the free-flyer payload bay. This larger “Dynamic Envelope” is shown in Fig. 10(c). Additionally, M1 needed to know what the “Open” configuration of M2 would be just prior to gripping the handrail, so that it could safely move M2 into place around the handrail. Figure 10(b) depicts this limiting volume for the open configuration of M2. The volume intuitively resembles an open gripper and reflects the physical footprint of M2 configured for attaching to a handrail. Like all three new volume definitions illustrated in Fig. 10, this “open configuration” volume serves as an abstraction of the parts of the integrated system needed by each module: M1 needs a bounding representation of M2’s physical footprint to avoid inadvertent contact while positioning, and M2 needs to know of any physical design constraints that exist for each of its operational configurations.
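The value of these envelope definitions is that M1 can reason about M2 as a simple bounding volume rather than needing its detailed geometry. Below is a minimal sketch of such a containment check using axis-aligned boxes; the box representation and all dimensions are illustrative assumptions, not the experiment’s actual envelopes.

```python
# Minimal sketch of a bounding-envelope containment check: M1 reasons about
# M2 via an axis-aligned box ("Open" or "Dynamic" envelope) instead of M2's
# detailed geometry. Boxes are (min_corner, max_corner) tuples of (x, y, z)
# coordinates in meters; all dimensions are illustrative assumptions.

def fits_within(inner, outer):
    """Return True if axis-aligned box `inner` lies entirely inside `outer`."""
    (imin, imax), (omin, omax) = inner, outer
    lower_ok = all(o <= i for i, o in zip(imin, omin))  # outer min <= inner min
    upper_ok = all(i <= o for i, o in zip(imax, omax))  # inner max <= outer max
    return lower_ok and upper_ok

# M2's actual open-configuration footprint vs. its declared "Open Volume".
m2_open_actual = ((0.02, 0.02, 0.00), (0.28, 0.18, 0.22))
m2_open_declared = ((0.00, 0.00, 0.00), (0.30, 0.20, 0.25))
assert fits_within(m2_open_actual, m2_open_declared)
```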

In addition to the emergence of the shared volume definitions, we observed how the same functional allocation choice of sequential motion operations led to the definition of a new system-level design parameter. In this case, alternating which module was actively moving and which was passive necessitated defining the position uncertainty of M2 for the configuration just before gripping. This newly defined design parameter appeared as a motion accuracy threshold for M1 and as uncertainty in the location of the handrail for M2. The two views of this new system design parameter, each from the perspective of one of the modules, are illustrated in Fig. 11. In the first panel we see M1’s accuracy threshold, and in the second, the uncertainty in the location of M2. We note here that the two views of the position uncertainty are not identical: the assumed location error of the handrail for M2 is larger than the positioning accuracy required of M1. This generates a buffer or margin for the integrated system by allowing for some underperformance of the modules while maintaining system-level performance. Generating these complementary uncertainty thresholds ensures that the “Attach” functional allocation choice (to grip with M2 after M1 moves to a location near the handrail) operates correctly when the modules are integrated. This example highlights how, once the original function is split and distributed to the modules, the D2M1 and D2M2 designers need to be aware of additional information to generate module solutions that will work as an integrated solution to the original problem. In contrast, this additional information is unnecessary for the original problem, D1, where the designer has full control over how the approach to the handrail and the attachment to it are managed.

Fig. 11 Introduction of a new system-level design parameter, “Tool Position Uncertainty”
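As a worked illustration of the complementary thresholds, the sketch below checks that the handrail location error M2 is designed to tolerate exceeds the positioning accuracy demanded of M1, with the difference acting as an integration margin. The millimeter values are hypothetical placeholders, not the experiment’s actual limits.

```python
# Minimal sketch of the complementary uncertainty thresholds: the handrail
# location error that M2 must tolerate exceeds the positioning accuracy
# required of M1, and the difference acts as a system-level integration
# margin. The millimeter values are hypothetical, not the experiment's limits.

m1_accuracy_mm = 5.0        # worst-case positioning error allowed for M1
m2_assumed_error_mm = 8.0   # handrail location uncertainty M2 must tolerate

margin_mm = m2_assumed_error_mm - m1_accuracy_mm
assert margin_mm > 0, "M2 must tolerate at least M1's worst-case error"
print(f"Integration margin: {margin_mm} mm")
```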

The examples described in this section illustrate how well-reasoned functional allocations can lead to complexity growth when the resultant modules are fully decoupled, due to the need to coordinate coupled functions. Further, these secondary “ripple” effects can have a more significant effect on problem complexity than the original allocation. In this case, the allocation of one of the original functions, “Attach,” across the D2M1 and D2M2 modules initially created 12 functions (described in Sec. 4.3.2), but additional new design information was then generated when the modules were fully decoupled so that each could be designed independently and in parallel. Figure 9 provides a summary of the increase in DSM size due to the functional allocation of just one of the primary motion functions of D1, “Attach.” In addition to the scale of the second-order effects, the type of information addition is also noteworthy. While the splitting of functions into subfunctions is arguably predictable and not necessarily a source of true complexity addition, second-order effects can lead to the introduction of brand new information. Further, the kind of information added is not limited to what is normally considered “interface” definition. Indeed, the growth in the functional description (recall the blue band in Fig. 5) shows how these secondary effects of decoupling modules can affect the characteristics of the decomposed system, and speaks to just how complex the act of decomposing a complex integrated system is.

In describing the nature of the information addition observed, it may appear that we are conflating the problem and solution spaces, since design decisions are made as part of the decomposition process. However, we contend that modularization does not have meaning without interface design. The D1 and D2 (as the set of D2M1 and D2M2) problem descriptions aimed to represent the same inherent problem, but for D2M1 and D2M2 to cover the same problem space as D1, that space needed to be explicitly partitioned. Our findings emphasize that even in the most “clean” case of decoupling a gripper from a positioner, substantial problem complexity is induced through the process of ensuring later integrability.

5 Discussion and Conclusions

Identifying good decompositions grows more important as the overall complexity of engineered systems increases. However, although it is well known that complex systems are rarely fully decomposable [12], much of the decomposition literature is framed around a reordering process within a fixed representation that ends with module assignment. As illustrated in this study, decoupling partially decomposable modules can require significant additional design work, with associated consequences that introduce considerable information to the design space. This process has been partially described in both the management and design literatures. Within the management literature, design rules introduce the notion of replacing a module-to-module dependency with a shared global dependency [17]. In the design literature, this process is more commonly referred to as “tearing” [18,56,116]. However, neither literature elaborates on how this actually occurs within the DSM notation, nor on its implications for the process of decomposing.
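To make this notion concrete in DSM terms, the following is a minimal sketch (under our own illustrative assumptions, not a procedure from the cited works) in which an inter-module dependency mark is removed and replaced by a new shared element, a design rule that both modules reference, growing the matrix by one row and column.

```python
import numpy as np

# Minimal sketch of "tearing" expressed in DSM notation: an inter-module
# dependency mark is removed and replaced by a new shared element (a design
# rule both modules reference), which grows the DSM by one row and column.
# The 2x2 starting matrix and indices are illustrative assumptions.

def tear(dsm, i, j):
    """Sever dependency (i, j) and route both elements through a new rule."""
    n = dsm.shape[0]
    torn = np.zeros((n + 1, n + 1), dtype=int)
    torn[:n, :n] = dsm
    torn[i, j] = 0               # remove the module-to-module dependency
    torn[i, n] = torn[j, n] = 1  # both elements now depend on the shared rule
    torn[n, n] = 1
    return torn

dsm = np.array([[1, 1],          # element 0 (in M1) depends on element 1 (in M2)
                [0, 1]])
print(tear(dsm, 0, 1))           # the representation grows from 2x2 to 3x3
```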

This work makes two important contributions. First, it provides an existence proof that even in a straightforward example of a classic system decomposition, significant complexity growth was observed. This finding is important because it runs counter to widely held expectations in the literature. Currently, the literature focuses on reordering and grouping the elements of a representation to identify decouplable modules as a way to achieve complexity reduction. It is widely assumed that decomposing at these loosely coupled boundaries will reduce complexity, and many studies have shown that complexity is generally reduced when measures are applied to the reordered or clustered DSM [57,117,118]. This practice is reflected in industry structure as well. For example, in the terrestrial robotic manipulator industry, complex manipulator systems are most typically sold as automated multi-degrees-of-freedom positioning systems that are integrated with relatively less complex, sometimes even passive, end effectors; the end effector–manipulator interface is standardized to enable efficient interchangeability among multiple end effectors [119,120]. This typical decomposition is reflected in our selection of D2M1 and D2M2 in D2. However, the aforementioned results show that the process of decoupling, even at what could be considered loosely coupled functional boundaries, introduces new information (and potentially quite a lot of it), which is not captured in a reordered, fixed-element DSM representation. These insights are consistent with the definition of elaboration in systems architecting, which suggests that the process of decomposition inherently involves the addition of detail to an existing concept definition prior to the exploration of solutions [21].

Second, this work represents a first empirical elaboration of the complex effects of decomposing, along with a discussion of the associated design mechanisms. Specifically, our discussion reveals three mechanisms through which decomposition adds information and correspondingly increases problem complexity. To begin with, in order to enable parallel work and achieve the corresponding schedule benefits of decomposition, explicit (new) interface artifacts are needed. Moreover, the process of allocating functional requirements to modules generally introduces new operational constraints to ensure integrability. Finally, the process of allocating system-level requirements to decomposed modules tends to introduce the need for additional internal requirements to coordinate functional roles. While it is not surprising that new internal interfaces and functional allocations were created, we observed that the secondary effects of decoupling the modules were substantial. The decomposition process, performed with the objective of creating loosely coupled modules, increased complexity by driving design selections and thus creating new, interrelated design information.

We believe that these findings apply to any physically integral system, since they stem from functional coupling and generic interactions. Given that it is the responsibility of the designer to ensure that decomposed modules satisfy the system-level needs and function as intended when integrated as a whole, we argue that such information addition is inevitable and characteristic of the decomposition process. Therefore, modularization and decomposition methods and algorithms need to account for these aspects of the decomposition process when identifying optimal modules.

While none of our observations is particularly surprising, and most would be intuitive to experienced designers, their implications for how novel decomposition support tools are created are critical. Currently, design decomposition research adopts a fixed representation of the system. This representation, whether a DSM, an incidence matrix, or a functional-mapping adjacency matrix, is manipulated with sophisticated algorithms to identify an optimal decomposition. However, as we have shown, the resultant system representation after decomposition might look drastically different due to the associated design decisions. Current decomposition algorithms evaluate indicators such as out-of-module dependencies, not the complexity growth induced by removing or introducing them. These are some of the many complex tradeoffs that experienced designers naturally intuit when architecting [43,114].
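For reference, the indicator such algorithms typically minimize can be computed directly from a fixed-element DSM and a module assignment, as in the minimal sketch below (the matrix and assignment are illustrative). What the indicator cannot register, by construction, is any element that the decoupling process itself will add.

```python
import numpy as np

# Minimal sketch of the indicator many clustering algorithms minimize: the
# count of out-of-module (off-block-diagonal) dependencies in a fixed-element
# DSM. By construction, it cannot register elements that decoupling will add.
# The matrix and module assignment are illustrative assumptions.

def out_of_module_deps(dsm, modules):
    """Count dependencies between elements assigned to different modules."""
    cross = np.not_equal.outer(modules, modules)  # True where modules differ
    return int(np.count_nonzero(dsm * cross))

dsm = np.array([[1, 1, 0, 0],
                [1, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 1, 1, 1]])
modules = np.array([0, 0, 1, 1])         # elements 0-1 in M1, elements 2-3 in M2
print(out_of_module_deps(dsm, modules))  # -> 2
```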

We do not intend to suggest that decomposition should be left entirely to experienced designers and heuristics. Nor do we wish to imply that current clustering/reordering-based approaches are useless. On the contrary, algorithmic approaches have demonstrated enormous value in enabling designers to explore much larger trade spaces and consider options they otherwise never would. Moreover, algorithms are efficient at filtering reasonable options. However, algorithms inherently seek to maximize an explicitly defined objective function; consequently, they can exaggerate unrealistic modeling assumptions, such as the omission of the associated design decisions that we have demonstrated are overlooked in DSM notation. We believe that the results presented here can lay a foundation for more rigorously exploring the process of decomposition and how alternative decompositions affect the resultant system.

Lastly, we began this work with a motivation to understand the mechanism of complexity reduction through decomposition. Instead, we found that the widely held notion that decomposition reduces complexity appears to be overstated. Chen and Li have previously noted that there are instances where this may not be the case [121]. Here, we present evidence that, when properly accounted for, complexity is likely to increase through decomposition in most cases.

Acknowledgment

The authors initially discussed these ideas at ASME IDETC 2021, in Paper No. 71917, titled “When Decomposition Increases Complexity: How Decomposing Introduces New Information into the Problem Space.” This study differs significantly from the IDETC paper through (i) the addition of the complexity quantification and (ii) the elaborated qualitative explanation of the complexity-generating mechanisms.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The data sets generated for and supporting the findings of this article are available from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper.

Funding Data

  • NSF (Grant No. CMMI-1535539; Funder ID: 10.13039/100000001).

Nomenclature

n = number of components
C = structural complexity measure
X = number of interfaces
Aij = quantity of the interfaces j between the components i in the architecture
Li = number of levels (sets of nodes having the same depth) in a system decomposition
Mij = number of set sizes of a given length (for each level of nodes)
Sn = set size for a set of nodes having the same depth
CC = coupling complexity measure
E(A) = energy, i.e., the sum of the absolute values of the eigenvalues
αi = sum of complexities of the individual components i in the architecture
βij = complexity of the interfaces j between the components i in the architecture
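The symbols n, Aij, E(A), αi, and βij above are consistent with a structural complexity measure of the form C = Σαi + (ΣΣ βij Aij) · E(A)/n (cf. Sinha and de Weck [59]). The following is a minimal sketch of that computation under this assumed form; the adjacency matrix and complexity values are illustrative placeholders.

```python
import numpy as np

# Minimal sketch of a structural complexity measure consistent with the
# nomenclature above (cf. Sinha and de Weck [59]):
#     C = sum_i alpha_i + (sum_ij beta_ij * A_ij) * E(A) / n
# where E(A) is the sum of the absolute values of the eigenvalues of the
# binary adjacency matrix A. All input values are illustrative placeholders.

def structural_complexity(alpha, beta, A):
    n = A.shape[0]
    energy = np.abs(np.linalg.eigvals(A)).sum()  # E(A), the matrix energy
    return alpha.sum() + (beta * A).sum() * energy / n

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])          # binary component adjacency (from a DSM)
alpha = np.ones(3)                 # component complexities, alpha_i
beta = np.full((3, 3), 0.5)        # interface complexities, beta_ij
print(structural_complexity(alpha, beta, A))
```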

References

1. Moses, J., 2004, “Foundational Issues in Engineering Systems: A Framing Paper,” Engineering Systems Monograph, p. 2.
2. Maddox, I., Collopy, P., and Farrington, P. A., 2013, “Value-Based Assessment of DoD Acquisitions Programs,” Procedia Comput. Sci., 16(1), pp. 1161–1169.
3. Locatelli, G., 2018, “Why Are Megaprojects, Including Nuclear Power Plants, Delivered Overbudget and Late? Reasons and Remedies,” arXiv preprint.
4. U.S. Government Accountability Office, 2017, “Columbia Class Submarine: Immature Technologies Present Risks to Achieving Cost Schedule and Performance Goals,” Report No. GAO-18-158, https://www.gao.gov/products/GAO-18-158, Accessed February 25, 2021.
5. U.S. Government Accountability Office, 2018, “F-35 Joint Strike Fighter: Development Is Nearly Complete, But Deficiencies Found in Testing Need to Be Resolved [Reissued with Revisions June 13, 2018],” Report No. GAO-18-321, https://www.gao.gov/products/GAO-18-321, Accessed February 25, 2021.
6. U.S. Government Accountability Office, 2018, “Navy Shipbuilding: Past Performance Provides Valuable Lessons for Future Investments,” Report No. GAO-18-238SP, https://www.gao.gov/products/GAO-18-238SP, Accessed February 25, 2021.
7. U.S. Government Accountability Office, 2019, “NASA: Assessments of Major Projects,” Report No. GAO-19-262SP, https://www.gao.gov/products/GAO-19-262SP, Accessed February 25, 2021.
8. U.S. Government Accountability Office, 2020, “James Webb Space Telescope: Technical Challenges Have Caused Schedule Strain and May Increase Costs,” Report No. GAO-20-224, https://www.gao.gov/products/GAO-20-224, Accessed February 25, 2021.
9. Lew, K. S., Dillon, T. S., and Forward, K. E., 1988, “Software Complexity and Its Impact on Software Reliability,” IEEE Trans. Software Eng., 14(11), pp. 1645–1655.
10. Rijpma, J. A., 1997, “Complexity, Tight-Coupling and Reliability: Connecting Normal Accidents Theory and High Reliability Theory,” J. Contingencies Crisis Manag., 5(1), pp. 15–23.
11. Eckert, C. M., Keller, R., Earl, C., and Clarkson, P. J., 2006, “Supporting Change Processes in Design: Complexity, Prediction and Reliability,” Reliab. Eng. Syst. Saf., 91(12), pp. 1521–1534.
12. Simon, H. A., 1962, “The Architecture of Complexity,” Proc. Am. Philos. Soc., 106(6), pp. 468–482.
13. Parnas, D. L., 1972, “On the Criteria to Be Used in Decomposing Systems Into Modules,” Pioneers and Their Contributions to Software Engineering, 15(12), pp. 1053–1058.
14. Orton, J. D., and Weick, K. E., 1990, “Loosely Coupled Systems: A Reconceptualization,” Acad. Manage. Rev., 15(2), pp. 203–223.
15. Williamson, O. E., 1991, “Comparative Economic Organization: The Analysis of Discrete Structural Alternatives,” Adm. Sci. Q., 36(2), pp. 269–296.
16. Eppinger, S. D., 1997, “A Planning Method for Integration of Large-Scale Engineering Systems,” International Conference on Engineering Design, Tampere, Finland, Aug. 19–21, pp. 199–204.
17. Baldwin, C. Y., and Clark, K. B., 2000, Design Rules: The Power of Modularity, MIT Press, Cambridge, MA.
18. Browning, T. R., 2001, “Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions,” IEEE Trans. Eng. Manage., 48(3), pp. 292–306.
19. Thompson, J. D., 2003, Organizations in Action: Social Science Bases of Administrative Theory, Transaction Publishers, New Brunswick, NJ.
20. Campagnolo, D., and Camuffo, A., 2010, “The Concept of Modularity in Management Studies: A Literature Review,” Int. J. Manag. Rev., 12(3), pp. 259–283.
21. Topcu, T. G., Triantis, K., Malak, R., and Collopy, P., 2020, “An Interdisciplinary Strategy to Advance Systems Engineering Theory: The Case of Abstraction and Elaboration,” Syst. Eng., 23(6), pp. 673–683.
22. Steward, D. V., 1981, “The Design Structure System: A Method for Managing the Design of Complex Systems,” IEEE Trans. Eng. Manage., EM-28(3), pp. 71–74.
23. Morelli, M. D., Eppinger, S. D., and Gulati, R. K., 1995, “Predicting Technical Communication in Product Development Organizations,” IEEE Trans. Eng. Manage., 42(3), pp. 215–222.
24. Ulrich, K. T., 2003, Product Design and Development, Tata McGraw-Hill Education, New York.
25. Alexander, C., 1964, Notes on the Synthesis of Form, Harvard University Press, Cambridge, MA.
26. Galbraith, J. R., 1974, “Organization Design: An Information Processing View,” Interfaces, 4(3), pp. 28–36.
27. Tushman, M. L., 1977, “Special Boundary Roles in the Innovation Process,” Adm. Sci. Q., 22(4), pp. 587–605.
28. Tushman, M. L., and Nadler, D. A., 1978, “Information Processing as an Integrating Concept in Organizational Design,” Acad. Manage. Rev., 3(3), pp. 613–624.
29. Reif, F., 1981, “Teaching Problem Solving—A Scientific Approach,” Phys. Teacher, 19(5), pp. 310–316.
30. Sanchez, R., and Mahoney, J. T., 1996, “Modularity, Flexibility, and Knowledge Management in Product and Organization Design,” Strateg. Manag. J., 17(S2), pp. 63–76.
31. Fixson, S. K., and Park, J.-K., 2008, “The Power of Integrality: Linkages Between Product Architecture, Innovation, and Industry Structure,” Res. Policy, 37(8), pp. 1296–1316.
32. Pine, B. J., 1993, “Making Mass Customization Happen: Strategies for the New Competitive Realities,” Plan. Rev., 21(5), pp. 23–24.
33. Boas, R., and Crawley, E., 2011, “The Elusive Benefits of Common Parts,” Harvard Business Review, https://hbr.org/2011/10/the-elusive-benefits-of-common-parts, Accessed February 5, 2021.
34. Fogliatto, F. S., Da Silveira, G. J., and Borenstein, D., 2012, “The Mass Customization Decade: An Updated Review of the Literature,” Int. J. Prod. Econ., 138(1), pp. 14–25.
35. Boas, R., Cameron, B. G., and Crawley, E. F., 2013, “Divergence and Lifecycle Offsets in Product Families With Commonality,” Syst. Eng., 16(2), pp. 175–192.
36. Colombo, E. F., Shougarian, N., Sinha, K., Cascini, G., and de Weck, O. L., 2019, “Value Analysis for Customizable Modular Product Platforms: Theory and Case Study,” Res. Eng. Des., 31(1), pp. 123–140.
37. Ulrich, K., 1995, “The Role of Product Architecture in the Manufacturing Firm,” Res. Policy, 24(3), pp. 419–440.
38. Ethiraj, S. K., and Levinthal, D., 2004, “Modularity and Innovation in Complex Systems,” Manage. Sci., 50(2), pp. 159–173.
39. Hölttä-Otto, K., and de Weck, O., 2007, “Degree of Modularity in Engineering Systems and Products With Technical and Business Constraints,” Concurr. Eng., 15(2), pp. 113–126.
40. Brusoni, S., and Prencipe, A., 2001, “Unpacking the Black Box of Modularity: Technologies, Products and Organizations,” Ind. Corp. Change, 10(1), pp. 179–205.
41. Brusoni, S., and Prencipe, A., 2006, “Making Design Rules: A Multidomain Perspective,” Organ. Sci., 17(2), pp. 179–189.
42. Holmqvist, T. K. P., and Persson, M. L., 2003, “Analysis and Improvement of Product Modularization Methods: Their Ability to Deal With Complex Products,” Syst. Eng., 6(3), pp. 195–209.
43. Maier, M. W., and Rechtin, E., 2009, The Art of Systems Architecting, CRC Press, Boca Raton, FL.
44. Suh, N. P., 1998, “Axiomatic Design Theory for Systems,” Res. Eng. Des., 10(4), pp. 189–209.
45. Ericsson, A., and Erixon, G., 1999, Controlling Design Variants: Modular Product Platforms, Society of Manufacturing Engineers, Dearborn, MI.
46. Jiao, J., and Zhang, Y., 2005, “Product Portfolio Identification Based on Association Rule Mining,” Comput.-Aided Des., 37(2), pp. 149–172.
47. Stone, R. B., Wood, K. L., and Crawford, R. H., 2000, “A Heuristic Method for Identifying Modules for Product Architectures,” Des. Stud., 21(1), pp. 5–31.
48. Krause, D., Beckmann, G., Eilmus, S., Gebhardt, N., Jonas, H., and Rettberg, R., 2014, “Integrated Development of Modular Product Families: A Methods Toolkit,” Advances in Product Family and Product Platform Design, Springer, New York, NY, pp. 245–269.
49. Eppinger, S. D., and Browning, T. R., 2012, Design Structure Matrix Methods and Applications, MIT Press, Cambridge, MA.
50. Otto, K., Hölttä-Otto, K., Simpson, T. W., Krause, D., Ripperda, S., and Ki Moon, S., 2016, “Global Views on Modular Design Research: Linking Alternative Methods to Support Modular Product Family Concept Development,” ASME J. Mech. Des., 138(7), p. 071101.
51. Bruun, H. P. L., Mortensen, N. H., and Harlou, U., 2014, “Interface Diagram: Design Tool for Supporting the Development of Modularity in Complex Product Systems,” Concurr. Eng., 22(1), pp. 62–76.
52. Suh, E. S., Chiriac, N., and Hölttä-Otto, K., 2015, “Seeing Complex System Through Different Lenses: Impact of Decomposition Perspective on System Architecture Analysis,” Syst. Eng., 18(3), pp. 229–240.
53. Fixson, S. K., 2005, “Product Architecture Assessment: A Tool to Link Product, Process, and Supply Chain Design Decisions,” J. Oper. Manage., 23(3), pp. 345–369.
54. Fricke, E., and Schulz, A. P., 2005, “Design for Changeability (DfC): Principles to Enable Changes in Systems Throughout Their Entire Lifecycle,” Syst. Eng., 8(4).
55. Eckert, C., Clarkson, P. J., and Zanker, W., 2004, “Change and Customisation in Complex Engineering Domains,” Res. Eng. Des., 15(1), pp. 1–21.
56. Helmer, R., Yassine, A., and Meier, C., 2010, “Systematic Module and Interface Definition Using Component Design Structure Matrix,” J. Eng. Des., 21(6), pp. 647–675.
57. Borjesson, F., and Hölttä-Otto, K., 2014, “A Module Generation Algorithm for Product Architecture Based on Component Interactions and Strategic Drivers,” Res. Eng. Des., 25(1), pp. 31–51.
58. Szajnfarber, Z., Zhang, L., Mukherjee, S., Crusan, J., Hennig, A., and Vrolijk, A., 2020, “Who Is in the Crowd? Characterizing the Capabilities of Prize Competition Competitors,” IEEE Trans. Eng. Manage., pp. 1–15.
59. Sinha, K., and de Weck, O. L., 2016, “Empirical Validation of Structural Complexity Metric and Complexity Management for Engineering Systems,” Syst. Eng., 19(3), pp. 193–206.
60. Summers, J. D., and Shah, J. J., 2010, “Mechanical Engineering Design Complexity Metrics: Size, Coupling, and Solvability,” ASME J. Mech. Des., 132(2), p. 021004.
61. Miller, G. A., 1956, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychol. Rev., 63(2), pp. 81–97.
62. Nadler, D., Tushman, M., Tushman, M. L., and Nadler, M. B., 1997, Competing by Design: The Power of Organizational Architecture, Oxford University Press, Cambridge, MA.
63. Doyle, J. C., and Csete, M., 2011, “Architecture, Constraints, and Behavior,” Proc. Natl. Acad. Sci., 108(Suppl. 3), pp. 15624–15630.
64. Jones, B. F., 2009, “The Burden of Knowledge and the ‘Death of the Renaissance Man’: Is Innovation Getting Harder?,” Rev. Econ. Stud., 76(1), pp. 283–317.
65. Brusoni, S., 2005, “The Limits to Specialization: Problem Solving and Coordination in ‘Modular Networks’,” Organ. Stud., 26(12), pp. 1885–1907.
66. Ross, A. M., Rhodes, D. H., and Hastings, D. E., 2008, “Defining Changeability: Reconciling Flexibility, Adaptability, Scalability, Modifiability, and Robustness for Maintaining System Lifecycle Value,” Syst. Eng., 11(3), pp. 246–262.
67. MacCormack, A., Baldwin, C., and Rusnak, J., 2012, “Exploring the Duality Between Product and Organizational Architectures: A Test of the ‘Mirroring’ Hypothesis,” Res. Policy, 41(8), pp. 1309–1324.
68. Colfer, L. J., and Baldwin, C. Y., 2016, “The Mirroring Hypothesis: Theory, Evidence, and Exceptions,” Ind. Corp. Change, 25(5), pp. 709–738.
69. Von Hippel, E., 1990, “Task Partitioning: An Innovation Process Variable,” Res. Policy, 19(5), pp. 407–418.
70. Henderson, R. M., and Clark, K. B., 1990, “Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms,” Adm. Sci. Q., 35(1), pp. 9–30.
71. Chesbrough, H. W., and Teece, D. J., 1998, “When Is Virtual Virtuous? Organizing for Innovation,” The Strategic Management of Intellectual Capital, 27, Butterworth-Heinemann, Boston, MA.
72. Cataldo, M., Herbsleb, J. D., and Carley, K. M., 2008, “Socio-Technical Congruence: A Framework for Assessing the Impact of Technical and Work Dependencies on Software Development Productivity,” Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Kaiserslautern, Germany, Oct. 9–10, pp. 2–11.
73. Eck, D. V., McAdams, D. A., and Vermaas, P. E., 2007, “Functional Decomposition in Engineering: A Survey,” Proceedings of the ASME IDETC/CIE, Las Vegas, NV, Sept. 4–7.
74. Pahl, G., and Beitz, W., 2013, Engineering Design: A Systematic Approach, Springer Science & Business Media, London.
75. Kusiak, A., and Wang, J., 1993, “Decomposition of the Design Process,” ASME J. Mech. Des., 115(4), pp. 687–695.
76. Sobieszczanski-Sobieski, J., and Haftka, R. T., 1997, “Multidisciplinary Aerospace Design Optimization: Survey of Recent Developments,” Struct. Optim., 14(1), pp. 1–23.
77. Tribes, C., Dubé, J.-F., and Trépanier, J.-Y., 2005, “Decomposition of Multidisciplinary Optimization Problems: Formulations and Application to a Simplified Wing Design,” Eng. Optim., 37(8), pp. 775–796.
78. Martins, J. R., and Lambe, A. B., 2013, “Multidisciplinary Design Optimization: A Survey of Architectures,” AIAA J., 51(9), pp. 2049–2075.
79. Suh, E. S., de Weck, O. L., and Chang, D., 2007, “Flexible Product Platforms: Framework and Case Study,” Res. Eng. Des., 18(2), pp. 67–89.
80. Suh, E. S., Furst, M. R., Mihalyov, K. J., and de Weck, O., 2010, “Technology Infusion for Complex Systems: A Framework and Case Study,” Syst. Eng., 13(2), pp. 186–203.
81. O’Neill, M. G., and Weigel, A. L., 2011, “Assessing Fractionated Spacecraft Value Propositions for Earth Imaging Space Missions,” J. Spacecr. Rockets, 48(6), pp. 974–986.
82. Browning, T. R., 2016, “Design Structure Matrix Extensions and Innovations: A Survey and New Opportunities,” IEEE Trans. Eng. Manage., 63(1), pp. 27–52.
83. Tilstra, A. H., Seepersad, C. C., and Wood, K. L., 2012, “A High-Definition Design Structure Matrix (HDDSM) for the Quantitative Assessment of Product Architecture,” J. Eng. Des., 23(10–11), pp. 767–789.
84. Pimmler, T. U., and Eppinger, S. D., 1995, “The International Center for Research on the Management of Technology,” ASME Design Theory and Methodology Conference, Minneapolis, MN, Sept. 11–14, p. 11.
85. Sharman, D. M., and Yassine, A. A., 2004, “Characterizing Complex Product Architectures,” Syst. Eng., 7(1), pp. 35–60.
86. Eppinger, S. D., Whitney, D. E., Smith, R. P., and Gebala, D. A., 1994, “A Model-Based Method for Organizing Tasks in Product Development,” Res. Eng. Des., 6(1), pp. 1–13.
87. Yu, T.-L., Goldberg, D. E., Sastry, K., Lima, C. F., and Pelikan, M., 2009, “Dependency Structure Matrix, Genetic Algorithms, and Effective Recombination,” Evol. Comput., 17(4), pp. 595–626.
88. Sarkar, S., Dong, A., Henderson, J. A., and Robinson, P. A., 2014, “Spectral Characterization of Hierarchical Modularity in Product Architectures,” ASME J. Mech. Des., 136(1), p. 011006.
89. Suh, N., 2005, “Complexity in Engineering,” CIRP Ann., 54(2), pp. 46–63.
90. Lindemann, U., 2009, Structural Complexity Management: An Approach for the Field of Product Design, Springer, Berlin.
91. Crawley, E., de Weck, O., Magee, C., Moses, J., Seering, W., Schindall, J., Wallace, D., and Whitney, D., 2004, The Influence of Architecture in Engineering Systems, Massachusetts Institute of Technology, Cambridge, MA.
92. Sheard, S. A., and Mostashari, A., 2010, “7.3.1 A Complexity Typology for Systems Engineering,” INCOSE Int. Symp., 20(1), pp. 933–945.
93. Moses, J., 2004, Foundational Issues in Engineering Systems: A Framing Paper, Massachusetts Institute of Technology, Cambridge, MA.
94. Pahl, G., Wallace, K., and Blessing, L., 2007, Engineering Design: A Systematic Approach, 3rd ed., Springer, London.
95. Keeney, R. L., and Raiffa, H., 1976, Decision Analysis With Multiple Conflicting Objectives, Wiley & Sons, New York.
96. Hazelrigg, G., 1998, “A Framework for Decision-Based Engineering Design,” ASME J. Mech. Des., 120(4), pp. 653–658.
97. Collopy, P. D., and Hollingsworth, P. M., 2011, “Value-Driven Design,” J. Aircr., 48(3), pp. 749–759.
98. Von Bertalanffy, L., 1968, General System Theory, New York, p. 40.
99. Salado, A., and Nilchiani, R., 2014, “The Concept of Problem Complexity,” Procedia Comput. Sci., 28(1), pp. 539–546.
100. Halstead, M. H., 1977, Elements of Software Science, Elsevier, New York.
101. McCabe, T. J., 1976, “A Complexity Measure,” IEEE Trans. Software Eng., SE-2(4), pp. 308–320.
102. Ameri, F., Summers, J. D., Mocko, G. M., and Porter, M., 2008, “Engineering Design Complexity: An Investigation of Methods and Measures,” Res. Eng. Des., 19(2), pp. 161–179.
103. Tamaskar, S., Neema, K., and DeLaurentis, D., 2014, “Framework for Measuring Complexity of Aerospace Systems,” Res. Eng. Des., 25(2), pp. 125–137.
104. Moses, J., 2004, “Foundational Issues in Engineering Systems: A Framing Paper,” Engineering Systems Monograph.
105. Broniatowski, D. A., and Moses, J., 2016, “Measuring Flexibility, Descriptive Complexity, and Rework Potential in Generic System Architectures,” Syst. Eng., 19(3), pp. 207–221.
106. Braha, D., and Maimon, O., 1998, “The Measurement of a Design Structural and Functional Complexity,” IEEE Trans. Syst. Man Cybern., Part A: Syst. Humans, 28(4), pp. 241–277.
107. Hennig, A., Topcu, T. G., and Szajnfarber, Z., 2021, “Complexity Should Not Be in the Eye of the Beholder: How Representative Complexity Measures Respond to the Commonly-Held Beliefs of the Literature,” ASME IDETC/CIE, Virtual, Online, Aug. 2021.
108. Min, G., Suh, E. S., and Hölttä-Otto, K., 2016, “System Architecture, Level of Decomposition, and Structural Complexity: Analysis and Observations,” ASME J. Mech. Des., 138(2), p. 021102.
109. Sinha, K., and Suh, E. S., 2018, “Pareto-Optimization of Complex System Architecture for Structural Complexity and Modularity,” Res. Eng. Des., 29(1), pp. 123–141.
110. Yin, R. K., 2003, Case Study Research: Design and Methods, SAGE, Thousand Oaks, CA.
111. Eisenhardt, K. M., 1989, “Building Theories From Case Study Research,” Acad. Manage. Rev., 14(4), pp. 532–550.
112. Szajnfarber, Z., and Gralla, E., 2017, “Qualitative Methods for Engineering Systems: Why We Need Them and How to Use Them,” Syst. Eng., 20(6), pp. 497–511.
113. Miles, M. B., and Huberman, A. M., 1994, Qualitative Data Analysis: An Expanded Sourcebook, SAGE.
114. Crawley, E., Cameron, B., and Selva, D., 2015, System Architecture: Strategy and Product Development for Complex Systems, Prentice Hall Press, Essex, UK.
115. Kossiakoff, A., and Sweet, W. N., 2003, Systems Engineering: Principles and Practices, Wiley, Hoboken, NJ.
116. English, K., Bloebaum, C. L., and Miller, E., 2001, “Development of Multiple Cycle Coupling Suspension in the Optimization of Complex Systems,” Struct. Multidiscipl. Optim., 22(4), pp. 268–283.
117. Chen, S.-J. G., and Lin, L., 2003, “Decomposition of Interdependent Task Group for Concurrent Engineering,” Comput. Ind. Eng., 44(3), pp. 435–459.
118. Ko, Y.-T., 2013, “Optimizing Product Architecture for Complex Design,” Concurr. Eng., 21(2), pp. 87–102.
119. Dudek, G., Jenkin, M. R., Milios, E., and Wilkes, D., 1996, “A Taxonomy for Multi-Agent Robotics,” Auton. Robots, 3(4), pp. 375–397.
120. Gilpin, K., and Rus, D., 2010, “Modular Robot Systems,” IEEE Robot. Autom. Mag., 17(3), pp. 38–55.
121. Chen, L., and Li, S., 2004, “Analysis of Decomposability and Complexity for Design Problems in the Context of Decomposition,” ASME J. Mech. Des., 127(4), pp. 545–557.