Server Rack

Cloud Infrastructure Servers

Caesar Wu, Rajkumar Buyya, in Cloud Data Centers and Cost Modeling, 2015

11.4.3 Rack vs. Blade Server

Is a rack server better than a blade server, or vice versa? It really depends on your business requirements. If you would like to utilize existing data center infrastructure, the blade server is subject to the available power conditions, because blade servers normally require more power than rack servers. On average, blade server power density is between 10 and 15 kW per rack; some ultra-high-density blade server racks run as high as 50 kW per rack. In comparison, the average power density for a rack server is only between 3 kW and 5 kW per rack. In other words, the average power density of a blade server is about three to five times higher than that of a rack server, and in some extreme cases it can be 10 times as high.
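
As a rough illustration of what these densities mean for provisioning, the sketch below estimates how many racks a fleet needs and how much power must be provisioned under blade versus rack-server densities. It is a minimal sketch: the servers-per-rack figures and the 200-server fleet size are assumed values for the example; only the kW-per-rack densities come from the text.

```python
# Rough rack-count and provisioned-power comparison using the per-rack power
# densities quoted in the text (10-15 kW per blade rack, 3-5 kW per rack-server
# rack). The servers-per-rack figures and fleet size are illustrative assumptions.

def racks_and_power(servers, servers_per_rack, kw_per_rack):
    racks = -(-servers // servers_per_rack)   # ceiling division
    return racks, racks * kw_per_rack

FLEET = 200  # assumed fleet size

blade_racks, blade_kw = racks_and_power(FLEET, servers_per_rack=64, kw_per_rack=12.5)
rack_racks, rack_kw = racks_and_power(FLEET, servers_per_rack=30, kw_per_rack=4.0)

print(f"Blade option: {blade_racks} racks, ~{blade_kw:.0f} kW to provision")
print(f"Rack option:  {rack_racks} racks, ~{rack_kw:.0f} kW to provision")
```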

The biggest issue with a blade server is the lack of granularity in capacity expansion. A blade chassis typically carries from 8 to 16 blades, so such a hardware purchase is hard to justify unless you require a very large server fleet: in effect, you commit to a full chassis or nothing. Of course, you can purchase a blade chassis with only a few blades first and add more blades later. If you make this decision, you have to control the time between the two purchases; normally it should be less than three years. If the gap is too long, you will find not only that the model is out of date but also that the vendor no longer supports the blade servers you have already purchased. You may be able to find a third-party vendor, but the cost would be very high because spare parts would be quite difficult to source. In these circumstances, the best solution may be to buy the latest blade servers and decommission the existing fleet, which leads to a very low utilization rate for the blade chassis, and a blade chassis is normally quite expensive.

To make a clear comparison, we can review blade and rack servers from 10 different perspectives (see Table 11.8).

Table 11.8. Pros and Cons of Blade and Rack Servers

Aspect              Blade Server                           Rack Server
Space               Pro: less space                        Con: more space
Power density       Con: high power density                Pro: lower power density
Cabling             Pro: less cabling                      Con: more cabling
Scale up            Con: difficult to scale up             Pro: easier to scale up
Scale out           Pro: easier to scale out               Con: difficult to scale out
Capex               Con: high initial capex                Pro: less initial capex
Speed to market     Pro: quick deployment                  Con: slow deployment
Power savings       Pro: power efficiency per unit         Con: no power savings
Redundancy          Con: no redundancy at chassis level    Pro: easier to make redundant
Remote management   Pro: easier to manage                  Con: difficult to manage

In summary, if you have a limited budget and a relatively small server fleet, the rack server solution is better than the blade server. Even if you have a fleet of between 50 and 150 servers, the rack server is still a relatively good solution when the market outlook is very uncertain or sluggish, because today a 1U "pizza box" server is far more powerful than ever before. With a virtualized infrastructure, you can scale out quickly even with rack servers.

However, if your business goal is to get a foothold in a massive market, the blade server would be the better solution, subject of course to the power supply available in your data center. Overall, whether rack or blade, most x86 servers are built by ODMs or OEMs in China and have become commodity products, so it is not worth paying a high price for a fancy blade product. When making a purchasing decision, we should always consider the whole life cycle and the TCO/ROI perspective rather than the purchase price alone.
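
To illustrate what a "life cycle and TCO/ROI perspective" can look like in practice, here is a minimal sketch comparing purchase price with a five-year total cost of ownership that also counts power and maintenance. Every dollar figure, power draw, and the five-year lifetime are assumptions made up for the illustration, not data from the chapter.

```python
# Minimal TCO sketch: purchase price alone vs. a five-year life-cycle cost that
# includes power and maintenance. Every figure below is an assumed, illustrative value.

HOURS_PER_YEAR = 8760

def tco(purchase, avg_kw, price_per_kwh, annual_maint, years=5):
    energy = avg_kw * HOURS_PER_YEAR * years * price_per_kwh
    return purchase + energy + annual_maint * years

# A fully populated 16-blade chassis vs. 16 stand-alone rack servers.
blade = tco(purchase=16 * 6000 + 8000, avg_kw=12.5,      price_per_kwh=0.10, annual_maint=4000)
rack  = tco(purchase=16 * 4000,        avg_kw=16 * 0.35, price_per_kwh=0.10, annual_maint=3000)

print(f"Blade 5-year TCO: ${blade:,.0f}  (purchase alone: ${16 * 6000 + 8000:,})")
print(f"Rack 5-year TCO:  ${rack:,.0f}  (purchase alone: ${16 * 4000:,})")
```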

URL: https://www.sciencedirect.com/science/article/pii/B9780128014134000118

Routing and Traffic Engineering in Data Center Networks

Deep Medhi, Karthik Ramasamy, in Network Routing (Second Edition), 2018

12.2 Data Center Network: A Simple Illustration

In this section, we discuss a data center network that uses a simple tree topology, as shown in Figure 12.2. At the bottom are the server racks to which the physical hosts are connected. Each physical host can be configured to provide multiple virtual machines, or virtual servers. Hosts are connected to Top-of-Rack (ToR) switches, which in a physical setup essentially sit in the middle of the server rack to reduce cabling. The ToR switches in a row are then connected to an End-of-Row (EoR) or edge switch. Multiple rows of servers, each row with its own edge switch, are connected to an aggregation switch for traffic aggregation. Finally, the aggregation switches are connected to one or more core switches.

Figure 12.2. A tree-based DCN topology.

The core switch has an outgoing link to the Internet. There can certainly be more than one core switch, and multiple parallel links can be installed for connectivity to the Internet. Two types of traffic can be envisioned in a data center network: east–west traffic and north–south traffic.
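
To make the tree structure concrete, the following sketch encodes the layers described above (hosts under ToR switches, ToR switches under an EoR/edge switch per row, edge switches under aggregation switches, aggregation switches under the core) as nested Python dictionaries and counts the devices per layer. The fan-out values are illustrative assumptions, not taken from Figure 12.2.

```python
# Minimal tree-based DCN topology sketch: core -> aggregation -> edge (EoR)
# -> ToR -> hosts. All fan-out values are illustrative assumptions.

def build_tree(n_agg=2, rows_per_agg=2, tors_per_row=4, hosts_per_tor=20):
    topo = {"core": {}}
    for a in range(n_agg):
        agg = topo["core"][f"agg{a}"] = {}
        for r in range(rows_per_agg):
            edge = agg[f"edge{a}.{r}"] = {}
            for t in range(tors_per_row):
                tor = edge[f"tor{a}.{r}.{t}"] = []
                tor.extend(f"host{a}.{r}.{t}.{h}" for h in range(hosts_per_tor))
    return topo

topo = build_tree()
tors = [tor for agg in topo["core"].values()
            for edge in agg.values()
            for tor in edge.values()]
print("ToR switches:", len(tors))
print("Hosts:", sum(len(hosts) for hosts in tors))
```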

East–west traffic refers to traffic between server racks that results from internal applications requiring data transfers, such as MapReduce computation for indexing [217,218] or storage data movements between servers. If a data center is geared toward a lot of MapReduce computation, then it may have mostly east–west traffic. North–south traffic refers to traffic resulting from external requests that come from the Internet to the data center and to which the servers at the data center respond. For example, when end users request webpages that are hosted at data center servers, this traffic falls into the north–south category. Depending on the specific business purpose of a data center, the proportion of east–west traffic relative to north–south traffic can vary widely. A simple way to think of this is that if a data center mostly serves as a web hosting or email hosting site, then it is likely to have mostly north–south traffic.

Note that a web request does not mean the content of the webpage is stored on the same web host that serves the request. Often, such content is stored internally on storage nodes and may also require database processing. This means that a north–south request may automatically cause some east–west traffic internally. Also, note that a simple topology like the tree topology has many points of failure. For example, if a link between an aggregation switch and an edge switch goes down, then the hosts behind that edge switch lose connectivity. Thus, one possibility is that virtual machines are periodically migrated from one host to another host in a different row, or a backup is copied to another virtual machine. This means that even though such a data center primarily acts as a web hosting site, with mostly north–south traffic, there will still be internal east–west traffic resulting from virtual machine migration or data migration, for example, to provide reliability. Thus, east–west traffic (for a variety of such functions) can be as significant as north–south traffic in a data center.
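
A simple way to express the east–west versus north–south distinction programmatically is to check whether both endpoints of a flow sit inside the data center. The sketch below does just that; the internal prefix 10.0.0.0/8 is an assumption made purely for illustration.

```python
import ipaddress

# Classify a flow as east-west (both endpoints inside the data center)
# or north-south (one endpoint on the Internet). The internal prefix
# 10.0.0.0/8 is an illustrative assumption.
DC_PREFIX = ipaddress.ip_network("10.0.0.0/8")

def classify(src, dst):
    inside = [ipaddress.ip_address(ip) in DC_PREFIX for ip in (src, dst)]
    return "east-west" if all(inside) else "north-south"

print(classify("10.1.2.3", "10.4.5.6"))     # storage or MapReduce transfer
print(classify("203.0.113.9", "10.1.2.3"))  # external web request
```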

You might wonder why we need to construct a topology such as a tree-based topology, or a more complicated one. Could we not just use a single LAN that connects all the server racks? This partly has to do with addressing and scalability. Consider, for instance, that all servers are connected on a single LAN. Then every data transfer between two machines, for example due to computational work such as MapReduce, would result in broadcast traffic. This could be overwhelming and inefficient if there are thousands of virtual machines and they all exchange data at the same instant. Secondly, if we put them all in a single LAN, then we need to define a very large IP address block, which may not scale as the number of VMs grows. Ultimately, all of these issues come down to trade-offs. Depending on the size of the data center, different factors may become more dominant; it is important to understand them well as the data center grows. It is critically important to remember the service level agreements (SLAs) for any customer applications. If any configuration misses the SLAs, it is perhaps time to rethink or redesign the data center in terms of topology, reliability, and routing.
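
The addressing concern can be made concrete with Python's standard ipaddress module: the sketch below computes the smallest IPv4 prefix that can hold a given number of VMs on one flat LAN, showing how large a single subnet (and hence a single broadcast domain) must become as the VM count grows. The VM counts are illustrative assumptions.

```python
import math
import ipaddress

# Smallest IPv4 prefix that can hold n_vms usable addresses on one flat LAN
# (reserving the network and broadcast addresses). VM counts are illustrative.
def prefix_for(n_vms):
    host_bits = math.ceil(math.log2(n_vms + 2))
    return 32 - host_bits

for n_vms in (200, 5_000, 100_000):
    plen = prefix_for(n_vms)
    net = ipaddress.ip_network(f"10.0.0.0/{plen}")
    print(f"{n_vms:>7} VMs -> /{plen} ({net.num_addresses} addresses in one broadcast domain)")
```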

URL: https://www.sciencedirect.com/science/article/pii/B9780128007372000144

Cloud, SDN, and NFV

Walter Goralski, in The Illustrated Network (Second Edition), 2017

Infrastructure as a Service (IaaS)

IaaS is the most basic of cloud services: its providers offer physical computers or, more often today, VMs and other resources in an enormous data center consisting of thousands of server racks (there are some good videos of modern data center tours online). In a VM environment, a specialized operating system known as the hypervisor (Xen is a good example) runs the VMs as "guests" on the hypervisor and basic hardware platform. Working together, the hypervisors can run large numbers of VMs and scale service capacity up and down according to the customer's needs. This is much like a business forgoing the purchase of a new truck and renting one as needed when deliveries are very busy.

This might be a good place to point out that a VM is an entire "reproduction" of an operating system running on the bare metal server. One VM could be Windows, another Ubuntu Linux, and so on. The hypervisor doesn't care. Each VM runs the application that accesses the data stored on other specialized, storage servers. The issue here is that the whole VM often contains parts of the guest OS that are never needed by the application, which is often a network function or a database query. Also, it takes time to spin up all those OS pieces to start with, limiting the ability of VMs to come and go as workloads vary, which is one of the main attractions of virtualized services in the first place.

So there is also the concept of a "container" that packages only the bare minimum of the guest OS and runs that in a hypervisor environment (Docker is a good example). As a result, containers can come and go quickly, although many applications have to be "containerized" because they expect a full operating system beneath them, which in a Docker environment they do not have.

Clouds offering IaaS often also have resources available for customers such as a disk image library for the VMs, storage nodes for the constantly accumulating data and intermediate data mining results, firewalls to ensure security, load balancers, dynamic IP address pools (nice when VMs needing IP addresses come and go), VLANs or VXLANs, and bundled software packages. Some of these ideas, such as firewalls and load balancers, we will meet again when we consider NFV. IaaS supplies these resources to meet constantly changing customer demands from large pools of bare metal servers and other machines in very large data centers. "Very large" is no exaggeration: some RFCs are intended for data centers with "hundreds of thousands of servers," as we saw in Chapter 17.

It is important to emphasize the control that users have over all aspects of their "infrastructure." They choose the operating systems, the amount of memory, and the number of cores. They have multiple software images available to install and run, in addition to the applications (as mentioned, slow software evolution often ties an application to a certain OS software release). Customers are billed on a utility basis for resources they use, resources that can be used by others at other times—like comparing a truck with a monthly payment that sits idle all day with a rental that is paid for only on days it is needed.
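
The truck analogy can be turned into a small break-even calculation. The sketch below compares a fixed monthly cost for owned hardware against hourly utility billing at different utilization levels; all prices and utilization figures are assumptions chosen only to illustrate the trade-off.

```python
# Owned hardware (fixed monthly cost) vs. utility billing (pay per hour used).
# All figures are illustrative assumptions.

HOURS_PER_MONTH = 730

def monthly_cost_owned(fixed_monthly):
    return fixed_monthly                            # paid whether busy or idle

def monthly_cost_on_demand(hourly_rate, utilization):
    return hourly_rate * HOURS_PER_MONTH * utilization

owned = monthly_cost_owned(fixed_monthly=400.0)
for util in (0.10, 0.50, 0.90):
    rented = monthly_cost_on_demand(hourly_rate=0.80, utilization=util)
    cheaper = "on-demand" if rented < owned else "owned"
    print(f"utilization {util:4.0%}: owned ${owned:.0f} vs on-demand ${rented:.0f} -> {cheaper}")
```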

Resource sharing like this raises all kinds of issues regarding privacy, authenticity, and security. These issues are important enough to deserve their own section later in this chapter.

URL: https://www.sciencedirect.com/science/article/pii/B9780128110270000291

Case Studies

Stephen R. Smoot, Nam K. Tan, in Private Cloud Computing, 2012

Compute submodule high-level design

There are various ways to design the compute submodule. In this design study, we have narrowed it down to two, using either UCS or generic rack servers. The blade or rack server is where the virtual access layer "lives." Figure 9.19 shows the high-level view of the compute submodule design with UCS. Figure 9.20 depicts the compute submodule design using generic rack servers.

Figure 9.19. High-level compute submodule design 1

Note:

The UCS Manager (UCSM) runs on the UCS 6100 Series fabric interconnects and manages and configures the entire UCS system. The UCSM uses an active/standby architecture, with an active instance called the primary and a standby instance called the subordinate. All communication is handled by the primary instance, which maintains the main configuration database; the database is stored on the primary instance and replicated to the subordinate. The primary instance sends updates to the subordinate when configuration changes occur. The UCSM instances communicate over the dual cluster links between the two fabric interconnects.
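
The note describes a classic primary/subordinate pattern: the active instance owns the configuration database and pushes every change to the standby replica. The sketch below illustrates that general pattern in plain Python; it is a conceptual sketch of active/standby replication, not UCSM code or its API.

```python
# Conceptual active/standby configuration replication, in the spirit of the
# primary/subordinate pattern described above. Not UCSM code.

class ConfigInstance:
    def __init__(self, name):
        self.name = name
        self.db = {}                      # configuration database

class Primary(ConfigInstance):
    def __init__(self, name, subordinate):
        super().__init__(name)
        self.subordinate = subordinate

    def apply_change(self, key, value):
        self.db[key] = value              # commit locally first
        self.subordinate.db[key] = value  # then replicate to the standby

sub = ConfigInstance("subordinate")
pri = Primary("primary", sub)
pri.apply_change("vlan.100", {"name": "app-tier"})
assert pri.db == sub.db                   # replica stays in sync
print(sub.db)
```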

The high-level compute submodule design 1 illustrated in Figure 9.19 includes the following components:

Two UCS 6140XP fabric interconnects, AC-SW1 and AC-SW2 (for more details, see the Fabric Interconnect section in Chapter 8):

AC-SW1 has two pairs of 10GE uplinks: one pair to AGG-1 and the other to AGG-2. The same applies for AC-SW2.

AC-SW1 has four 10GE downlinks (fabric links) to FEX-1 (behind the UCS chassis). The same applies for AC-SW2 to FEX-2 (behind the UCS chassis). As only two blade servers are used in the design, two fabric links will suffice. However, all four fabric links are wired up (wire once) anyway for future expansion. For more details on UCS FEXs, see the Fabric Extender section in Chapter 8.

The abovementioned uplinks and downlinks for both fabric interconnects (AC-SW1 and AC-SW2) are 10GE/FCoE/SFP+ ports.

AC-SW1 has two native FC links to MDS-1 (SAN A). The same applies for AC-SW2 to MDS-2 (SAN B).

Two UCS 2104XP FEXs: FEX-1 and FEX-2 (both located at the back of the UCS chassis).

Two UCS B250 blade servers (for more details, see the Blade Servers section in Chapter 8): one for ESX1 and the other for ESX2.

Two CNAs, CNA-1 and CNA-2, per blade server (the UCS B250 is a full-slot blade server that supports up to two I/O adapters):

10GE/FCoE connection is implemented on CNA-1 and CNA-2 between the ESX hosts (ESX1 and ESX2) and the fabric interconnects (AC-SW1 and AC-SW2). For more details on the FCoE configurations, see the Access Layer to Ethernet LAN section.

For each ESX host, CNA-1 is connected through FEX-1 to AC-SW1 and CNA-2 is connected through FEX-2 to AC-SW2.

The CNAs are used by the ESX hosts to access both the DC Ethernet LAN and the FC SANs (via FCoE).

One UCS 5108 blade server chassis where the server blades and FEXs are housed (for more details, see the Blade Server Chassis section in Chapter 8).

ToR architecture:

Since the UCS comes with FEXs that slot into the back of the UCS chassis, no other ToR devices are required. The fabric interconnects are located at an EoR rack.

The ToR layout is similar to Figure 9.13 except without the ToR FEX.

Since all four fabric uplinks are used on each UCS FEX in the design, a total of eight fiber strands are utilized from the UCS server rack to the EoR rack.

The high-level compute submodule design 2 illustrated in Figure 9.20 includes the following components:

Two Nexus 5020 switches, AC-SW1 and AC-SW2:

AC-SW1 has two pairs of 10GE uplinks: one pair to AGG-1 and another to AGG-2. The same applies for AC-SW2.

AC-SW1 has two 10GE/FCoE downlinks: one to Rack Server-1 and the other to Rack Server-2. The same applies for AC-SW2.

The abovementioned uplinks and downlinks for both the NX5K switches (AC-SW1 and AC-SW2) are 10GE/FCoE/SFP+ ports.

AC-SW1 has two native FC links to MDS-1 (SAN A). The same applies for AC-SW2 to MDS-2 (SAN B).

Two UCS C200 or generic rack servers: one (Rack Server-1) for ESX1 and the other (Rack Server-2) for ESX2.

One dual-port CNA per rack server:

10GE/FCoE connection is implemented on the dual-port CNA between the ESX hosts (ESX1 and ESX2) and the NX5K switches (AC-SW1 and AC-SW2). For more details on the FCoE configurations, see the Access Layer to Ethernet LAN section.

For each ESX host (rack server), one CNA port is connected to AC-SW1 and the other is connected to AC-SW2.

The dual-port CNA is used by the ESX hosts to access both the DC Ethernet LAN and the FC SANs (via FCoE).

ToR architecture:

The two NX5K switches can be ToR switches within the server rack. The server rack layout is similar to Figure 2.26.

The two NX5K switches can also be located at an EoR rack. In this case, the ToR layout is similar to Figure 9.13 except without the ToR FEX.

URL: https://www.sciencedirect.com/science/article/pii/B978012384919900009X

Layers and the Evolution of Communications Networks

Daryl Inniss, Roy Rubenstein, in Silicon Photonics, 2017

2.4.1 Data Center Networking is a Key Opportunity for Silicon Photonics

The data center is one of the most dynamic arenas driving technological innovation, whether in servers, storage, networking, or software. As such the data center represents a key market opportunity for silicon photonics. Server racks continue to evolve and need faster networking between them. And while copper cabling plays an important role in linking equipment in the data center—what we refer to as Layer 3—optical interconnect is gaining in importance as the amount of traffic rattling around the data center grows.

Copper cabling can link equipment up to 10 m apart but it is bulky, which can restrict equipment air flow and impede cooling. Copper cabling is also heavy: the sheer weight of connections on the front panel of servers or switches has been known to cause disconnects to equipment.

Optical connections are lighter, less bulky, and cover much greater distances—500 m, 2 km, and 10 km—more than enough to span the largest data centers.

Such connections are situated on the front of the equipment, referred to as the faceplate. The faceplate typically supports a mix of interfaces and technologies: electrical interfaces using copper cabling as well as optical links using fiber connectors and optical modules. Optical modules are units that plug into the faceplate. Modules support all sorts of speeds and distances within the data center (see Appendix 1: Optical Communications Primer). Speeds include 1, 10, 40, and 100 Gb/s; copper links may range up to a few meters, while optical spans up to 10 km. The optical modules, pluggable or fixed, can also use dense wavelength-division multiplexing to enable Layer 4 metro and long-haul distances. Such optical connections have become an early and obvious market for the silicon photonics players to target.

URL: https://www.sciencedirect.com/science/article/pii/B9780128029756000028

Cloud and Mobile Cloud Architecture, Security and Safety

C. Mahmoudi, in Handbook of System Safety and Security, 2017

10.2.1 Business Benefits

Building applications in the Cloud offers several benefits to organizations. One important benefit relates to the cost of installation. Building a large-scale system is a big investment in terms of cost and complexity. It requires investment in hardware infrastructure, including racks, servers, routers, and backup power supplies. It also requires a location for the data center, which in turn requires investment in real estate and physical security. Moreover, it entails recurring charges for hardware management and operations personnel. Obtaining approval for this high upfront cost would typically take several rounds of management sign-off before the project could even get started. A Cloud-based solution bypasses such startup costs.

Even if the organization has an existing on-premise infrastructure, the scalability of an application can become a problem if the application becomes popular. In such cases you become a victim of your own success when the on-premise infrastructure does not scale to offer the resources the application needs. The classical solution to this kind of problem is to invest heavily in infrastructure, hoping that its size will be enough to absorb the application's popularity. With a Cloud infrastructure, the Cloud provider manages the infrastructure, and you can rescale the resources allocated to the deployed application in a just-in-time manner. This increases agility, helps the organization reduce risk, and lowers operational cost. It means that the organization can scale only as it grows; moreover, the organization pays only for its real resource usage.

To use resources efficiently on-premise, system administrators have to deal with ordering delays while procuring hardware components when the data center runs out of capacity, and they have to shut down parts of the infrastructure when they have excess, idle capacity. By using the Cloud, the management of resources becomes more effective and efficient, since system administrators can obtain resources on demand, immediately.

Cost is one of the most important factors for businesses. With on-premise infrastructures, organizations have fixed costs, independent of their usage. Even if they are underutilizing their data center resources, they pay for both the used and the unused infrastructure in their data centers. The Cloud introduces a new dimension of cost saving that is visible immediately on the next bill and provides cost feedback to support budget planning. The usage-based costing model is very interesting for organizations that actively practice application optimization. Applying an update that uses caching to reduce calls to their back office by 50% will have an immediate impact on costs, and the savings will accrue immediately after the update. This on-demand costing model also affects organizations that have peaks of activity: the peaks will be reflected on their invoice as an additional charge.

Organizations whose business is data analysis oriented can get impressive results in terms of reduced time to market by using the Cloud. Since the Cloud offers a scalable infrastructure, parallelization of data analytics is one effective way to accelerate time to results: putting parallel analysis processes that would normally take 100 hours on one machine onto 100 instances in the Cloud reduces the overall processing time to 1 hour. Swapping machine instances in this way is at the heart of Cloud IaaS. Moreover, Cloud providers offer specific solutions that exploit parallelization using big-data techniques. By using this elastic infrastructure provided by the Cloud, applications can reduce time to market without any upfront investment.
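
The 100-hours-to-1-hour example works because 100 machine-hours cost roughly the same whether they run serially on one instance or in parallel on 100 (assuming perfect parallelism). The sketch below makes that explicit; the hourly rate is an assumed figure for illustration.

```python
# 100 machine-hours of analysis: one instance for 100 hours vs. 100 instances
# for 1 hour. The hourly price is an illustrative assumption.

def wall_clock_and_cost(total_machine_hours, instances, hourly_rate):
    wall_clock = total_machine_hours / instances   # assumes perfect parallelism
    cost = total_machine_hours * hourly_rate       # machine-hours billed stay the same
    return wall_clock, cost

for instances in (1, 10, 100):
    hours, cost = wall_clock_and_cost(100, instances, hourly_rate=0.50)
    print(f"{instances:>3} instances: {hours:>5.1f} h wall clock, ${cost:.2f}")
```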

URL: https://www.sciencedirect.com/science/article/pii/B9780128037737000103

Next-Generation Data Center Architectures and Technologies

Stephen R. Smoot, Nam K. Tan, in Private Cloud Computing, 2012

Rack topologies

The DC access layer can be partitioned into pods for a modular build-up. A pod represents an access-layer building block that defines the number of servers connected to a network aggregation block. Within the pod, server connectivity is handled at the rack level by ToR access switches, which in turn are connected to the network aggregation layer. The idea is to facilitate the rack-and-roll approach for rapid server provisioning in the DC. A pod comprises racks of servers built on the following rack topologies (which are often used together):

End-of-the-row topology: This topology uses large, director-class switching devices at the end of each row of server racks, which requires significant cabling bulk to be carried from all server racks to the network rack. The main benefit of this topology is that fewer configuration points (switches) control a large number of server ports.

Top-of-the-rack topology: This topology consists of one rack-unit (1RU) or two rack-unit (2RU) access-layer switching devices at the top of each server rack, providing the server connectivity within each rack. This topology is more efficient in terms of cabling because fewer cables are required from each rack to the end-of-the-row switch. Nevertheless, the top-of-the-rack topology requires more ToR switches as compared to the end-of-the-row topology for the same number of switch ports, which increases the management burden.

I/O consolidation with FCoE reduces the number of adapters required on servers. This reduces the total number of cables required to link up the servers and thus eases the cabling bulk in end-of-the-row topologies. Because FCoE extends native FC access to Ethernet, there is no need for separate ToR FC switches. In other words, one homogeneous set of ToR FCoE switches will suffice and this alleviates the requirement to have more ToR switches in top-of-the-rack topologies.

Figure 2.26 illustrates a simple top-of-the-rack topology based on the FCoE phase 1 deployment (see Figure 2.23), but this time from the rack-level perspective and with 12 servers. The server I/O consolidation effort with FCoE reduced the overall cables from 56 to 32 (approximately a 43% reduction). The number of single-port adapters was reduced from 48 to 24 (a 50% reduction) and the number of ToR switches from 4 to 2 (a 50% reduction), which is not bad from the DC consolidation standpoint and certainly a plus in private cloud computing environments.
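
The reductions quoted above can be reproduced with a little arithmetic: before consolidation each of the 12 servers carries four single-port adapters (Ethernet plus FC) cabled into four ToR switches, while afterwards a dual-port CNA per server feeds two FCoE ToR switches. Treating the remaining eight cables as ToR uplinks is an assumption made so the totals match the figures in the text.

```python
# Reproducing the Figure 2.26 cable arithmetic for 12 servers. Treating the
# eight non-server cables as ToR uplinks is an assumption made to match the
# totals quoted in the text (56 -> 32 cables, 48 -> 24 adapters, 4 -> 2 switches).

SERVERS = 12
UPLINKS = 8            # assumed uplink cables from the ToR switches

def totals(ports_per_server, tor_switches):
    adapter_ports = SERVERS * ports_per_server
    cables = adapter_ports + UPLINKS
    return adapter_ports, cables, tor_switches

before = totals(ports_per_server=4, tor_switches=4)   # 2 Ethernet + 2 FC ports per server
after  = totals(ports_per_server=2, tor_switches=2)   # 1 dual-port CNA (FCoE) per server

for label, (ports, cables, switches) in (("before", before), ("after", after)):
    print(f"{label}: {ports} adapter ports, {cables} cables, {switches} ToR switches")
print(f"cable reduction: {1 - after[1] / before[1]:.0%}")
```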

Figure 2.26. Cabling reduction with FCoE

In short, the ToR architecture helps to modularize and mobilize server racks into a rack-and-roll deployment model. For more details on the ToR architecture and design, see the Top-of-Rack Architecture Design Study section in Chapter 9.

Note:

If dual-port adapters are used, a total of 24 units are required before the FCoE phase 1 deployment. This total is cut in half to just 12 units after the consolidation.

URL: https://www.sciencedirect.com/science/article/pii/B9780123849199000027

Methodology for the Evaluation of SSD Performance and Power Consumption

Jalil Boukhobza, Pierre Olivier, in Flash Memory Integration, 2017

8.1.2 Hardware platform

The analysis of the energy consumption of the SSD is carried out with two techniques. The first makes it possible to measure the overall energy consumption of a system as a function of the storage system used. To this end, we used a Power Distribution and metering Unit (PDU). A PDU is a device that distributes electric power to rack servers and other equipment; a metered PDU allows us to measure the power consumption per electric outlet. Therefore, by means of a PDU, we can measure the overall energy consumption of a given server. The second technique, which allows a finer measurement specific to the storage system, uses a device capable of measuring the energy consumption of storage devices. For this purpose, we inserted a power sensor on the power cable of the SSD and of the HDD (described in section 8.5). This allows a finer and more accurate measurement than the PDU. The two views are complementary: the overall measurement shows the impact of I/O operations on the overall performance of a system, whereas the finer measurement makes it possible to measure the precise difference in behavior and energy consumption between storage devices.
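
Either technique ultimately turns a stream of power samples into an energy figure. A minimal version of that computation, integrating sampled power over time with the trapezoidal rule, is sketched below; the sample values and sampling interval are illustrative, not measurements from the study.

```python
# Energy from sampled power: integrate P(t) over time (trapezoidal rule).
# Works the same for whole-server (PDU) samples or device-level sensor samples.
# Sample values and interval are illustrative.

def energy_joules(power_watts, dt_seconds):
    return sum((a + b) / 2 * dt_seconds for a, b in zip(power_watts, power_watts[1:]))

server_w = [180, 195, 210, 205, 190, 185]   # PDU samples for the whole server
ssd_w    = [2.1, 3.4, 4.0, 3.8, 2.9, 2.2]   # inline sensor samples for the SSD

dt = 1.0  # seconds between samples
print(f"server energy: {energy_joules(server_w, dt):.1f} J")
print(f"SSD energy:    {energy_joules(ssd_w, dt):.1f} J")
```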

URL: https://www.sciencedirect.com/science/article/pii/B9781785481246500116

Data Center Power

Caesar Wu, Rajkumar Buyya, in Cloud Data Centers and Cost Modeling, 2015

5.3.2 Circuit Breaker Coordination

This part could also be placed in the section on PDUs or rack panels, since they are closely associated with the circuit breaker. If we configure the circuit breakers incorrectly, they will cause many headaches during data center operation. To illustrate this issue, let's have a look at the following example.

As we can see on the left of Figure 5.7, when server A is connected to a rack, server A can draw a maximum current of 20 A. If the rack panel circuit breaker is rated at 16 A, this will cause trouble, because when the rack panel circuit breaker trips, all the other servers (servers B, C, and D) connected to this rack panel will lose their power.

Figure 5.7. Wrong and correct circuit breaker configuration.

The purpose of a circuit breaker is to protect IT equipment when the power is overloaded or a short circuit occurs. The circuit breaker is supposed to open if the current rises above the specified level. However, circuit breaker coordination can become very complicated. When the setup isn't designed carefully, it can bring other IT equipment down and trigger many unnecessary, unplanned outages. An example is illustrated on the left side of Figure 5.7.

In contrast, if we configure the circuit breaker values correctly, unnecessary server outages can be avoided. On the right side of Figure 5.7, if we increase the rating of the rack panel circuit breaker from 16 A to 25 A and reduce the outlet circuit breaker from 25 A to 16 A, in other words swap the two circuit breakers, the rack panel circuit breaker will not trip if server A is overloaded.
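
The coordination rule behind the example is that every downstream (outlet) breaker should be rated below the upstream (rack panel) breaker, so an overload on one server trips only its own outlet. The sketch below checks that rule for the two configurations in Figure 5.7; comparing ratings this way is a simplification of real breaker trip curves.

```python
# Simplified breaker-coordination check for the Figure 5.7 example.
# Real coordination uses trip curves; comparing ratings is a simplification.

def coordinated(panel_rating_a, outlet_rating_a):
    """The outlet breaker must be able to trip before the rack panel breaker."""
    return outlet_rating_a < panel_rating_a

wrong   = {"panel": 16, "outlet": 25}   # left of Figure 5.7
correct = {"panel": 25, "outlet": 16}   # right of Figure 5.7 (breakers swapped)

for name, cfg in (("wrong", wrong), ("correct", correct)):
    ok = coordinated(cfg["panel"], cfg["outlet"])
    verdict = "only the outlet trips on overload" if ok else "panel may trip, dropping servers B-D"
    print(f"{name}: panel {cfg['panel']} A, outlet {cfg['outlet']} A -> {verdict}")
```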

URL: https://www.sciencedirect.com/science/article/pii/B9780128014134000052

Access Control Design

Thomas Norman, in Electronic Access Control, 2012

System Acceptance

System Acceptance is the time every Integrator, Project Manager, and Installer likes best, because the project is almost complete. For the system to be accepted, the Project Manager must show the Owner that the project is complete in accordance with the project requirements.

The Project Manager should conduct a project tour to include the Owner's Representative, Consultant, and others as required. The tour should include viewing of every equipment control panel, power supply, equipment rack, server rack, console, and workstation. It should also include a floor-by-floor tour of all installed devices. Finally, show all of the functions on the workstations so that they can see that everything works.

The Owner's Representative or Consultant should note any discrepancies or remedial work items that still need attention. This will be issued by the Owner's Representative or Consultant as an official Punch List. Take your own notes of the Punch List items they intend to list.

Begin working on the Punch List immediately, even before receiving the official copy. Complete the Punch List and resubmit for final acceptance. Don't submit bits and pieces; finish the whole Punch List and then resubmit. This will finish the project much faster.

When the Punch List is complete, submit a System Acceptance form to include the Warranty Statement. Also include As-Built Drawings, Manuals, cabinet/rack keys, and any Portable Items or Spare Parts called for in the contract. Receive a signature for all items. Have an initial beside each item delivered with the date of acceptance. You will need this in case there is a dispute over what has and has not been delivered.

Congratulations! Your project is complete!

URL: https://www.sciencedirect.com/science/article/pii/B9780123820280000259