Quick Start Tutorial #2

The purpose of this tutorial is to introduce you to aspects of CUTS that were already completed for you in Quick Start Tutorial #1. For this tutorial, we will use a larger example from the domain of shipboard computing environments called the SLICE Scenario. The goal of the SLICE Scenario is to identify the deployments and configurations that allow the system to meet its end-to-end response time requirement. Although the configuration of the system can vary, working through it will give you an understanding of one key use of CUTS. At the end of this tutorial, you will have a basic understanding of the following:

  1. Adding workload to a component's behavior
  2. Calibrating the CPU workload generator to support emulation

 

1. Adding workload to a component's behavior

In Quick Start Tutorial #1, the PongComponent already contained behavior. In particular, the PongComponent's behavior contained a CPU action that allowed it to execute workload on a CPU when it received an event. In this section of the tutorial, you will learn how to create a similar model with CPU workload (i.e., a CPU workload generator modeled using the Workload Modeling Language (WML)). You can then apply this knowledge to any model that needs workload added for realistic behavior and emulation.
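
Conceptually, a CPU workload action keeps the processor busy for the requested number of milliseconds by doing real work instead of sleeping. The sketch below illustrates that idea only; it is not the actual CUTS_CPU_Worker implementation, and the busy_run function and count_per_msec parameter are made-up names used for illustration.

  // Illustrative only: the idea behind a CPU workload action. The
  // count_per_msec factor is the number of loop iterations the host can
  // execute in one millisecond, which is what the calibration step in
  // Section 2 determines for each architecture.
  void busy_run (unsigned long msec, unsigned long count_per_msec)
  {
    volatile unsigned long dummy = 0;

    for (unsigned long i = 0; i < msec * count_per_msec; ++i)
      ++dummy;    // meaningless work that keeps the CPU occupied
  }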

Importing the CPU workload generator

There are several ways you can import the CPU workload generator into a model. In this tutorial, we are going to import the CPU workload generator by attaching it as a GME library to the current model. This can be done via the following steps:

  1. Right-click the model's RootFolder named QuickStart2 and select Attach Library....
  2. Select the CPU workload generator model located at $(CUTS_ROOT)/cuts/workers/CPU/models/CUTS_CPU_Worker.mga. If a .mga file does not exist in the specified directory, then please see the note below.

The following figure highlights the model with the CPU workload generator model attached as a library.

Note: If the CPU workload generator model file above is in the .xme format and not the .mga format, then you will have to first import that model into a new instance of GME and save it in the .mga format. This will then allow you to attach the CPU workload generator model as discussed above. This approach also applies to any other model that you want to attach as a library.

Adding the CPU workload generator to the model

Once the CPU workload generator model has been attached as a library to the current model, you can add a CPU workload generator to a behavior model. With this in mind, we are going to add CPU workload to the ConfigOp, PlannerOne, and PlannerTwo components. In addition, we must make sure each component pushes the event id through so that timing information can be properly correlated across the components. We are going to start with the PlannerOne component. To add behavior and CPU workload to this component, please complete the following steps:

  1. Open the InterfaceDefinitions/ComponentTypes/PlannerOne model element.
  2. Switch to the Behavior aspect.
  3. Insert a WorkerType model element and set it to reference the WorkerLibraryFolder/CUTS_CPU_Worker/CPU_Worker/CUTS_CPU_Worker model element in the imported CPU workload generator library.
  4. Change its name to cpuGen.
  5. Insert an InputAction model element.
  6. Connect the recvEvent InEventPort model element to the newly inserted InputAction model element.
  7. Insert an Action model element. You will notice its name changes to cpuGen.
  8. Insert an OutputAction model element. You will notice its name changes to pushEvent.
  9. Connect the final State model element to the initial InputAction model element.
  10. Open the cpuGen action model element, select the msec property and input 30 for its Value attribute.
  11. Open the OutputAction and insert a SimpleProperty model element. Change its name to eventCount.
  12. Set the Value attribute of the eventCount property model element to: ev->eventCount().

The following image highlights the behavior model for the PlannerOne component. Your model should look similar to this figure.

Please repeat the steps above for the PlannerTwo and ConfigOp components. Once you have completed the behavior models for the remaining components, you can generate their source code and proceed with executing the emulation, as explained in Quick Start Tutorial #1.
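
For reference, the behavior modeled above roughly corresponds to handler code along the following lines. This is only a hypothetical sketch: the real servant code is produced by the CUTS code generators, and every name shown here (Event, CPU_Worker, push_recvEvent, send_pushEvent) is an assumption rather than the generated interface.

  #include <cstddef>

  // Hypothetical sketch of the PlannerOne behavior modeled above; the real
  // code is generated by CUTS, and all types and method names below are
  // illustrative stand-ins, not the generated interface.
  struct Event
  {
    std::size_t eventCount;              // event id used to correlate timing
  };

  struct CPU_Worker
  {
    void run (unsigned long msec);       // execute ~msec of CPU workload
  };

  struct PlannerOne
  {
    CPU_Worker cpuGen;                   // the WorkerType element named cpuGen

    // recvEvent -> cpuGen -> pushEvent, as modeled in the Behavior aspect
    void push_recvEvent (const Event & ev)
    {
      // cpuGen action: generate 30 msec of CPU workload (the msec property)
      this->cpuGen.run (30);

      // pushEvent action: forward the event id (eventCount = ev->eventCount())
      // so end-to-end timing can be correlated across components
      Event out;
      out.eventCount = ev.eventCount;
      this->send_pushEvent (out);
    }

    void send_pushEvent (const Event & e); // emits on the pushEvent port
  };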

 

2. Calibrating the CPU workload generator to support emulation

When you executed the system generated in Quick Start Tutorial #1, it was not producing realistic workload. It was executing, but if you recall there was a warning message about the CPU workload generator not being calibrated. This means that in order to produce realistic CPU workload, you must calibrate the CPU workload generator for that particular machine. This seems to imply that the CPU workload generator must be calibrated on every machine that uses it. Although this statement has some truth to it, it is not entirely true. Instead, you must calibrate the CPU workload generator for each architecture type in your experiment. Once you have calibrated the CPU workload generator, you can copy the configuration between machines of the same architecture type. This is explained later in the tutorial.

To calibrate the CPU workload generator, execute the following command:

%> $CUTS_ROOT/bin/cuts-calibrate -f CUTS_CPU_Worker

The calibration process can take between 10 and 30 minutes. Once the calibration process is complete, the configuration file for the CPU workload generator will be stored in $CUTS_ROOT/etc/calibration/CUTS_CPU_Worker.[MACADDR], where MACADDR is the MAC address of the machine.
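
To give some intuition for what calibration computes, the sketch below times a block of busy-loop iterations and derives an iterations-per-millisecond factor, which is the kind of per-host value a calibration profile captures. This is an illustration only; the measurement actually performed by cuts-calibrate and the format of the generated configuration file are CUTS-specific.

  #include <chrono>
  #include <iostream>

  // Illustrative only: estimate how many busy-loop iterations this host can
  // execute per millisecond. The real cuts-calibrate tool performs a far more
  // thorough measurement; this merely shows the kind of factor it derives.
  int main ()
  {
    using namespace std::chrono;

    const unsigned long long samples = 200000000ULL;   // iterations to time
    volatile unsigned long long dummy = 0;

    const steady_clock::time_point start = steady_clock::now ();
    for (unsigned long long i = 0; i < samples; ++i)
      ++dummy;
    const long long msec =
      duration_cast<milliseconds> (steady_clock::now () - start).count ();

    std::cout << "iterations per msec: "
              << samples / static_cast<unsigned long long> (msec ? msec : 1)
              << std::endl;
    return 0;
  }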

 

Copying calibration files

As mentioned above, once you have calibrated the CPU workload generator on one machine of a given architecture, you can copy the resulting configuration file to other machines of the same architecture. This can be done by simply using your copy command, such as:

  
  %> cd $CUTS_ROOT/etc/calibration
  %> cp CUTS_CPU_Worker.[source MACADDR] CUTS_CPU_Worker.[destination MACADDR]

This is very useful when working in an environment with a large number of hosts, such as an integration testbed.