Semicon IC Design Training Center


1. About this document

This document has three target audiences. Those who are not at all familiar with Specman should find in the first chapter a brief, objective account of its main working principles and a discussion of its pros and cons.

The second and third chapters aim at beginners with the E language and describe some of its most salient characteristics, along with examples. I hope that readers of these chapters will gain some knowledge both of low level syntax, through the examples and comments, and of the high level ideas that stand behind them. The last chapter deals with the verification methodology associated with Specman and E, and can benefit both beginners and experienced users.

Though it will soon become clear to the reader, I must emphasize that this document was not written with the knowledge or on behalf of Verisity, and that all the information, examples and tips that you will find herein have not been verified or approved in any way by Verisity.

3. At A Quick Glance

Directed verification vs. Random verification

Specman is a development environment for the E language, somewhat as MFC or Borland C++ are development environments for C++, the main difference being that E code can never execute standalone, without Specman. Since, for the moment at least, until E becomes an industry standard, working with Specman means writing E and vice versa, the term Specman in this chapter is used to refer to both.

As automated verification software, the purpose of Specman is to help you find bugs in a Verilog or VHDL design. The simplest way to find bugs in a Verilog or VHDL design is with a Verilog or VHDL testbench. Verilog or VHDL testbenches are usually called directed testbenches, while a Specman testbench is called a random testbench. You must bear in mind that random is not a synonym for Specman or E - there are other tools (for example VERA) and a lot of other ways, besides Specman, to build a random testbench. I will now briefly explain the difference between directed and random testbenches.

When you build a directed testbench you first have to think a lot about the places in your design where bugs might hide, or the weakest points in your design. Once you have a list of these, you assign values to the inputs in order to check your design at these specific points. For example, if you have a counter, you might want to check the behavior of your design when this counter is zero, when it reaches its maximal value or when it wraps around. Hence, you have to think about an input sequence that will make your counter reach these states. Therefore, normally, a directed testbench is made up of several separate sequences of input values, or tests, and each of these is supposed to make your design go into a specific state, which seems to you to be problematic.

The problem here is of course that you have to think of most of the problematic parts yourself. There might be many problematic parts, where bugs might be hiding, that you haven't thought of. Also, in order to know where the problematic parts are, you usually have to be quite familiar with the design. It takes a very good engineer with a lot of experience to find the problematic points in a design made by someone else. This means that normally designers write both the design and the directed testbench for it. However, if a certain conceptual bug did not occur to the designer while he was writing the code, he is not likely to think about it when he is checking the code. The best approach, of course, is to have somebody else check it.

Random verification is meant, first and foremost, to overcome the problems just presented. Usually it means that you just provide constraints, or certain limits, on the inputs. Within these limits, values are selected randomly by the software. Verilog, and to a somewhat lesser extent VHDL, both support random generation of values - simply put, some sort of method like rand() in C. However, this is not quite enough. There are plenty of times when you would like to limit the values to a certain range, or to create a dependency between the values that you allow for one input and the values that you allow for another. If you are randomly creating (or generating) an Ethernet packet, you definitely want the values of some fields, and even their length in bits, to be dependent on the values assigned to other fields. Trying to do this with the limited support of Verilog or VHDL is more or less like banging your head against a concrete wall.
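Since E code can never run outside Specman anyway, a plain Python sketch may help make the idea concrete. All field names and dependency rules below are invented for illustration, and a simple if/else stands in for a real constraint solver:

```python
import random

def gen_packet(rng):
    # Hypothetical packet: the length is constrained to a legal range, and
    # the legal values of the type field depend on the length that was drawn.
    length = rng.randint(64, 1500)                 # like: keep length in [64..1500]
    if length <= 100:
        pkt_type = rng.choice(["control", "ack"])  # short packets: control traffic only
    else:
        pkt_type = rng.choice(["data", "video"])   # long packets: payload traffic only
    payload = [rng.randrange(256) for _ in range(length)]
    return {"length": length, "type": pkt_type, "payload": payload}

rng = random.Random(2024)
for _ in range(100):
    pkt = gen_packet(rng)
    # every generated packet respects both the range and the dependency
    assert 64 <= pkt["length"] <= 1500
    assert (pkt["type"] in ("control", "ack")) == (pkt["length"] <= 100)
```

Expressing the same two rules with bare $random arithmetic in a Verilog testbench is exactly the head-banging described above.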

Now, once your inputs are free to move within certain limits at random, you let your random verification testbench run a long time (from several nights to months and even years) in the hope that it will find interesting bugs. Of course, if you have a lot of inputs and your design is very complicated, your random verification might never produce all the possible sequences. This, however, is not a problem - you never wait for all the possibilities to be exhausted before you call it a day. Instead, you will stop running your testbench when the intervals between interesting bugs become too long, since this means either that your design is more or less clean (hopefully) or that your random testbench is not doing its work properly. In either case it would be better to dedicate your computing resources, which are usually limited, to another purpose.

It is important to note that random testing does not mean the designer does not have to think hard about the most problematic points in his design, only that now you don't have to count on him as much as before. The problematic points can now be used as test cases for the random testbench. You should check that your random testbench made the design go into all the problematic states that you meant to check using a directed testbench. You do this with coverage, which, despite some significant developments (and very good public relations), is still essentially like placing a breakpoint on a complicated line in your VHDL or Verilog design and checking that it works properly. Another option is for each designer to write a small directed testbench for his or her block in Verilog or VHDL. This might save you some money on Specman licenses, at the price of depriving you of your simple test cases.
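As a caricature of what a coverage model buys you, here is a small Python sketch (the counter, step sizes and coverage bins are all invented): a random run exercises an 8-bit counter, and we count how often it reaches the corner cases that a directed test would have targeted explicitly.

```python
import random

# Toy coverage model: count how often an 8-bit counter hits the
# "interesting" corner cases (zero, max, wrap) during a random run.
coverage = {"zero": 0, "max": 0, "wrap": 0}
counter, rng = 0, random.Random(1)

for _ in range(10_000):
    step = rng.randint(0, 3)
    old, counter = counter, (counter + step) % 256
    if counter == 0:
        coverage["zero"] += 1
    if counter == 255:
        coverage["max"] += 1
    if counter < old:                 # wrapped past the maximum
        coverage["wrap"] += 1

# A coverage report tells you which corner cases the random run never reached,
# i.e. where you still need a directed test or tighter constraints.
missed = [name for name, hits in coverage.items() if hits == 0]
print("missed bins:", missed)
```

Bins that stay at zero after a long run are precisely the states you would then target with extra constraints or a small directed test.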

How does Specman work?

The idea on which Specman is based is quite simple. Both Verilog and VHDL, in somewhat different ways, allow the user to call external functions (also called callbacks). Also, almost every simulator on the market supports standard C libraries that enable external applications to perform all kinds of operations on its data structures. You can, for example, assign values to signals, run the simulation, stop it or find nets in the design, all from an external application. In this way it is possible, for example, to call an external callback from a Verilog design in the middle of a Modelsim simulation, and then make that callback find the names of all the signals in the design that begin with the letter A and print those signals to the simulator console. I once did that when I was extremely bored.

As mentioned above, Verilog and VHDL, which are supported by almost every simulator on the market, are too limited to support all the capabilities that random verification requires. So instead of writing a Verilog or VHDL testbench and then compiling it inside the simulator, we can write complicated callbacks in C or C++ and have all the flexibility and arithmetic libraries we need. In this way, for example, we can implement much more complicated random generation than Verilog or VHDL allow. This is, in fact, more or less what Specman does.
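The mechanism just described can be caricatured in a few lines of Python. This is a toy stand-in for the simulator's C interface, not any real API; the class and method names are invented. The "simulator" owns the signal values and invokes registered external callbacks on every change, so the whole testbench can live outside the HDL:

```python
# Toy model of a simulator that exposes its data structures to external
# callbacks, in the spirit of the C interfaces described above.
class ToySimulator:
    def __init__(self):
        self.signals = {}
        self.callbacks = []          # invoked on every signal change

    def register_callback(self, fn):
        self.callbacks.append(fn)

    def set_signal(self, name, value):
        self.signals[name] = value
        for fn in self.callbacks:
            fn(self, name, value)

seen = []
def my_callback(sim, name, value):
    # e.g. collect every changed signal whose name begins with "a",
    # much like the bored-afternoon experiment described above
    if name.startswith("a"):
        seen.append((name, value))

sim = ToySimulator()
sim.register_callback(my_callback)
sim.set_signal("addr", 7)
sim.set_signal("data", 3)
print(seen)  # → [('addr', 7)]
```

Specman's generation, checking and coverage engines all hang off callbacks of essentially this shape.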

The only purpose of this complicated explanation is to make you ask the following question: if all of Specman is based on a C/C++ interface, why is it that the designers of Specman chose to invent this new language, called E, and then sell us all an integrated environment that includes a special debugger, optimizers, libraries and so forth? Why didn't they just write some C/C++ libraries that could provide exactly the same abilities as this new language? Had they done this, all we would have to buy from Verisity would be these libraries. All the rest of the development environment (debuggers, optimizers, whatever) could come from Borland, Microsoft or whoever sells integrated C/C++ development environments. By the way, it is good to know that there are a lot of companies that have exactly such libraries, for their own private use or for sale.

Of course, Verisity might fill a book with the reasons for the invention of E: elegance, the English-like structure of the statements, or all the new abilities they added that would be too complicated to add through libraries. One must admit that there is some truth to that, but personally I think that it was, more than anything else, an ingenious marketing decision. You just can't sell several C/C++ libraries for the prices they demand for their integrated environment (namely Specman). So they just complicated the market a bit in order to have a justification for the prices they demand. As a by-product they got all kinds of other advantages, not the least of which is the big money they pocket for the courses they give in this new language, which is now almost a recognized standard.

Specman pros and cons

The cons of Specman are obvious: it costs a lot of money to train your staff and to buy the licenses. In case you are training your electrical engineers, who normally don't possess a lot of programming experience, to do the Specman verification, expect the learning curves to be about half as steep as those shown in Verisity's presentations. Generally speaking, C/C++ programmers learn a lot faster, but this requires, of course, separate design and verification teams, which is something that not all companies, especially the smaller ones, can afford. There are other cons too: the environment contains a lot of bugs, and it is my impression that Verisity programmers, just like any programmers in the world, are keener on inventing all kinds of complicated new features than on fixing the bugs in the old ones. In my dealings with Verisity's technical support I have found it to be neither quick nor very helpful, but that is of course my own personal experience, and I believe that people who work for larger companies might tell you a different story.

The most important pro one can find for Specman is its competitors, which are usually a lot worse. Other tools are a lot more complicated or contain a lot of bugs (although VERA is closing fast). And after all, you can't say that you don't get at least some of your money's worth. Specman is sometimes very annoying, but after a while, having gained some insights that I'll share with you soon, the work can become reasonable and sometimes even rewarding. Also, one has to admit that there are some cool parts, such as the possibility to extend structs and methods, and if you haven't worked with random generation before, you will probably be amazed by the number of bugs that you can find using quite a simple testbench.

The conclusion is: before you buy Specman, have a good look around. There might be some company that will provide the most reasonable solution - good C/C++ libraries - quite soon. Don't let Specman salesmen seduce you with the other features that Specman has, since the most important and effective part of Specman is its random generation. The other parts are, in my opinion, mostly nice to have.

The Special Features of E

The most important thing to say about E is that we should not make a big fuss about it. Around 95 percent of its features, syntax and capabilities will not be a big surprise to anyone who has done a bit of object oriented programming, and the remaining 5 percent do not make the difference between E and other languages nearly as big as the one between a human and a monkey. In this part I will go over the major features that make E a bit different than the rest. While reading, you might find it helpful to consult the proposed E standard, which is now on the net (see it here).

Content oriented

E is said by its developers to be content oriented. The term refers to a bunch of features which allow the user to extend data structures and methods in file locations other than the ones in which they were originally defined. For example, if you have the following struct:

// example 1

<'
struct fruits {
   apples : uint;
   oranges : uint;

   sum_fruits_up() : uint is {
      result = apples + oranges;
   };
};
'>

you can extend it, or in other words, add other member variables, methods, and constraints (more on them below) in another file or in the same file. The methods in the data structure can be extended as well:

// example 2

<'
extend fruits {
   mangos : uint;
   papayas : uint;

   sum_fruits_up() : uint is also {
      result += mangos + papayas;
      // The last value written into result is the one that will be returned
      // from the method. Note that if the original method had been written
      // as "return apples + oranges;" instead, I would not have been able
      // to extend it this way.
   };
};
'>

Apart from being a nice feature, the best thing about the content oriented approach is that it allows you to add constraints on the random generation of variables from other files. This is extremely useful for directing your tests. Usually, when you write the part that generates the inputs (inputs here also means packets, a CPU model or whatever), you want it to be as general as possible and to generate all the possible inputs. However, when you start testing your design, you normally want to check it feature by feature, starting from the most basic features and then proceeding to the more exotic ones. Therefore, at the start you add to your test file, through extensions, a lot of extra constraints on your inputs, in order to direct your verification to the basic features only. As the verification advances, you start removing the constraints from your tests, thus letting your verification reach other features as well. You will probably even create special tests which, through another set of constraints, will direct your verification to these more complicated features.
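The layering idea can be sketched in Python. This is only a caricature: the class and names are invented, and naive rejection sampling stands in for Specman's real constraint solver. The generic generator owns its base constraints, and each test narrows them further, just as an extend in a test file would:

```python
import random

class LengthGenerator:
    def __init__(self):
        # base constraint shipped with the generic generator,
        # like: keep length >= 10 and length <= 20
        self.constraints = [lambda n: 10 <= n <= 20]

    def add_constraint(self, pred):
        # what a "keep" added through an extend in a test file does
        self.constraints.append(pred)

    def generate(self, rng):
        # rejection sampling: draw until all constraints hold
        while True:
            n = rng.randint(0, 30)
            if all(c(n) for c in self.constraints):
                return n

gen = LengthGenerator()
gen.add_constraint(lambda n: n % 2 == 0)   # a test narrows to even lengths only
rng = random.Random(0)
values = [gen.generate(rng) for _ in range(5)]
assert all(10 <= v <= 20 and v % 2 == 0 for v in values)
```

Removing the test's extra constraint later widens the generator again without touching the generator's own file, which is the whole point.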

Generation and Constraints

As mentioned before, random generation works with constraints. Most of the inputs of a design are limited to a specific range, follow a certain order and, in general, are not totally random. For example, the size of an Ethernet packet is between 64 and 1500 bytes and the data contained in every field must obey certain rules. A CPU or a bus controller must perform its operations in a certain order and so on. Constraints are usually the rules that define the allowed behavior for the inputs of a design. Here is a basic example:

// example 3

<'
struct packet_s {  // structs are more or less like classes or structures in C;
                   // I will talk more about them later on

   length : uint;
   // This is a regular field that will be given a random value within the
   // relevant constraints when a new packet is created.
   keep length >= 10 and length <= 20;   // length of the packet in bytes

   %header1 : byte;
   // The percent sign indicates that this field is a real part of the
   // packet, which will be sent (a physical field). The length field, for
   // example, is a virtual field and will not be sent. After the packet is
   // created, you can translate it into bits using a method of every
   // struct called pack() (which is a lot like the serialize() method
   // that C++ frameworks provide for writing data structures into a file).
   // pack() translates all the real, physical fields into bits and then
   // concatenates them.

   keep header1 != 0;   // header1 can get any random value except 0

   %header2 : byte;
   keep header1 < 128 => header2 != 0;
   // The value of header2 depends on the value of header1. If header1 is
   // smaller than 128, header2 must not be equal to zero. If header1 is
   // equal to or greater than 128, header2 can get any value. Note that
   // header1 is assigned a random value (generated) before header2,
   // because header1 is defined above header2.

   %data : list of byte;   // data is a list of bytes
   keep data.size() == length - 2;
   // size() is a method of every list. One of the best things about
   // Specman is the large number of predefined methods for list objects.
   // When a list is randomly generated, the size of the list is also
   // random. The constraint assures that the length of the packet,
   // including the two header bytes, will in fact be equal to the
   // field length.
};
'>

You can see that constraints are usually static struct members. However, it is possible to assign random values to fields in sequential code too, in other words, in a method. It is my recommendation that you use static generation only where it is really needed, since debugging static code can be quite a headache, and is definitely much more complicated than debugging sequential code. For more information on this issue see this link.

A struct is usually generated on the fly (i.e., during runtime), at the time you would like to use it. Note that time in Specman means simulation time and not the computer system time, but that is not such a big difference from the programmer's point of view (I will talk more about that later). For example, if your design is a router that gets a new packet every 10 us of simulation time, you should generate a new packet every 10 us. This might be done in a method that is aware of the simulation time. The following example shows how this is done:

// example 4

<'
extend sys {
   // 'sys' is a unit that is automatically generated by Specman. During the
   // generation of 'sys' all its fields, including other unit instances, are
   // generated. These unit instances generate other unit instances until
   // all the unit instances in your hierarchy are generated. In this case,
   // for example, the generation of 'sys' leads to the generation of
   // "chip_env", which is an instance of "chip_env_u". In its turn, the
   // generation of "chip_env_u" causes the generation of
   // "packet_generator", which is an instance of "packet_generator_u", and
   // of "packet_driver", which is an instance of "packet_driver_u". Other
   // units might be instantiated by these units, or directly by "chip_env".
   // Verisity recommends that you do not make 'sys' the root of your
   // hierarchy. In other words, sys should instantiate only one unit, in
   // this case "chip_env_u", and this unit should start all the other
   // branches of your hierarchy. Following this suggestion is supposed to
   // make integration of several environments easier.

   chip_env : chip_env_u is instance;
};

unit chip_env_u {

   //...

   event clk_sys is rise('clk_sys') @sim;
   // When the signal "clk_sys" in the design goes from '0' to '1' this
   // event will be emitted (see more in the "temporal expressions" section
   // below). The events for the main clocks in your design should be
   // located at the root of the hierarchy, since they are used by almost
   // all units and therefore must be easily accessible from anywhere.

   //...

   packet_generator : packet_generator_u is instance;
   packet_driver : packet_driver_u is instance;

   // It is recommended to separate the "generator" and the "driver".
   // The generator is responsible for the generation of high level data
   // structures, such as packets. The driver translates the high level
   // data structures into bits and bytes and implements the physical level
   // protocols. This divide, which might sometimes seem a bit forced and
   // unnecessary, should be strictly kept for better code reuse (see more
   // below).

   //...
};

unit packet_generator_u {
   // Unlike a struct, which may be generated on the fly using the 'gen'
   // command (see below in this example), a unit cannot be generated on
   // the fly. It is generated once when the simulation starts and
   // destroyed when the simulation ends. However, a unit is not just a
   // degraded struct, since there are some statements that can be placed
   // only inside a unit.

   p_env : chip_env_u;
   // This means that "p_env" is a pointer to a "chip_env_u" unit.
   // Note that "p_env" is not an actual "chip_env_u" unit. To define a
   // real "chip_env_u" unit, you must add 'is instance' at the end, as
   // shown above in the extension to 'sys'.
   keep p_env == get_enclosing_unit(chip_env_u);
   // The unit "packet_generator_u" is instantiated by "chip_env_u".
   // Since the hierarchy of units is fixed (units are static objects), it
   // can always get a pointer to its parent by calling the predefined
   // method "get_enclosing_unit()". In this case, when the pointer
   // "p_env" is generated, it is assigned a reference to the parent.

   p_driver : packet_driver_u;   // Another pointer, this time to the object "packet_driver".
   keep p_driver == p_env.packet_driver;
   // "p_env" is generated before "p_driver" since it is located higher in
   // the file. Since "p_env" already references the object "chip_env", we
   // can use it in order to initialize the pointer to the driver.

   //...

   !last_packet_time : time;
   // The exclamation mark means that this field should not be assigned a
   // random value when the unit is generated. The default value is zero.

   event rdy_to_send is true('ready_for_pkt' === '1') @p_env.clk_sys;
   // When the signal "ready_for_pkt" in the design is '1' at the system
   // clock edge, this event will be emitted (see more in the "temporal
   // expressions" section below). The event "clk_sys" is a part of
   // "chip_env_u". Therefore it is accessed through the pointer "p_env".

   packet_gen() @rdy_to_send is {
      // A method defined in this way is called a TCM, or Time Consuming
      // Method. Unlike a regular method, a TCM is not executed in zero
      // simulation time: it can wait on events from the DUT or from E. The
      // event "rdy_to_send" is called the sampling event. When this event
      // occurs, the TCM wakes up and proceeds along its line of execution
      // until it encounters a time consuming action such as 'wait' or
      // 'sync'.

      while (TRUE) {   // Written in this form, the while will run until the simulation ends.

         var packet : packet_s;   // This is only a declaration, i.e., it is only used by the compiler.

         if sys.time - last_packet_time >= 10 {
            // sys is an object that exists in every Specman simulation
            // (more about it later). sys.time holds the simulator time.
            // Note that this line will be evaluated only when
            // "rdy_to_send" occurs.

            gen packet;
            // Only here is the packet really generated and the different
            // fields given a random value.

            last_packet_time = sys.time;   // remember when the last packet was generated

            p_driver.drive_packet(packet);
            // This line calls the method "drive_packet()" of
            // "packet_driver", an instance of "packet_driver_u", through
            // the pointer "p_driver".

         } else {
            wait cycle;   // wait until the next occurrence of the sampling event before checking the condition again.
         };
      };
   };

   run() is also {
      // The method "run()" is a predefined method of every struct or
      // unit. This means that even though it is not defined in the code
      // above, it is already defined automatically by E. The "run()"
      // method of all the units in the design is automatically executed
      // after the simulation starts running. In this case it starts the
      // TCM "packet_gen()" that will continue to generate packets until
      // the simulation ends.

      //...

      start packet_gen();
      // A TCM is a separate execution thread that has to be launched. The
      // TCM "packet_gen()" will now work in parallel with other processes
      // in the system until the simulation ends.
   };
};

unit packet_driver_u {

   //...
   drive_packet(packet : packet_s) @p_env.clk_sys is {
      //...
   };
};
'>

A unit is generated at simulation time zero. This is why a unit is called a static element and a struct a dynamic element, more or less like the main window (a static object) and a pop up menu (a dynamic one) in an application. As mentioned in the comments above, in order to instantiate a unit you must declare it under the object sys, which is generated for every new simulation. For example:

// example 5

<'
extend sys {
   chip_env : chip_env_u is instance;
   // Specman support recommends not putting your units directly under
   // sys, but creating another level.
};

unit chip_env_u {
   // other units in the design ...
   packet_driver : packet_driver_u is instance;
};
'>

Now, when we generate the test, the sys object will be generated first, then the object chip_env, an instance of chip_env_u, which is supposed to hold all the other units in the design. Then the unit packet_driver, an instance of packet_driver_u, will be instantiated. After the generation is complete you can run the simulation. Once you do, the run() method of all the units under sys will be called and the packet_gen() TCM will be started. This TCM, in turn, will generate packets on the fly every 10 us until the simulation ends.


'When' Inheritance

E supports two types of inheritance. The first type, using the keyword like, is just like normal inheritance in every object oriented language. The other type, using the keyword when, is inheritance based on the value of a random variable. This is best suited for the generation of packets of different communication protocols. In such packets it is common for the length and content of fields to change considerably according to one field in the header of the packet. For example, we will add to our packet from above a checksum field, which will exist only if the LSB of the field header1 is '1'. In order to do this we will add a boolean non-physical field named with_checksum. When with_checksum is FALSE we will force the LSB of header1 to have a value of '0' and the field checksum will not be added. When with_checksum is TRUE we will force the LSB of the header to have a value of '1' and a field called checksum will be added. This is how you do it:

// example 6

<'
struct packet_s {

   length : uint;
   keep length >= 10 and length <= 20;

   with_checksum : bool;
   // This Boolean variable will be generated randomly. If it is FALSE the
   // packet will not have a field called "checksum" at the end and the
   // LSB of the field "header1" will be 0. If it is TRUE a field called
   // "checksum" will be added to the packet through a WHEN inheritance
   // and the LSB of the field "header1" will be 1.

   %header1 : byte;
   keep header1 != 0;

   keep with_checksum == FALSE => header1[0:0] == 0;
   // If the packet is without checksum the first bit of the field "header1" will be 0,
   keep with_checksum == TRUE => header1[0:0] == 1;
   // otherwise it will be 1.

   %header2 : byte;
   keep header1 < 128 => header2 != 0;

   %data : list of byte;
   keep data.size() == length - 2;

   when TRUE'with_checksum { // When the field "with_checksum" is TRUE
      %checksum : byte;      // an extra field called "checksum" is added to the packet.
      keep checksum == data.xor();
      // The value of the new field is equal to the xor() of all the bytes
      // in the data list. You can look up the xor() method of a list in
      // the e manual on the net.
   };
};
'>

Temporal expressions

Just as the processes in a regular program are triggered by system events, such as user clicks on buttons or windows, or timers, the processes in an E program are triggered by events from the simulator, such as specific signals rising or falling, or a certain simulation time being reached. For example, take another look at the method packet_gen() and the event rdy_to_send from the unit packet_generator_u, shown above and copied below for your convenience:

// example 7

<'
unit packet_generator_u {

   //...
   event rdy_to_send is true('ready_for_pkt' === '1') @clk_sys;

   packet_gen() @rdy_to_send is {
      while (TRUE) {
         var packet : packet_s;
         if sys.time - last_packet_time >= 10 {
            gen packet;
            send(packet);
         } else {
            wait cycle;
         };
      };
   };
};
'>

The event rdy_to_send will be emitted whenever the signal ready_for_pkt, which is a signal in our Verilog or VHDL design, is equal to '1' and the event clk_sys happens (the triple equality sign is there to prevent the expression from evaluating to true when the Verilog or VHDL signal is 'X', 'Z' etc.). The event clk_sys happens whenever the signal clk_sys in our Verilog or VHDL design rises, or goes from 0 to 1, and the event sim happens. The event sim is a predefined Specman event (i.e., the user does not have to define it). This event is emitted whenever the simulator calls a Specman callback function (see the "How does Specman work?" section above). The simulator calls a Specman callback function for every change in a signal that is used by Specman (and usually a lot more often).

It is important to understand that the expression rise(clk_sys) is calculated every time the event sim happens. Since the event sim usually happens a lot more often than a clk_sys rise, this means that we waste a lot of time in unnecessary calculations. For this reason it is recommended to use sim as little as possible. For example, if we know that the signal ready_for_pkt is sampled (i.e., looked at) only on a clk_sys rise, we will evaluate it only at this specific event and not every time that sim happens. Usually, the event sim should be used only for the system clocks. All other events (unless they are not in sync with the clock events, which is rare) should use one of the events defined for the system clocks as their sampling (or evaluation) point. The system clock events are usually defined in the unit at the top of the hierarchy, so that they will be accessible to all of the hierarchy below. For example, as shown above, the event clk_sys is defined in the unit chip_env_u and is accessed through a pointer to this unit from all the other units in the design.
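A back-of-the-envelope Python sketch can show the cost difference. The trace below is invented: each tuple is one simulator callback (one sim event) carrying the current clk and ready_for_pkt values. We count how many times a rise() expression hung on sim is evaluated, versus how often the derived clock event actually fires and ready is sampled:

```python
# (clk, ready) pairs, one per simulator callback, i.e. one per "sim" event
sim_events = [
    (0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (1, 1), (1, 1), (0, 1), (1, 0),
]

evals_at_sim, evals_at_clk, rdy_hits = 0, 0, 0
prev_clk = 0
for clk, ready in sim_events:
    evals_at_sim += 1                   # rise(clk_sys) is computed on every sim event
    if prev_clk == 0 and clk == 1:      # a clk_sys rise: the derived clock event
        evals_at_clk += 1               # ready_for_pkt is only sampled here
        if ready == 1:
            rdy_hits += 1               # rdy_to_send would be emitted here
    prev_clk = clk

print(evals_at_sim, evals_at_clk, rdy_hits)  # → 9 3 2
```

Even in this tiny trace the clock-sampled check runs a third as often as the sim-sampled one; in a real simulation, where sim fires on every signal change, the ratio is far more lopsided.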

Common Features

In this chapter I explore some of the features of E that are available, albeit sometimes in slightly different form, in other object oriented languages.

If you already have some experience with object oriented programming you can skip it and never return. If you are absolute beginners with E, leave it aside and come back later, since the topics discussed are a bit advanced.

'Like' Inheritance

As mentioned before, there are two types of inheritance in E: one using the keyword 'when' and the other using the keyword 'like'. 'When' inheritance, discussed above, is unique to E and is one of its best innovations. 'Like' inheritance is just the E version of the common object oriented inheritance and is not widely used. Its unpopularity owes a lot to Verisity support, which underestimates it and discourages programmers from using it in their code.

The idea of inheritance is pretty simple: it allows you to separate the common features of some objects from the specific features that are unique to each object. For example, there are numerous types of windows in all kinds of geometric shapes, sizes and colors. Some windows have menus and others don't; some demand an immediate user response while others can wait patiently forever. Still, almost all windows have some common features: they all have to do something in response to a mouse click, they all draw themselves on the screen, and almost all of them have close and minimize buttons. Back to inheritance: the features that are common to all windows will be defined in the "base class" (or "base struct" in E), while their specific implementation and other specific features will be defined in the classes that inherit from it. Here is a simple example, written in E, although it is highly improbable that anyone will use E for anything like this in the near future:

// example 8

<'
type size_t : [small, medium, large];
type color_t : [yellow, red, blue];

struct window {
   size : size_t;
   background_color : color_t;

   // All windows must have some response to a mouse click. The specific response: a sound,
   // a change in color etc., will be determined in the specific (inherited) windows. This
   // method is defined as "empty" - the specific windows will provide the appropriate implementation
   when_mouse_is_clicked() is empty;

   // All windows must draw something on the screen. The specific graphics of
   // each window will be determined in the specific (inherited) windows. This
   // method is defined as "empty" and the specific windows will fill in the details
   draw() is empty;
};

struct my_window like window {
   // "size" and "background_color" are also fields of the inheriting struct "my_window"
   // since they are fields of the base struct "window", therefore I can constrain them here.
   keep size == large;
   keep background_color == blue;

   // "when_mouse_is_clicked" is declared in the base struct since every window
   // has to do something when it is clicked. However, what exactly it should do differs
   // from window to window. Mine, for example, sounds a bip, and then shows a pop up message.
   when_mouse_is_clicked() is also {
      sound_a_bip();
      jump_a_pop_up("Come on and milk my cow");
   };

   // same for "draw"
   draw() is also {
      draw_cow();
      draw_milk_bottle();
   };
};

struct her_window like window {
   keep size == small;
   keep background_color == blue;

   when_mouse_is_clicked() is also {
      sound_a_wof();
      jump_a_pop_up("Come on and bite my dog");
   };

   draw() is also {
      draw_dog();
   };
};
'>

As you probably noted, the base class (struct) in this case is just an empty shell that does nothing. Obviously this is not always the case. For example, with real windows, the base class performs some crucial steps for all the windows that inherit from it, like registering the window with the operating system so that the operating system can tell the window when it is clicked. In the case of specific packet structures that inherit from a single base packet, the base packet might calculate the checksum. Still, even when the base class doesn't do much, it has great importance since it keeps the code orderly and makes it more comprehensible to other people. If someone understands how one window works, it will not take him more than ten minutes to understand how another window does. Give him five minutes more and he will even be able to make minor changes in the code. Of course, he wouldn't be able to do that if every window named its basic methods differently and assigned them different functionality.
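As a sketch of that idea (the struct and field names here are my own, for illustration only), a base packet can carry a checksum calculation that all inheriting packets share:

```e
<'
struct base_packet {
   payload : list of byte;

   // common work done once in the base struct: a simple XOR checksum,
   // inherited as-is by every specific packet type
   compute_checksum() : byte is {
      for each (b) in payload {
         result = result ^ b;
      };
   };
};

// a specific packet adds its own fields but reuses compute_checksum()
struct eth_packet like base_packet {
   dest_addr : uint(bits : 48);
};
'>
```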

As I mentioned before, 'like' is not unique to E. Verisity, like any software company, is keen on promoting the special features it provides and therefore consistently dissuades programmers from using 'like'. It is true that most of the things that can be done with 'like' can also be done with 'when'. Still, with 'when' the objects you get depend, after all, on a random value assigned to a determinant field. In those cases where you know the object you need in advance, why should you rely on random generation? For example, say a chip has several fixed interfaces that, although they are different, have a basic set of common features. In this case it is definitely better to use 'like'. Also, using 'when' has some inconveniences, but I will not go into them now.
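For comparison, here is a rough sketch of the 'when' alternative for the window example (the names are illustrative): the subtype hangs on a determinant field, but you can still force it at the declaration instead of leaving it to random generation:

```e
<'
type owner_t : [mine, hers];

struct window {
   owner : owner_t;      // the determinant field
   draw() is empty;

   when mine window {
      draw() is also {
         draw_cow();
      };
   };

   when hers window {
      draw() is also {
         draw_dog();
      };
   };
};

extend sys {
   // the subtype is fixed here by the declaration itself
   w : mine window;
};
'>
```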

One last point: 'like' is quite useful if you would like to give yourself the possibility, some day in the future, of instantly adding some functionality to all your structs without using a global method. You can achieve this by making all structs in your environment inherit from a single global base struct. However, since as far as I know E does not allow multiple 'like' inheritance, this might limit you in other cases.


Encapsulation

Encapsulation was incorporated into E relatively late (version 4.1). Simply put, it is a way to separate a struct into an interface and an implementation (or core). The interface, including shared struct members like high level methods or important fields, stays fixed, and other programmers in the team know they can use it without fear of sudden changes. The implementation, on the other hand, is dynamic, and the programmer is free to do in it as he likes: change method names and their parameters, add struct members or remove them without warning, and so on. The implementation is accessible only to the programmer who owns the struct. Other programmers can't access it, which is better for them since, as just said, it might be subject to sudden changes. An exact parallel from the so called hardware world is a Verilog or VHDL block, where the inputs and outputs are fixed and defined in the specification, but the implementation is totally up to the owner of the block and can undergo dramatic changes.

How does encapsulation prevent other programmers from using anything but what they are supposed to, namely the interface? Quite simply: the definitions of all the methods and struct fields that belong to the implementation, where no one except the owner of the struct is expected to shove his nose, are prefixed with the word "private", "protected" or "package" (the difference will be explained soon). If someone insists on accessing these fields from methods in other structs, the code will simply not compile (see the exceptions below). If a field is not declared to have limited access using "private" or "protected", it is public: it belongs to the interface and is accessible to everyone. Thinking about it, it would probably have been better if things were the other way around, i.e. if the default was "private" or "protected" and the interface methods or fields were prefixed with "public". I have two reasons for that: public struct members are smaller in number, and since they are important, it would be nice to mark them with a prefix. Anyway, that's life. To show how simple it is, here is an example of a bus state machine:

// example 9

<'
type bus_state_t : [busy, idle, waiting, error];

struct bus {
   // These are the interface or 'public' methods and fields. They are not prefixed
   // with anything since the 'public' access level is the default

   // "reset" can be called from other structs like the data driver or the CPU model.
   reset()@clk_sys is {
      'reset_bus' = 0;     // drive '0' to the reset pin 'reset_bus'
      wait [3] * cycle;    // wait 3 'clk_sys' cycles for reset to finish
      state = idle;
      // "state" is a private struct member (see below). Therefore other users can not access it directly.
   };

   get_state() : bus_state_t is {
      // To get the current state, users in other structs must call "get_state()",
      // since "state" is private and they can not access it directly.
      return state;
   };

   send_data(l : list of bit)@clk_sys is {
      // send_data calls some private methods that take care of actually sending the data.
      if check_data(l) {
         signal_data_start();
         drive_data(l);
         signal_data_end();
      };
   };

   // Below are the private methods and fields. Nobody is supposed to use them except the owner of the struct.

   private state : bus_state_t;
   // other structs can change "state" or see its value only through "reset()" or "get_state()"
   keep state == idle;

   // All the methods below are hidden from other users. They can not use them or they will get a compilation error.

   private check_data(l : list of bit) : bool is {
      //...
   };

   private signal_data_start() is {
      //...
   };

   private drive_data(l : list of bit) is {
      //...
   };

   private signal_data_end() is {
      //...
   };
};
'>

Diving a bit deeper into the nuances, there are some special cases in which structs can access the limited access fields of another struct. The "package", "protected" and "private" access modifiers mentioned above simply define different groups of structs that are allowed to access the limited access fields. Fields that are "protected" can be accessed by all structs from the same "struct family", that is, by all structs that are related to each other through inheritance. This means that if you use 'like' or 'when' to extend a struct, the son (the new struct you have just created) can see all of his father's (the base struct's) "protected" fields and can use his father's "protected" methods.

Sometimes you would like to limit the access to a specific field to a group of structs that are not necessarily related to each other through inheritance. This is what packages are for. If you define several structs as belonging to the same package (see E language LRM chapter 26 for more details) they and only they will be permitted to see each other's "package" prefixed fields.
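As a rough sketch (the package and struct names here are invented, and the exact declaration details are in the LRM chapter mentioned above), a file is assigned to a package with a package statement, and fields are then marked with the "package" modifier:

```e
<'
package bus_pkg;   // every struct declared in this file belongs to bus_pkg

struct bus_monitor {
   // visible only to code that belongs to bus_pkg
   package last_state : bus_state_t;
};
'>
```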

The "private" access modifier is a "logical and" of "protected" and "package". This means that "private" fields are accessible only to structs in the same struct family and in the same package. "private", "package" and "protected" can be used to create several layers of access permissions. For example, a programmer could build his code in three layers – a "public" layer, a "package" layer and a "private" layer. The "public" layer should be used by people who are working on totally different things, maybe even in another team, and that have only a very limited and superficial knowledge about the way his struct works. Then there is the "package" layer, which other people who work in the same team, on similar things, and have better knowledge of his struct can access. Finally there is the "private" part, which no one should touch since it is sensible or changes quite often.

It must be said that the limited access options in E are somewhat relaxed. None of the three options limits access to a single specific struct. Even "private", which is the most severe, allows access to structs from the same struct family and the same package.

A Specman Testbench

This chapter describes the basic division of a Specman testbench into functional blocks.

Diagram of a basic testbench

The above figure shows the main logical parts of a testbench (by logical I mean that they do not necessarily correspond to the actual files or units in your verification testbench).

The Generator and the Test

The Generator is supposed to be able to generate all the possible correct input data for your chip and also some interesting erroneous data structures. For example, if your chip is supposed to handle Ethernet packets, your generator should be able to generate Ethernet packets of every legal length (64 bytes to 1500 bytes) and every legal combination of data (as mentioned before, the fields in a packet are usually related to each other in some way, so your generator must make sure that these rules are respected). Also, your generator should be able to generate some possible erroneous data: for example, Ethernet packets that are smaller or larger than the legal size, or that contain a field with corrupted data. While testing your chip with problematic data you should always have the protocol standard and the specification of your own chip within hand's reach. Otherwise, you might find yourself coping with some totally imaginary exotic errors that no chip will ever be able to handle. In such cases you should limit yourself to making sure that the system does not crash.

If your generator is well thought out, it will be able to generate, as mentioned above, every possible kind of data. However, normally you would like to restrict your data to specific cases: either because your Verilog or VHDL code is not finished yet and you would like to check only the completed parts of your design, or because you would like to direct your testbench to a specific interesting case whose chances of happening naturally are slim, or because a customer reports a bug and you would like to reconstruct it in your verification environment. Whatever the reason, this is the role of the Test. The test contains additional constraints (using extensions, as shown above), whose purpose is to direct the generation to a specific area. After a while you might find yourself with quite a lot of different tests, some of which are quite useless since they fail to find any bugs. Coverage might help you select the best tests, but this is only a limited solution, since it is based on self defined criteria and not on the number of actual bugs found (more below).
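A typical test file is then just a thin layer of constraints on top of the environment. A hedged sketch (the file, struct and field names are hypothetical):

```e
<'
import eth_env_top;   // pulls in the whole verification environment

// the test only narrows down the generation
extend eth_packet {
   keep len in [64..128];          // restrict to short packets
   keep soft is_corrupted == TRUE; // hypothetical field controlling error injection
};
'>
```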

The Driver

After the data is generated it should be injected into the DUT. This is the task of the Driver. The driver gets the high level data, for example a complete Ethernet packet, and converts it into low level data, which is usually a list of bits, bytes etc. In other words, the driver is responsible for implementing the physical level protocol (request to send, wait two clock cycles, acknowledge, and all this hardware bullshit; if you have an electrical engineer on your team, this is the job you should assign to him). In order to transform the high level data (packets) into low level data (a list of bits) E provides you with a method called pack() (which is very similar to serialize in C++). This method, innocent to the unsuspecting eye, is the second cause of suicides among E beginners, the first being the generation debugger. Do not be tempted to combine the generator with the driver. Verisity support engineers insist that this dividing line should be kept, and in this case they are definitely right: the physical level interfaces of an ASIC can change very often (widespread changes include changes in pin names, removing or adding pins, turning single purpose pins into multifunctional pins etc.), while the high level data formats are more or less fixed. For obvious reasons of code reuse, you do not want to touch one of the most sensitive parts of your E code (the generation) whenever the physical interface changes.
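A minimal sketch of a driver using pack() might look like this (the unit, struct and signal names are assumptions, not from any real environment):

```e
<'
extend driver_u {
   drive_packet(p : eth_packet)@clk_sys is {
      // pack() flattens the high level struct into a list of bits
      var bits : list of bit = pack(packing.low, p);

      // the physical protocol: one bit per clock on a hypothetical pin
      for each (b) in bits {
         'top.data_in' = b;
         wait cycle;
      };
   };
};
'>
```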

The Collector

The Collector, at the other side of the DUT (i.e. your Verilog or VHDL code), is the inverse of the driver. Here you use the method unpack() to turn the low level data into high level structures, while losing whatever is left of your sanity. Everything else is more or less the same as the driver.
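The mirror-image sketch for the collector, using unpack() (again, all names are hypothetical):

```e
<'
extend collector_u {
   collect_packet(bits : list of bit) is {
      var p : eth_packet = new;
      // unpack() rebuilds the high level struct from the raw bits
      unpack(packing.low, bits, p);
      scoreboard.check_received(p);   // hand the packet over for checking
   };
};
'>
```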

The Scoreboard

Once the data is arranged into high level structures, it is handed over to the Scoreboard, which is the most complicated part of every testbench, even a Verilog or a C one. The main role of the scoreboard is of course to check that the output data is correct, and that it came out on time and as expected. Unlike random generation, where one can immediately see the advantages of E over directed verification methods, in the parts where the data is checked E is more or less just like its predecessors, which explains why Verisity salesmen will always pick a generator for their demonstrations.

Since E definitely does not revolutionize the world of data checking, the methods for building a scoreboard with E are similar to those used before. The best way to go is to have separate design and verification teams whose only medium of communication is a specification that was written by someone else. Writing this specification is the most difficult part, and once it is written, it should be clear that requests to change it (usually from the design team) are not welcome. Working like this guarantees that your scoreboard will not turn into a carbon copy of the chip/block and therefore eliminates the risk of having the same bug in both the scoreboard and the chip/block. Also, cloning a block in a scoreboard is a gruesome task, which you should do your best to avoid. However, so far I have not seen this model in real life, usually because the specification is not good enough, goes into too much detail, or was written by the design team (!! - a stupid mistake that makes the specification totally useless except for documentation).

Sometimes the nature of your design allows you to make important shortcuts. For example, communication chips are normally two-way, transmitting and receiving data at the same time. In this case, you could use the transmitter to check the receiver, and hence simplify the scoreboard considerably, since it will only have to make sure that the transmitted data came back. Another way of making things easier is to use some kind of pseudo random data instead of totally random data, for example incremental data in an Ethernet packet. When using random data, you have to keep a record of all the data that was transmitted in order to check it at the other end. Incremental data, or any other foreknown sequence, makes the scoreboard totally independent of the generator. In other words, while random data usually means that the generator inserts data into some shared database and the scoreboard takes it out, with pseudo random data this is unnecessary. Usually, you build your generator in such a way that it can generate either random or pseudo random data, and use a boolean variable with a constraint in your test file to control the generation. Finally, for better reuse, sanity checks on data should not be located in the scoreboard, but instead be included as methods of the data items themselves. For example, checking that the number of data bytes in a packet corresponds to the value in the length field should be done by a method of the packet struct and not by the scoreboard.
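Here is a bare-bones sketch of such a scoreboard, assuming an in-order protocol (the struct and method names are my own):

```e
<'
extend scoreboard_u {
   expected : list of eth_packet;

   // called for every item injected into the DUT
   add_expected(p : eth_packet) is {
      expected.add(p);
   };

   // called by the output collector for every item that leaves the DUT
   check_received(p : eth_packet) is {
      check that not expected.is_empty() else
         dut_error("unexpected packet at the output");
      // in-order assumption: compare against the oldest pending item
      var exp : eth_packet = expected.pop0();
      check that deep_compare(exp, p, 10).is_empty() else
         dut_error("packet data mismatch");
   };
};
'>
```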

Most projects proceed like this: the chip is divided into blocks, the blocks are coded in Verilog or VHDL, tested separately using Specman, then integrated and tested end to end. Therefore, most major blocks will have a dedicated testbench (including all the elements shown in the figure above). Even after integration, these dedicated testbenches retain an important role (so keep them updated and run them every once in a while). First, when a bug is detected in a specific block during integration, it might be a lot faster to get to the bottom of it and fix it using the dedicated environment. Second, the scoreboards and collectors of all blocks might be used in the end to end tests, alongside an end to end scoreboard. True, simulation with several scoreboards might go a bit slower, but this pays off because most bugs will be discovered earlier and will be a lot easier to debug. In some cases, when an end to end scoreboard is too complex to build, a chain of scoreboards on internal blocks is a reasonable replacement, despite its obvious shortcomings.

Simulation End Control

The block named Simulation End Control in the diagram above is an example of Verisity's neglect in addressing methodological issues (see more below). While the other blocks in the diagram (generator, driver, collector, scoreboard, coverage) are all part of Verisity's basic methodology, this block is my own humble contribution. It is supposed to answer a simple question: when should the simulation end? A famous rule says that every simulation, without exception, has to end at one point or another. Which of the five parts in Verisity's scheme (generator, driver, collector, scoreboard, coverage) is responsible for this task? None. The result of this omission is that the end of the simulation can come from almost everywhere, and sometimes it indeed does: a limit on the number of generated items in the generator, a time limit in the simulator, a limit on the elements checked by a scoreboard, or any of the scoreboards reporting an error. It is enough to call the method dut_error() or stop_run() from anywhere in the code to stop the simulation. To me it seems to make more sense to have one piece of code that decides when a test should end, since this makes it much easier to change the criteria dynamically. This is especially true when you have several scoreboards operating in a single environment and you would like to ignore the errors originating in one of them (which means you should try to avoid using the method dut_error()). Or you would like to end the test according to an algorithm which takes into account the number of data items checked by the scoreboard, the simulation time, and maybe the actual duration of the run. Also, controlling the end of the simulation from a centralized location helps to integrate code from different writers.
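A sketch of such a centralized block (the unit and method names are my own invention, in the spirit of the diagram above):

```e
<'
extend sim_end_ctrl_u {
   items_checked : uint;
   max_items     : uint;
   keep soft max_items == 1000;   // the test file can override this

   // every scoreboard reports here instead of stopping the run by itself
   item_checked() is {
      items_checked += 1;
      if items_checked >= max_items {
         // the one and only place that decides when the test ends
         stop_run();
      };
   };
};
'>
```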

Bear in mind that a bug discovered after several hours of simulation is not a useful bug, since trying to fix it would cost too much time. It's not enough to find a bug; you also have to find it in the shortest verification time possible. So, controlling the end of a simulation is in fact equivalent to specifying the maximal amount of time you are ready to "pay" for a bug.


Coverage

The last block in our testbench is responsible for Coverage. There are two main types of coverage. Input coverage checks whether the generator is in fact generating all the types of data that one expects it to generate; for example, whether an E model of an 8086 program RAM is generating all the available assembler commands. Input coverage should be taken seriously because of a curious behavior of Specman: when its generator fails to generate a certain combination, due to a real or imagined contradiction in your constraints, it will sometimes just generate an easier type of data without bothering to share this crucial information with you. Output coverage is there to tell you whether you have in fact tested all the parts that you wanted to test in your design. In other words, when the output coverage is 100% you can send your design to the ASIC vendor and start packing for a vacation in Hawaii. The output coverage should be based on a detailed document, written after the specification by the same writer, then passed over to the design engineer, who should look as best as he can for the weakest spots in his design and suggest how they are to be tested. There might be situations that seem to need testing and yet are very difficult to detect from outside the block or the chip. A good example of this is state machines, where you should check the behavior of your code in all states, but these states are hardly visible from the outside. Therefore, output coverage is usually collected both from the outside, using the scoreboard to identify specific interesting situations, and from the inside, either by spying on internal signals, such as state machine registers, or, in an end to end simulation, by relying on internal scoreboards.
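For example, a minimal coverage group on the bus state machine from example 9 might be sketched as follows (the event here is hypothetical; in a real environment it would be emitted by a monitor watching the state machine):

```e
<'
extend bus {
   event state_changed;   // assumed to be emitted on every state transition

   cover state_changed is {
      // one bucket per bus_state_t value; a hole means an untested state
      item state : bus_state_t = state;
   };
};
'>
```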

A few words on methodology

In conclusion to this section, a few somewhat abstract words on methodology might be in order. Verisity offers one way of building a testbench: all testbenches should include a generator, a driver, a collector, a scoreboard and coverage (as mentioned above, the block called Simulation End Control is my own invention and is not part of Verisity's methodology). As far as Verisity is concerned, no matter whether you are designing an 8086 or a Pentium 5, your testbench will look just the same except for the scale. Whether building a CPU core or an Ethernet MAC, whether at the very start of your project or already approaching the end, your testbench will be more or less identical. In software projects, choosing the right methodology is one of the weightiest decisions, and its implications are quite well felt, especially when the project undergoes changes (such as a database or algorithm replacement). It is my impression that Verisity, in a smart move, prefers to distract the attention of its potential clients from the real burning issues of methodology by presenting them with a simple magic solution, whose shortcomings they will discover only after they are already deeply involved with Specman and E. I have already given one example of this above with the Simulation End Control block. What follows are a few other examples and a lesson to be learned from all of them.

First, the methodology Verisity offers does not address the natural development of a project: what should one do with the dedicated verification environments that were written at the beginning, as the project advances? This is one of the most painful questions since in a large project you might have as many as 20 separate environments or more, each with its own generator and scoreboard. As mentioned above, these environments can be helpful, but this depends very much on how they are written. For example, if in a dedicated environment the scoreboard relies on the generator, it will be useless in an end to end environment from which all dedicated generators are removed. Another example: scoreboards comprise at least 60% of the E code you will write, but apart from the very general notion that your testbench should include a scoreboard, Verisity offers almost no clues as to how scoreboards should be built or how they should be adapted to common types of DUTs (CPUs, communication chips etc.). Or yet another example: most dedicated generators have some parts in common with other generators, but Verisity's methodology, which is suited only to the verification of a single block, does not draw a line between the general parts and the specific parts of a generator. Of course, you might say that anyone who is not clever enough to draw such a line by himself deserves some extra work. This, however, only reinforces my own recommendation: give serious thought to methodology and structure, and avoid taking Verisity's somewhat simplistic schemes as given. You will be surprised to find out that some people, in search of a magic solution, do.
