FPGA Central - Verilog comp.lang.verilog newsgroup / usenet
#1 | 02-10-2005, 02:13 PM | DW
nets/variables inside functions

Hello,
could someone explain why it is that variables are allowed to be declared
within functions whereas nets are not? This seems counter-intuitive to me
at the moment, as I thought that functions were reserved for simple
logical/mathematical combinatorial functions, and I can't see how declaring
a wire would harm this. On the other hand, to declare a variable seems to be
wrong as procedural code isn't allowed in functions. I see functions as a
way to encapsulate combinatorial statements not procedural statements.

Where am I going wrong in my reasoning?


#2 | 02-10-2005, 03:28 PM | Jonathan Bromley
Re: nets/variables inside functions

On Thu, 10 Feb 2005 14:13:20 -0000, "DW" <[email protected]>
wrote:


> [snip]


Here:

>... procedural code isn't allowed in functions ...


What a strange notion! Where did you get that one?

What you CAN'T do in a function is anything that would consume time,
such as an event wait with @ or a time delay with #.

A function is a home for a single procedural statement...

function <return_type> <function_name>;
  input <any inputs, but no outputs or inouts>;
  reg <any local variable declarations>;

  <any single procedural statement>

endfunction

In practice, of course, the "single procedural statement" is
almost invariably a string of statements enclosed in begin...end.

Since it's procedural code, it could not possibly write to a net.
Anyway, nets can be declared only at the outermost level of
a module, so there can be no net declarations in a function. However,
the procedural code might very well need some local variables
to do its work.
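
To make that concrete, here's a quick (untested) sketch with made-up names;
the commented-out wire shows what you can't do:

function [7:0] lowNibblePlusOne;
  input [7:0] x;
  // wire [7:0] t = x & 8'h0F;   // illegal: no net declarations in a function
  reg [7:0] t;                   // fine: a local variable
  begin
    t = x & 8'h0F;
    lowNibblePlusOne = t + 1;
  end
endfunction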

You also say something else that rings warning bells:

> I see functions as a way to encapsulate combinatorial
> statements not procedural statements.


I don't understand what a "combinatorial statement" might
be, nor do I understand how it might differ from a
"procedural statement". But I can guess that by "combinatorial
statement" you mean a continuous assignment, introduced using
the assign keyword at the outermost level of a module.
Such a statement is most definitely NOT allowed inside a
function; indeed, if you were to write one inside a function
it would be in the context of procedural code, and it would
mean something quite different.

Functions are a great way to encapsulate the description of
a piece of combinational logic that cannot easily be described
in a single Verilog expression - a good example is a function
to count the 1 bits in a vector...

function [3:0] countOnes; // return value in range 0..8
  input [7:0] V;
  reg [3:0] result;
  integer i;
  begin
    result = 0;
    for (i = 0; i < 8; i = i + 1)
      result = result + V[i];
    countOnes = result;
  end
endfunction

Here I've declared a local integer variable 'i' to scan the
vector, and a local reg to hold the result. (You could also
use the function name as that local, but it's sometimes a bit
confusing to do that.) Now I've created the function, I am
of course free to use it not only in a continuous assignment...

assign SomeNet = countOnes(SomeByte);

but also in procedural code:

reg [7:0] A, B, C;
...
always @(A or B) begin
  if (countOnes(B) > 0)
    C = countOnes(A);
  else
    C = 8'hFF;
end

HTH
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223 mail:[email protected]
Fax: +44 (0)1425 471573 Web: http://www.doulos.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.
#3 | 02-10-2005, 03:52 PM | DW
Re: nets/variables inside functions


"Jonathan Bromley" <[email protected]> wrote in message
news:[email protected]
> On Thu, 10 Feb 2005 14:13:20 -0000, "DW" <[email protected]>
> wrote:
>
>
> >could someone explain why it is that variables are allowed to be declared
> >within functions whereas nets are not. This seems counter-intuitive to

me
> >at the moment as I thought that functions were reserved for simple
> >logical/mathematical combinatorial functions and I can't see how

declaring a
> >wire would harm this. On the other hand, to declare a variable seems to

be
> >wrong as procedural code isn't allowed in functions. I see functions as

a
> >way to encapsulate combinatorial statements not procedural statements.
> >
> >Where am I going wrong in my reasoning?

>
> Here:
>
> >... procedural code isn't allowed in functions ...

>
> What a strange notion! Where did you get that one?
>
> What you CAN'T do in a function is anything that would consume time,
> such as an event wait with @ or a time delay with #.
>
> A function is a home for a single procedural statement...
>
> function <return_type> <function_name>
> input <any inputs, but no outs or inouts>;
> reg <any local variable declarations>
>
> <any single procedural statement>
>
> endfunction
>
> In practice, of course, the "single procedural statement" is
> almost invariably a string of statements enclosed in begin...end.
>
> Since it's procedural code, it could not possibly write to a net.
> Anyways, nets can be declared only at the outermost level of
> a module. So, no net declarations in the function. However,
> the procedural code might very well need some local variables
> to do its work.
>
> You also say something else that rings warning bells:
>
> > I see functions as a way to encapsulate combinatorial
> > statements not procedural statements.

>
> I don't understand what a "combinatorial statement" might
> be, nor do I understand how it might differ from a
> "procedural statement". But I can guess that by "combinatorial
> statement" you mean a continuous assignment, introduced using
> the assign keyword at the outermost level of a module.
> Such a statement is most definitely NOT allowed inside a
> function; indeed, if you were to write one inside a function
> it would be in the context of procedural code, and it would
> mean something quite different.
>
> Functions are a great way to encapsulate the description of
> a piece of combinational logic that cannot easily be described
> in a single Verilog expression - a good example is a function
> to count the 1 bits in a vector...
>
> function [3:0] countOnes; // return value in range 0..8
> input [7:0] V;
> reg [3:0] result;
> integer i;
> begin
> result = 0;
> for (i=0; i<8; i=i+1)
> result = result + V[i];
> countOnes = result;
> end
> endfunction
>
> Here I've declared a local integer variable 'i' to scan the
> vector, and a local reg to hold the result. (You could also
> use the function name as that local, but it's sometimes a bit
> confusing to do that.) Now I've created the function, I am
> of course free to use it not only in a continuous assignment...
>
> assign SomeNet = countOnes(SomeByte);
>
> but also in procedural code:
>
> reg [7:0] A, B, C;
> ...
> always @(A or B) begin
> if (countOnes(B) > 0)
> C = countOnes(A);
> else
> C = 8'hFF;
> end
>
> HTH
> --
> Jonathan Bromley, Consultant
>
> DOULOS - Developing Design Know-how
> VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services
>
> Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
> Tel: +44 (0)1425 471223 mail:[email protected]
> Fax: +44 (0)1425 471573 Web: http://www.doulos.com
>
> The contents of this message may contain personal views which
> are not the views of Doulos Ltd., unless specifically stated.


Yes, this helps a lot. I think I was confusing the term "procedural" with
an altogether different notion. Thanks.


#4 | 02-10-2005, 06:42 PM | Guest
Re: nets/variables inside functions


DW wrote:

> On the other hand, to declare a variable seems to be
> wrong as procedural code isn't allowed in functions.


A function doesn't allow anything *but* procedural code.
That is why you can't declare nets in functions, because
procedural code can't write to nets.

> I see functions as a
> way to encapsulate combinatorial statements
> not procedural statements.


This may be a matter of confusion on terminology, but
I would have described this exactly the opposite way.

Functions contain procedural statements. You can call
them from a continuous assignment, which is probably
what you would call a combinatorial statement. So a
function could be viewed as a way of encapsulating
procedural statements so that they can be used from
a combinatorial statement. They are sort of a way of
computing combinatorial results in a procedural way.
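
For example, something along these lines (an untested sketch, names made up):

function mux2;                  // returns a single bit
  input sel, a, b;
  begin
    if (sel)                    // procedural statements inside the function...
      mux2 = b;
    else
      mux2 = a;
  end
endfunction

assign y = mux2(sel, a, b);     // ...used from a combinatorial statement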

#5 | 02-11-2005, 09:32 AM | DW
Re: nets/variables inside functions


<[email protected]> wrote in message
news:[email protected] ups.com...
>
> DW wrote:
>
> > On the other hand, to declare a variable seems to be
> > wrong as procedural code isn't allowed in functions.

>
> A function doesn't allow anything *but* procedural code.
> That is why you can't declare nets in functions, because
> procedural code can't write to nets.
>
> > I see functions as a
> > way to encapsulate combinatorial statements
> > not procedural statements.

>
> This may be a matter of confusion on terminology, but
> I would have described this exactly the opposite way.
>
> Functions contain procedural statements. You can call
> them from a continuous assignment, which is probably
> what you would call a combinatorial statement. So a
> function could be viewed as a way of encapsulating
> procedural statements so that they can be used from
> a combinatorial statement. They are sort of a way of
> computing combinatorial results in a procedural way.
>

So if a function uses variables, and that function is then used in a
continuous assignment, how is this synthesized? My picture of a variable is
a register requiring a clock to transfer information from its D input to its
Q output so it has storage and an inherent time delay, so if a continuous
assignment uses a function using variables, how can it be combinatorial?

I understand that I haven't grasped something here yet so I'm not
challenging your response - please bear with me...

Thanks DW


#6 | 02-11-2005, 09:18 PM | Guest
Re: nets/variables inside functions


DW wrote:
>
> So if a function uses variables, and that function is then used in a
> continuous assignment, how is this synthesized?


A "pure" function (one which computes an output based solely on
its inputs, and has no side-effects) can always be synthesized
as combinational logic when used in a continuous assignment.
All it does is produce an output that is a function of its inputs,
just like combinational logic. It is irrelevant what algorithm,
coding style or language constructs were used, they are ultimately
just a way of expressing a truth table. A synthesis tool may have
to do a lot of work to make that translation, and may not be able
to handle all constructs, but in theory it is always possible.
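
For instance, a function like this hypothetical one (untested) is, however
it is coded, just a 16-entry truth table:

function [1:0] highBit;         // position of the highest set bit of a 4-bit value
  input [3:0] v;
  begin
    casex (v)
      4'b1xxx: highBit = 2'd3;
      4'b01xx: highBit = 2'd2;
      4'b001x: highBit = 2'd1;
      default: highBit = 2'd0;
    endcase
  end
endfunction

assign pos = highBit(req);      // synthesizes to combinational logic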

> My picture of a variable is
> a register requiring a clock to transfer information from its D input
> to its Q output so it has storage and an inherent time delay, so if a
> continuous assignment uses a function using variables, how can it be
> combinatorial?

Clearly variables were originally designed with that concept
in mind, since they were originally called "regs". However,
many uses of variables do not fall into that category. Many
uses of a variable do not have an associated clock controlling
when data is transferred into them. Many modeling styles use
them in ways that are clearly not representing registers. And
with the advent of synthesis, there are now tools that actually
have to decide when a variable was intended to represent a
register and when it is something else.

Consider a continuous assignment

assign out = (a & b) | c;

Here there is an intermediate value (a & b) that has no name,
but represents the net that is the output of an AND gate. It
could have been written explicitly as

assign tmp = a & b;
assign out = tmp | c;

The same thing could be represented with

assign out = andor(a, b, c);

function andor (input a, b, c);
  begin
    andor = (a & b) | c;
  end
endfunction

Or we could rewrite the function as

function andor (input a, b, c);
  reg tmp;
  begin
    tmp = a & b;
    andor = tmp | c;
  end
endfunction

Here the variable tmp is not being clocked, and doesn't
need to represent an actual storage element. It just
symbolically represents the partial value we computed
and will use slightly later in the computation. It is
just like the explicit intermediate net tmp used above.
A synthesis tool will produce an intermediate net for
the variable tmp, just as it would for the implicit
intermediate value of (a & b) when used in (a & b) | c.

When considered as a programming language (as by a
simulator), there is a single variable tmp that is
used by all the callers of function andor. When a
synthesis tool tries to produce logic that gives
approximately the specified functionality, it will
produce a separate copy of the combinational logic
(an AND gate and an OR gate) for each call to andor,
each with its own net representing tmp.
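
So if the function were called twice, say (with hypothetical signal names):

assign out1 = andor(a1, b1, c1);   // one AND/OR pair, with its own net for tmp
assign out2 = andor(a2, b2, c2);   // a second, independent copy of that logic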

You could also model the same combinational logic
with purely procedural code:

always @(a or b or c)
begin
  tmp = a & b;
  out = tmp | c;
end

This is generally called a combinational always block.
When a synthesis tool recognizes a certain pattern of
procedural code, it knows that it is probably intended
to represent a piece of combinational logic that performs
approximately the same functionality. Both tmp and
out must be declared as regs so that they can be written
to by this procedural code, but the synthesis tool will
not create a hardware register for either of them.

For a more extreme case, consider a variable used as a
for-loop index. If there are no event or delay controls
in the loop, then the entire loop happens in zero time.
The loop variable takes on a sequence of values with no
actual simulation time passing between them. This doesn't
represent anything real in hardware, certainly not a
hardware register. It is just a convenient algorithmic
way of representing a set of operations to be performed.
A synthesis tool will try to handle this by unrolling
the loop, after which the loop index variable ceases
to exist, replaced by a constant value in each unrolled
version of the loop body.
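
As a rough (untested) sketch, assuming V is a 4-bit input:

reg [2:0] sum;
integer i;

always @(V)
begin
  sum = 0;
  for (i = 0; i < 4; i = i + 1)
    sum = sum + V[i];
end

// After unrolling, the loop body is just repeated with constant indices:
//   sum = 0;
//   sum = sum + V[0];
//   sum = sum + V[1];
//   sum = sum + V[2];
//   sum = sum + V[3];
// and 'i' no longer corresponds to anything in the resulting logic.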

#7 | 02-14-2005, 09:35 AM | DW
Re: nets/variables inside functions

<[email protected]> wrote in message
news:[email protected] ups.com...
>...
> When considered as a programming language (as by a
> simulator), there is a single variable tmp that is
> used by all the callers of function andor. When a
> synthesis tool tries to produce logic that gives
> approximately the specified functionality, it will
> produce a separate copy of the combinational logic
> (an AND gate and an OR gate) for each call to andor,
> each with its own net representing tmp.
>...

Thanks, I appreciate you taking the time to provide such a detailed reply.
In the above extract, when you refer to "a single variable tmp that is
used by all the callers of function andor", I take it you mean that a
simulator would treat tmp much like a 'C' automatic variable (rather than a
static).


#8 | 02-14-2005, 07:46 PM | Guest
Re: nets/variables inside functions


DW wrote:

> In the above extract, when you refer to "a single variable tmp that is
> used by all the callers of function andor", I take it you mean that a
> simulator would treat tmp much like a 'C' automatic variable (rather
> than a static).


No. Local variables in functions are static in Verilog, like any
other variable. All the callers are using the same variable with
the same storage (though variables in functions in different
instances of the module are distinct, as with any other variable
in different instances of a module). This means that functions
in Verilog are not re-entrant, which means that recursive functions
will not work as desired. This is not generally a problem in
hardware modeling.

Verilog-2001 added the capability to declare functions and tasks
to be "automatic", which makes them re-entrant by making their
arguments and variables automatic. This allows functions to be
recursive (and also helps catch functions where some paths read
a variable that they have not assigned to yet).
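
For example, something like this (an untested Verilog-2001 sketch):

function automatic [3:0] clog2;   // ceiling of log2 of a 16-bit value
  input [15:0] value;
  begin
    if (value <= 1)
      clog2 = 0;
    else
      clog2 = 1 + clog2((value + 1) / 2);
  end
endfunction

Without the automatic keyword, the recursive call would reuse the single
static copy of 'value' and of the return variable, which is exactly the
non-re-entrant behavior described above.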

Since synthesis tools create copies of the function variables
for each call, they end up treating static functions as if they
were automatic ones. This is one of the many ways that the
results of synthesis may not exactly match the original design
(though in this case, they probably match the intent).

#9 | 02-15-2005, 09:50 AM | DW
Re: nets/variables inside functions


<[email protected]> wrote in message
news:[email protected] oups.com...
>
> DW wrote:
>
> > In the above extract, when you refer to "a single variable tmp that

> is
> > used by all the callers of function andor", I take it you mean that a
> > simulator would treat tmp much like a 'C' automatic variable (rather

> than a
> > static).

>
> No. Local variables in functions are static in Verilog, like any
> other variable. All the callers are using the same variable with
> the same storage (though variables in functions in different
> instances of the module are distinct, as with any other variable
> in different instances of a module). This means that functions
> in Verilog are not re-entrant, which means that recursive functions
> will not work as desired. This is not generally a problem in
> hardware modeling.
>
> Verilog-2001 added the capability to declare functions and tasks
> to be "automatic", which makes them re-entrant by making their
> arguments and variables automatic. This allows functions to be
> recursive (and also helps catch functions where some paths read
> a variable that they have not assigned to yet).
>
> Since synthesis tools create copies of the function variables
> for each call, they end up treating static functions as if they
> were automatic ones. This is one of the many ways that the
> results of synthesis may not exactly match the original design
> (though in this case, they probably match the intent).
>

If we took your previous example of the function "andor", I would expect the
variable "tmp" to be treated as automatic. I could not imagine why a
synthesis tool would treat it in any other way. If the function were called
in two completely separate areas of a module, then, obeying the Verilog rule
of a function variable being static, wouldn't this result in a mish-mash
of the intended logic?

It looks as if this is a bit of a grey area.


#10 | 02-15-2005, 06:00 PM | Stephen Williams
Re: nets/variables inside functions

DW wrote:
> [snip]
> It looks as if this is a bit of a grey area.


It's not gray, Verilog is not C, and Steve knows exactly what he
is talking about. Local (in scope) variables in Verilog functions
are *static* as C programmers understand the meaning of static.
It turns out to not lead to a mish-mash in the specific case that
you mentioned, because a function call once started runs to
completion. But try to recurse the function, and you will indeed
get a mish-mash.

--
Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
#11 | 02-16-2005, 01:21 PM | DW
Re: nets/variables inside functions


"Stephen Williams" <[email protected]> wrote in message
news:[email protected] ervers.com...
> DW wrote:
>
> > <[email protected]> wrote in message
> >>No. Local variables in functions are static in Verilog, like any
> >>other variable. All the callers are using the same variable with
> >>the same storage (though variables in functions in different
> >>instances of the module are distinct, as with any other variable
> >>in different instances of a module). This means that functions
> >>in Verilog are not re-entrant, which means that recursive functions
> >>will not work as desired. This is not generally a problem in
> >>hardware modeling.
> >>

>
> >
> > If we took your previous example of the function "andor", I would expect

the
> > variable "tmp" to be treated as automatic. I could not imagine why a
> > synthesis tool would treat it in any other way. If the function were

called
> > in two completely separate areas of a module, and obeying the Verilog

rule
> > of a function variable being static then wouldn't this result in a

mish-mash
> > of the intended logic?
> >
> > It looks as if this a bit of a grey area.

>
> It's not gray, Verilog is not C, and Steve knows exactly what he
> is talking about. Local (in scope) variables in Verilog functions
> are *static* as C programmers understand the meaning of static.
> It turns out to not lead to a mish-mash in the specific case that
> you mentioned, because a function call once started runs to
> completion. But try to recurse the function, and you will indeed
> get a mish-mash.
>

Well, firstly, I don't think I have suggested that Steve doesn't know
"exactly what he is talking about", and secondly, yes, I know Verilog isn't
C, but it is useful to use terms from the 'C' language since almost everyone
understands them. I recognise that this can be a little misleading at times,
but what the hell.

However, I was referring to the following :

"Since synthesis tools create copies of the function variables
for each call, they end up treating static functions as if they
were automatic ones. This is one of the many ways that the
results of synthesis may not exactly match the original design
(though in this case, they probably match the intent)."

This suggests that synthesis tools may not treat a function in a
non-re-entrant way - OK the language definition may be very well defined but
this suggests that some synthesis tools may be a little loose in their
interpretation and hence this is a bit of a grey area - wouldn't you agree?
Am I missing something?

When you say non re-entrant, would you expect a problem if the function
didn't call itself, but was called from more than one place in the module?

Regards and respect,


#12 | 02-17-2005, 03:07 AM | Guest
Re: nets/variables inside functions

DW wrote:
>
> This suggests that synthesis tools may not treat a function in a
> non-re-entrant way - OK the language definition may be very well
> defined but this suggests that some synthesis tools may be a little
> loose in their interpretation and hence this is a bit of a grey area -
> wouldn't you agree?

Verilog was designed as a specialized programming language for
modeling hardware. It can be used to write models that approximate
the behavior of hardware for simulation. Like any programming
language, it is defined in terms of the behavior when executed,
independent of whether that exactly matches the hardware that the
designer intended it to model.

Synthesis tools came along later, and attempt to reverse the
process of modeling hardware. They take a model, and try to
generate hardware that matches the behavior that the designer
appears to have been trying to represent. Like the original
modeling process, this is only an approximation. It is worse
because it is being done by a computer program, which is not as
smart as a human.

It also has the problem that Verilog programs can represent
behavior that doesn't match any real hardware. Synthesis tools
have to restrict what they will accept as input, and may have
to make rather loose approximations. Whether this is a problem
depends on whether their output matches the hardware behavior
that the designer had in mind when they wrote the model. The
fact that synthesis tools are inherently going to be loose in
their interpretation of the language does not mean that the
language definition itself is loose. The language is not
defined in terms of the output of synthesis tools.

So when asking questions, it is important to distinguish whether
you are asking about the definition of the language, or the
output that a typical synthesis tool might produce for it.

And while I am being precise, I should correct the statement
I made that synthesis tools treat functions as if they were
automatic. They don't really. They treat both static and
automatic functions in the same way, which is neither truly
static nor automatic.

To truly treat them as automatic, they would have to create
hardware that was allocated and deallocated. That would let them handle
recursive functions whose depth of recursion was not known until
data was provided to the hardware, so the amount of logic would
change as the data inputs changed.

What they actually do is to instantiate a copy of the function
for each function call. This instantiates a separate static
copy of the variables for each call (or the hardware representing
that variable, which is probably a net). So the different calls
are not sharing a single static variable, but are not using
automatic variables either. They are essentially using multiple
static variables instead. This may allow some limited recursion,
as long as the depth of recursion can be determined during
synthesis (e.g. it is determined by an argument that is a constant
at compile time). The synthesis tool can inline the function
and expand any recursive calls, as long as the expansion ends up
terminating.

> When you say non re-entrant, would you expect a problem if the
> function didn't call itself, but was called from more than one place
> in the module?

As noted above, this should not be a problem for synthesis,
because it will produce multiple independent copies of all of
the logic for the function.

It is also not a problem in simulation. The only way you could
get a problem is if two different calls to the same function
were active at the same time (and here we are not talking about
simulation time, but actually simultaneously in the simulator).
This does not happen without recursion, because of the way that
simulators work. One function call will always return before
another one can start.

Conceptually, many things in the simulated design are happening
at the same time. For example, two different continuous assignments
may have been triggered by the same input change, and both might
call the same function. But in actuality, a simulator performs
evaluations sequentially. It executes one process until the
process suspends itself to wait for something to happen (such
as the next input change for a continuous assignment) Since
a function call cannot contain an event control, delay control,
or wait, a simulator will never suspend execution inside of it.
That means it will always return before any other process could
execute and call the function.

The same is not true of tasks. They can stop and wait for
something, which means that two different processes could be active
in a task at the same time. Then they will be sharing the same
static variables and interfering with each other, and probably
misbehave rather badly. However, it is rare to want to call
a delaying task from more than one process in the same module at
the same time. It is hard to come up with a realistic example
where this would be useful. Most suggested uses fall apart under
closer scrutiny.
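
A contrived (untested) sketch of the kind of interference that can happen:

reg [7:0] r1, r2;

task add_after_delay;
  input  [7:0] a, b;
  output [7:0] sum;
  begin
    #10;               // while the first call waits here, a second call can
    sum = a + b;       // overwrite the shared static copies of 'a' and 'b'
  end
endtask

initial add_after_delay(8'd1,  8'd2,  r1);   // intends r1 = 3
initial add_after_delay(8'd10, 8'd20, r2);   // intends r2 = 30

With typical scheduling, both calls end up computing 10 + 20, so r1 gets 30
instead of 3.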

The addition of automatic tasks and functions in Verilog-2001
seems to have been driven largely by a misunderstanding of this
task issue. Many users hearing about the issue formed the false
impression that tasks in different instances of a module could
interfere with each other. If this were the case, it would have
been a serious problem. Overreaction to this non-existent problem
provided pressure to fix what was actually a much more limited
problem.

#13 | 02-17-2005, 09:27 AM | mk
Re: nets/variables inside functions

On 16 Feb 2005 19:07:11 -0800, [email protected] wrote:

>DW wrote:
>>

>It is also not a problem in simulation. The only way you could
>get a problem is if two different calls to the same function
>were active at the same time (and here we are not talking about
>simulation time, but actually simultaneously in the simulator).
>This does not happen without recursion, because of the way that
>simulators work. One function call will always return before
>another one can start.
>

Does this mean that there is a requirement implicit in 1364 that
simulators not be multi-threaded? If not, what happens if a new
simulator is?

#14 | 02-17-2005, 09:29 AM | DW
Re: nets/variables inside functions


<[email protected]> wrote in message
news:[email protected] oups.com...
> DW wrote:
> >
> > This suggests that synthesis tools may not treat a function in a
> > non-re-entrant way - OK the language definition may be very well

> defined but
> > this suggests that some synthesis tools may be a little loose in

> their
> > interpretation and hence this is a bit of a grey area - wouldn't you

> agree?
>
> Verilog was designed as a specialized programming language for
> modeling hardware. It can be used to write models that approximate
> the behavior of hardware for simulation. Like any programming
> language, it is defined in terms of the behavior when executed,
> independent of whether that exactly matches the hardware that the
> designer intended it to model.
>
> Synthesis tools came along later, and attempt to reverse the
> process of modeling hardware. They take a model, and try to
> generate hardware that matches the behavior that the designer
> appears to have been trying to represent. Like the original
> modeling process, this is only an approximation. It is worse
> because it is being done by a computer program, which is not as
> smart as a human.
>
> It also has the problem that Verilog programs can represent
> behavior that doesn't match any real hardware. Synthesis tools
> have to restrict what they will accept as input, and may have
> to make rather loose approximations. Whether this is a problem
> depends on whether their output matches the hardware behavior
> that the designer had in mind when they wrote the model. The
> fact that synthesis tools are inherently going to be loose in
> their interpretation of the language does not mean that the
> language definition itself is loose. The language is not
> defined in terms of the output of synthesis tools.
>
> So when asking questions, it is important to distinguish whether
> you are asking about the definition of the language, or the
> output that a typical synthesis tool might produce for it.
>
> And while I am being precise, I should correct the statement
> I made that synthesis tools treat functions as if they were
> automatic. They don't really. They treat both static and
> automatic functions in the same way, which is neither truly
> static nor automatic.
>
> To truly treat them as automatic, they would have to create
> hardware that was allocated and deallocated. They could handle
> recursive functions whose depth of recursion was not known until
> data was provided to the hardware, so the amount of logic would
> change as the data inputs changed.
>
> What they actually do is to instantiate a copy of the function
> for each function call. This instantiates a separate static
> copy of the variables for each call (or the hardware representing
> that variable, which is probably a net). So the different calls
> are not sharing a single static variable, but are not using
> automatic variables either. They are essentially using multiple
> static variables instead. This may allow some limited recursion,
> as long as the depth of recursion can be determined during
> synthesis (e.g. it is determined by an argument that is a constant
> at compile time). The synthesis tool can inline the function
> and expand any recursive calls, as long as the expansion ends up
> terminating.
>
> > When you say non re-entrant, would you expect a problem if the

> function
> > didn't call itself, but was called from more than one place in the

> module?
>
> As noted above, this should not be a problem for synthesis,
> because it will produce multiple independent copies of all of
> the logic for the function.
>
> It is also not a problem in simulation. The only way you could
> get a problem is if two different calls to the same function
> were active at the same time (and here we are not talking about
> simulation time, but actually simultaneously in the simulator).
> This does not happen without recursion, because of the way that
> simulators work. One function call will always return before
> another one can start.
>
> Conceptually, many things in the simulated design are happening
> at the same time. For example, two different continuous assignments
> may have been triggered by the same input change, and both might
> call the same function. But in actuality, a simulator performs
> evaluations sequentially. It executes one process until the
> process suspends itself to wait for something to happen (such
> as the next input change for a continuous assignment) Since
> a function call cannot contain an event control, delay control,
> or wait, a simulator will never suspend execution inside of it.
> That means it will always return before any other process could
> execute and call the function.
>
> The same is not true of tasks. They can stop and wait for
> something, which means that two different processes could be active
> in a task at the same time. Then they will be sharing the same
> static variables and interfering with each other, and probably
> misbehave rather badly. However, it is rare to want to call
> a delaying task from more than one process in the same module at
> the same time. It is hard to come up with a realistic example
> where this would be useful. Most suggested uses fall apart under
> closer scrutiny.
>
> The addition of automatic tasks and functions in Verilog-2001
> seems to have been driven largely by a misunderstanding of this
> task issue. Many users hearing about the issue formed the false
> impression that tasks in different instances of a module could
> interfere with each other. If this were the case, it would have
> been a serious problem. Overreaction to this non-existent problem
> provided pressure to fix what was actually a much more limited
> problem.
>

This has been a great help, thank you.


#15 | 02-17-2005, 06:36 PM | Stephen Williams
Re: nets/variables inside functions

mk wrote:
> On 16 Feb 2005 19:07:11 -0800, [email protected] wrote:
>
>>It is also not a problem in simulation. The only way you could
>>get a problem is if two different calls to the same function
>>were active at the same time (and here we are not talking about
>>simulation time, but actually simultaneously in the simulator).
>>This does not happen without recursion, because of the way that
>>simulators work. One function call will always return before
>>another one can start.
>
> Does this mean that there is a requirement implicit in 1364 that
> simulators not be multi-threaded? If not, what happens if a new
> simulator is?


Yes.

A new, threaded simulator will need to understand that all the
resources read or written by a thread are implicitly locked by
that thread. This locking is a pretty fatal requirement for a
hypothetical threaded runtime engine.

--
Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
#16 | 02-17-2005, 08:27 PM | Guest
Re: nets/variables inside functions


mk wrote:
>
> Does this mean that there is a requirement implicit in 1364 that
> simulators not be multi-threaded?


No, there is not. In theory, a simulator could be multi-threaded.
The 1364 standard specifically allows a process to be suspended at
any time, nondeterministically. This allows certain optimizations
where one process is inlined into another, but was also intended
to allow for parallel simulation.

In practice, this would not be successful. There are many common
coding styles that would not be guaranteed to work correctly under
these circumstances. Calls to a shared function are only one of
the situations that would have problems.

Furthermore, there is no good reason to implement a simulator that
way. If run on a single processor, unnecessary context switching
would be a complete waste of time. The idea is usually proposed
by someone who sees the potential for parallelism with multiple
processors. But in practice, the parallelism in event-driven
simulation is too fine-grained for that to be successful. The
overhead for synchronization exceeds the gains. There may be some
academic papers claiming success, but a closer look will reveal
flaws. They may have skipped the synchronization necessary to
guarantee correct results. Or they may be using some home-grown
simulator that is orders of magnitude slower than a state-of-the-art
simulator, allowing the synchronization overhead to be hidden.

> If not, what happens if a new simulator is?


It will not be used because its speed is not competitive, it
produces unexpected results, and those results differ from run
to run, making the causes difficult to debug.
