Why does the order of the lines of code not matter in a Hardware Description Language? - hdl

Per the nand2tetris course material, "Since the language is designed to describe connections rather than processes, the order of the PARTS statements is insignificant: as long as the chip-parts are connected correctly, the chip will function as stated."
How can the sequence of the code below not matter, though, given that each carry bit is calculated from the carry bit of the previous bit's calculation?
/**
* Adds two 16-bit values.
* The most significant carry bit is ignored.
*/
CHIP Add16 {
    IN a[16], b[16];
    OUT out[16];

    PARTS:
    HalfAdder(a=a[0], b=b[0], carry=carry0, sum=out[0]);
    FullAdder(a=a[1], b=b[1], c=carry0, carry=carry1, sum=out[1]);
    FullAdder(a=a[2], b=b[2], c=carry1, carry=carry2, sum=out[2]);
    FullAdder(a=a[3], b=b[3], c=carry2, carry=carry3, sum=out[3]);
    FullAdder(a=a[4], b=b[4], c=carry3, carry=carry4, sum=out[4]);
    FullAdder(a=a[5], b=b[5], c=carry4, carry=carry5, sum=out[5]);
    FullAdder(a=a[6], b=b[6], c=carry5, carry=carry6, sum=out[6]);
    FullAdder(a=a[7], b=b[7], c=carry6, carry=carry7, sum=out[7]);
    FullAdder(a=a[8], b=b[8], c=carry7, carry=carry8, sum=out[8]);
    FullAdder(a=a[9], b=b[9], c=carry8, carry=carry9, sum=out[9]);
    FullAdder(a=a[10], b=b[10], c=carry9, carry=carry10, sum=out[10]);
    FullAdder(a=a[11], b=b[11], c=carry10, carry=carry11, sum=out[11]);
    FullAdder(a=a[12], b=b[12], c=carry11, carry=carry12, sum=out[12]);
    FullAdder(a=a[13], b=b[13], c=carry12, carry=carry13, sum=out[13]);
    FullAdder(a=a[14], b=b[14], c=carry13, carry=carry14, sum=out[14]);
    FullAdder(a=a[15], b=b[15], c=carry14, carry=carry15, sum=out[15]);
}

Because all the calculations are computed concurrently. In an HDL, each instance of a block/entity/module creates an instance of all the behaviors inside it. The PARTS section describes a wiring diagram rather than a sequence of steps: the simulator keeps propagating signals through the connections until the outputs settle, so the textual order of the statements never changes the result. You didn't specify which HDL you're using, but almost all of them behave the same way.
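To see this concretely, here is a minimal Python sketch (illustrative only, not nand2tetris tooling; half_adder, full_adder and add16 are made-up names) that treats the statements as a wiring graph and re-evaluates every connection until no signal changes:

# Minimal sketch: a combinational netlist evaluated to a fixpoint.
# The iteration order over the gates does not affect the settled result.
def half_adder(a, b):
    return a ^ b, a & b              # sum, carry

def full_adder(a, b, c):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, c1 | c2               # sum, carry

def add16(a_bits, b_bits):
    carry = [None] * 16              # signals start "unknown"
    out = [None] * 16
    changed = True
    while changed:                   # propagate until every signal settles
        changed = False
        for i in range(16):          # try reversed(range(16)): same result
            c_in = 0 if i == 0 else carry[i - 1]
            if c_in is None:
                continue             # carry input not settled yet; next pass
            s, c = full_adder(a_bits[i], b_bits[i], c_in)
            if (out[i], carry[i]) != (s, c):
                out[i], carry[i] = s, c
                changed = True
    return out

a = [1, 1] + [0] * 14                # 3, least significant bit first
b = [1, 0, 1] + [0] * 13             # 5
print(add16(a, b))                   # bit 3 set: 3 + 5 = 8

Evaluating the gates in reverse order merely takes more passes to settle; the settled values are identical, which is why the HDL can ignore the textual order of the PARTS statements.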

Related

BAPI_GOODSMVT_CREATE with multiple material numbers and same PP order?

As far as I know, when you call BAPI_GOODSMVT_CREATE concurrently (in a loop, or just by coincidence), using the same material number gives you an error about a locked object (Material XXXX is locked by USER YYYY).
But, as far as I know, calling BAPI_GOODSMVT_CREATE concurrently with different material numbers and the same production order raises no error.
Issue
Recently I got error M3/897 (Plant Data of Material XXXX is locked by user XXXX) from BAPI_GOODSMVT_CREATE when posting goods issue (GI) for a production order via parallel processing, with different material numbers going to the same production order.
Question
So I'm asking about the locking constraints of BAPI_GOODSMVT_CREATE.
What I know so far:
A. You can't post GI for a production order (movement type 261) concurrently when you're using the same material number for different production orders.
B. (I'm not sure about this one) You can't post GI for a production order (movement type 261) concurrently when you're using different material numbers for the same production order.
Are both right, or just A? Any help from an experienced ABAPer or MM consultant would be appreciated!
To post GI in a loop you need to commit after each run and unlock the objects explicitly, otherwise you will get the PP lock.
Try it like this:
LOOP AT lt_orders ASSIGNING <fs>.
  ...
  CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
    EXPORTING
      goodsmvt_header  = ls_header
      goodsmvt_code    = ls_code
    IMPORTING
      goodsmvt_headret = ls_headret
      materialdocument = ls_retmtd
    TABLES
      goodsmvt_item    = lt_item
      return           = lt_return.

  IF line_exists( lt_return[ type = 'E' ] ).
    CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  ELSE.
    COMMIT WORK AND WAIT.
    CALL FUNCTION 'DEQUEUE_ALL'.
  ENDIF.
ENDLOOP.
Always use BAPI_TRANSACTION_COMMIT with the WAIT parameter, or COMMIT WORK AND WAIT, after each BAPI call.
Also, there can be tricky issues with GR and implicit GI movements; see SAP Note 369518 about this.
You can check for the presence of an existing lock at runtime using the function module ENQUE_READ2:
DATA: raw_enq LIKE locksedx_enq_tab,
      subrc   LIKE sy-subrc,
      number  TYPE i.

CLEAR: raw_enq[], subrc, number.

CALL FUNCTION 'ENQUE_READ2'
  IMPORTING
    subrc  = subrc
    number = number
  TABLES
    enq    = raw_enq.
But if you have to prevent a failure of the goods movement in general, you have instead to implement some reprocessing logic that stores the errors.
The steps would be: catch the errors --> store the BAPI information or the header document number --> retry later, as in the sketch below.
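For example, a minimal sketch of that idea (ty_retry and lt_retry are illustrative names, not standard objects):

" Illustrative retry buffer for failed goods movements.
TYPES: BEGIN OF ty_retry,
         items TYPE STANDARD TABLE OF bapi2017_gm_item_create
                    WITH DEFAULT KEY,    " the items that failed to post
         msgtx TYPE bapi_msg,            " first error message from RETURN
       END OF ty_retry.
DATA lt_retry TYPE STANDARD TABLE OF ty_retry WITH DEFAULT KEY.

" In the error branch above, after BAPI_TRANSACTION_ROLLBACK:
READ TABLE lt_return INTO DATA(ls_error) WITH KEY type = 'E'.
IF sy-subrc = 0.
  APPEND VALUE #( items = lt_item
                  msgtx = ls_error-message ) TO lt_retry.
ENDIF.
" Later: LOOP AT lt_retry and call BAPI_GOODSMVT_CREATE again
" with the stored items.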

How to set up a pr.job_type.Murnaghan job that for each volume reads the structure output of another Murnaghan job?

For the energy-volume calculations of the non-cubic structures, one has to relax the structures at all volumes.
Suppose I start with a pr.job_type.Murnaghan() job whose ref_job_relax is a cell-shape and internal-coordinates relaxation. Let's call the Murnaghan job R1, with 7 volumes, i.e. R1-V1, ..., R1-V7.
After one or more rounds of relaxation (R1...RN), one has to perform a static calculation to acquire a precise energy. Let's call the final static round S.
For the final round, I want to create a pr.job_type.Murnaghan() job that reads all the required setup configurations from ref_job_static, except the input structures.
Then for each volume S-Vn it should read the corresponding output structure of RN-Vn, e.g. R1-V1 --> S-V1, ..., R1-V7 --> S-V7 if there were only one round of relaxation.
I am looking for an implementation like the below:
murn_relax = pr.create_job(pr.job_type.Murnaghan, 'R1')
murn_relax.ref_job = ref_job_relax
murn_relax.run()
murn_static = pr.create_job(pr.job_type.Murnaghan, 'S', continuation=True)
murn_static.ref_job = ref_job_static
murn_static.structures_from(prev_job='R1')
murn_static.run()
The Murnaghan object has two relevant functions:
get_structure() https://github.com/pyiron/pyiron_atomistics/blob/master/pyiron_atomistics/atomistics/master/murnaghan.py#L829
list_structures() https://github.com/pyiron/pyiron_atomistics/blob/master/pyiron_atomistics/atomistics/master/murnaghan.py#L727
The first returns the predicted equilibrium structure and the second returns the structures at the different volumes.
In addition you can get the IDs of the children and iterate over those:
structure_lst = [
    pr.load(job_id).get_structure()
    for job_id in murn_relax.child_ids
]
to get a list of converged structures.
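Putting these together, one possible pattern for the static round (just a sketch, not a built-in pyiron feature; the S_V job names and the ref_job_static template come from the question):

# Sketch: one static calculation per relaxed volume, reusing the
# relaxed output structures.
for i, job_id in enumerate(murn_relax.child_ids):
    relaxed = pr.load(job_id).get_structure()   # final relaxed structure
    static = ref_job_static.copy_to(new_job_name="S_V%d" % i)
    static.structure = relaxed                  # swap in the input structure
    static.run()

The energies of the resulting S_V* jobs can then be collected and fitted to an equation of state, which is what the Murnaghan master otherwise does for its own children.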

Apache Flink Error Handling and Conditional Processing

I am new to Flink and have gone through sites/examples/blogs to get started. I am struggling with the correct use of operators. Basically I have two questions.
Question 1: Does Flink support declarative exception handling? I need to handle parse/validate/... errors.
Can I use org.apache.flink.runtime.operators.sort.ExceptionHandler or similar to handle errors, or is a Rich/FlatMap function my best option?
If a Rich/FlatMap function is the only option, is there a way to get a handle to the stream inside it so that sinks could be attached for error processing?
Question 2: Can I conditionally attach different sinks?
Based on certain fields in the keyed split streams I need to select different sinks; do I split the stream again, or use a Rich/FlatMap to handle that?
I am using Flink 1.3.2. Here is the relevant portion of my job
.....
.....
DataStream<String> eventTextStream = env.addSource(messageSource);

KeyedStream<EventPojo, Tuple> eventPojoStream = eventTextStream
        // parse, transform or enrich
        .flatMap(new MyParseTransformEnrichFunction())
        .assignTimestampsAndWatermarks(new EventAscendingTimestampExtractor())
        .keyBy("eventId");

// split stream based on eventType as different reduce and windowing
// functions need to be applied
SplitStream<EventPojo> splitStream = eventPojoStream
        .split(new EventStreamSplitFunction());

// need to apply reduce function
DataStream<EventPojo> event1TypeStream = splitStream.select("event1Type");
// need to apply reduce function
DataStream<EventPojo> event2TypeStream = splitStream.select("event2Type");
// need to apply time based windowing function
DataStream<EventPojo> event3TypeStream = splitStream.select("event3Type");

....
....
env.execute("Event Processing");
Am I using the correct operators here?
Update 1:
I tried using ProcessFunction as suggested by @alpinegizmo, but that didn't work since it depends on a keyed stream, which I don't have until I parse/validate the input. I get "InvalidProgramException: Field expression must be equal to '*' or '_' for non-composite types.".
It's such a common use case where you first parse/validate input and don't have a keyed stream yet, so how do you solve it?
Thanks for your patience and help.
There's one key building block that you've overlooked: take a look at side outputs.
This mechanism provides a type-safe way to produce any number of additional output streams, which can be a clean way to report errors, among other uses. In Flink 1.3, side outputs can only be used with a ProcessFunction, but Flink 1.4 will add side outputs to ProcessWindowFunction.
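A minimal sketch of that pattern (parse and ErrorSink are placeholders for your own logic; it assumes the Flink 1.3+ side-output API, where process() can also be applied to a non-keyed DataStream):

// Tag for the error side output; the anonymous subclass preserves type info.
final OutputTag<String> parseErrors = new OutputTag<String>("parse-errors") {};

SingleOutputStreamOperator<EventPojo> parsed = eventTextStream
        .process(new ProcessFunction<String, EventPojo>() {
            @Override
            public void processElement(String value, Context ctx,
                                       Collector<EventPojo> out) {
                try {
                    out.collect(parse(value));       // your parse/validate logic
                } catch (Exception e) {
                    ctx.output(parseErrors, value);  // divert the bad record
                }
            }
        });

// The main stream continues to keyBy/split as before;
// the errors get their own sink.
parsed.getSideOutput(parseErrors).addSink(new ErrorSink());

Because the error records leave through the side output before the keyBy, this also sidesteps the keyed-stream problem from Update 1.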

Options for questions in the Watson Conversation API

I need to get the available options for a certain question in the Watson Conversation API.
For example, I have a conversation app and in some cases I need to give the users a list to select an option from.
So I am searching for a way to get the available reply options for a certain question.
I can't answer the NPM part, but you can get a list of the top 10 possible answers by setting alternate_intents to true. For example:
{
  "context": {
    "conversation_id": "cbbea7b5-6971-4437-99e0-a82927607079",
    "system": {
      "dialog_stack": ["root"],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  },
  "alternate_intents": true,
  "input": {
    "text": "Is it hot outside?"
  }
}
This will return at most the top ten answers. If there is a limited number of intents, it will only show those.
Part of your JSON response will have something like this:
"intents":[{
"intent":"temperature",
"confidence":0.9822100598134365
},
{
"intent":"conditions",
"confidence":0.017789940186563623
}
This won't get you the output text from the node, though, so you will need to have your answers stored elsewhere to cross-reference.
Also be aware that just because an intent is in the list doesn't mean it's a valid answer to give the end user. The confidence level needs to be taken into account.
The confidence level also does not work like a normal confidence. You need to determine your upper and lower bounds. I detail this briefly here.
Unlike earlier versions of WEA, the confidence is relative to the number of intents you have. So the quickest way to find the lowest confidence is to send a really ambiguous word.
These are the results I get for determining temperature or conditions.
treehouse = conditions / 0.5940327076534431
goldfish = conditions / 0.5940327076534431
music = conditions / 0.5940327076534431
See a pattern? 🙂 So I will set the low confidence level at 0.6. Next is to determine the higher confidence range. You can do this by mixing intents within the same question text. It may take a few goes to get a reasonable result.
These are results from trying this (C = Conditions, T = Temperature).
hot rain = T/0.7710267712183176, C/0.22897322878168241
windy desert = C/0.8597747113239446, T/0.14022528867605547
ice wind = C/0.5940327076534431, T/0.405967292346557
I purposely left out high-confidence ones. In this case I am going to go with 0.8 as the high confidence level.
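As a sketch of how those bounds might then be used in application code (plain Python over the JSON shown above; the 0.6 / 0.8 values are the calibrated bounds from this example):

LOW, HIGH = 0.6, 0.8  # calibrated lower/upper confidence bounds

def pick_reply_options(response):
    """Decide how to reply from a Conversation API response dict."""
    intents = response.get("intents", [])
    if not intents or intents[0]["confidence"] < LOW:
        return None                        # too ambiguous: re-prompt the user
    if intents[0]["confidence"] >= HIGH:
        return [intents[0]["intent"]]      # confident: single best answer
    # middle band: offer every plausible intent as an option to the user
    return [i["intent"] for i in intents if i["confidence"] >= LOW]

Whatever comes back is still just an intent name; as noted above, the actual output text has to be cross-referenced from wherever the answers are stored.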

Solve "Out of local stack" in this specific constraint programming in prolog

I'm currently trying to create schedules for bus drivers in Prolog. I wish to find a limited number of solutions, but I get the "Out of local stack" error, and I suppose it is because I'm getting too many solutions.
How can I prevent that error given the following code? Any tips on whatever I'm not doing correctly would help immensely too.
count_drivers: counts the number of drivers with D_id as driver_id (I need them to work less than "max_hours").
vehicle: represents the bus and its respective routes.
connected: represents the connection between the relief opportunities (a route consists of a group of relief points and the respective "connections" between them).
workpiece: a segment of work in the same vehicle between two relief points.
spell: a group of workpieces done by the same driver.
spreadover: the whole shift one driver has to do.
Here is the code:
?- use_module(library(clpfd)).
?- use_module(library(lists)).
?- use_module(library(aggregate)).
%workpiece(Bus,[Ro1,Ro2],Weight).
workpiece(1,[1,2],1).
workpiece(1,[2,3],2).
workpiece(1,[3,4],1).
workpiece(1,[4,5],2).
workpiece(1,[5,6],1).
workpiece(2,[7,8],2).
workpiece(2,[8,9],2).
workpiece(2,[9,10],1).
workpiece(2,[10,11],2).
workpiece(2,[11,12],1).
workpiece(3,[13,14],2).
workpiece(3,[14,15],1).
workpiece(3,[15,16],2).
workpiece(3,[16,17],1).
workpiece(3,[17,18],2).
% spell
spell(Vehicle, [[Ro1,Ro2]|Tail]) :-
    Vars = [Ro1,Ro2], Vars in 1..18,
    workpiece(Vehicle, [Ro1,Ro2], _),
    spell(Vehicle, Tail, Ro2),
    labeling([], Vars).
spell(_, [], _).
spell(Vehicle, [[Ro1,Ro2]|Tail], Ro3) :-
    Vars = [Ro3], Vars in 1..18,
    Ro3 #= Ro1,
    workpiece(Vehicle, [Ro1,Ro2], _),
    spell(Vehicle, Tail, Ro2),
    labeling([], Vars).
% spreadover of each driver
spreadover(_, List) :-
    Vars = I, Vars in 1..15,
    length(List, I), I #>= 1.
spreadover(Driver, [Head|Tail]) :-
    Vars = [Vehicle,I], Vars in 1..9,
    Vehicle #>= 1, Vehicle #=< 3,
    spell(Vehicle, Head),
    length(Head, I), I #>= 1,
    spreadover(Driver, Tail),
    labeling([], Vars).
% occupy all the workpieces
% minimizing the shifts
% cover all the routes
% length 15
% driver shifts
drivershifts(_, List) :-
    Vars = I, Vars in 1..15,
    length(List, I), I #= 15.
drivershifts(NumDrivers, [[Driver|List]|Tail]) :-
    Vars = Driver, Vars in 1..NumDrivers,
    Driver #>= 1, Driver #=< NumDrivers,
    spreadover(Driver, List),
    labeling([], Vars).
I thank you all in advance for any time you can spare in helping me.
EDIT: I changed the code around a bit; now I get a load of unassigned variables from a query of
forall(spreadover(1,List),writeln(List)).
or one unassigned variable from
spreadover(1,List).
I restricted the domains wherever I could, but I'm not sure if I'm doing this correctly.
From the queries above I should generate spreadovers (a set of spells) for driver 1.
Not sure if I should post a new question or rewrite this one either, so I decided to rewrite this one.
You have many warnings from singleton variables, and it would be good style to resolve them.
At least, prefix variables you know are unused with an underscore, to avoid the warning.
Now to the loop: you're calling diagram with a free variable, causing an infinite recursion that 'constructs' an infinite list of partially instantiated variables.
I can't understand what the intended meaning of diagram/1 could be. For sure, you're missing the base case: add something like
diagram([]).