I created a straight highway, 6 km long with 3 lanes in each direction. It consists of a number of segments connected to each other, and there are almost 200 vehicles in the network, so I made them move in circles to keep them in the network until the end of the simulation time.
Here is a part of the .rou.xml file:
<routes>
<vType id="normal_car" vClass="passenger" maxSpeed="41.67" speedFactor="0.9" speedDev="0.1" sigma="0.5" />
<route id="route1" edges="e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 e11 e12 e13 e14 e15 e16 e17 e18 e19 e20 e21 e22 e23 e24 e25 e26 e27 e28 e29 e30 e31 e32 e33 e34 e35 e36 e37 e38 e39 e40 e41 e42 e43 e44 e45 e46 e47 e48 e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 e11 e12 e13 e14 e15 e16 e17 e18 e19 e20 e21 e22 e23 e24 "/>
The problem is that at the end of each side of the highway the vehicles have to slow down and cross to the other side one by one, as shown in the screenshot below. Is there a way to make more than one vehicle cross at the same time?
The vehicles slow down to make the movement more realistic; nobody enters a turnaround at full speed. I would rather propose that you create a circular network, which can easily be done with netgenerate, for instance:
netgenerate --spider --spider-omit-center
If you insist on the turnaround, then increase the allowed speed on the turnaround's inner edge using --junctions.limit-turn-speed.
You can also include different vehicle types that vary in their properties; this will make them overtake each other.
I am trying to fill column D and column E.
Column A: varchar(64) - unique for each trip
Column B: smallint
Column C: timestamp without time zone (Excel messed it up in the image below, but you can assume this is a timestamp column)
Column D: numeric - need to find out the time from origin in minutes
Column E: numeric - time to destination in minutes.
Each trip has different intermediate stations, and I am trying to figure out the time elapsed since the origin and the time remaining to the destination:
Cell D2 = C2 - C2 = 0
Cell D3 = C3 - C2
Cell D4 = C4 - C2
Cell E2 = C6 - C2
Cell E3 = C6 - C3
Cell E6 = C6 - C6 = 0
The main issue is that each trip_id has a different number of stations. I can think of using PARTITION BY, but I can't figure out how to implement it.
Another sub-question: I am dealing with a very large table (100 million rows). What is the best way PostgreSQL experts implement data modifications? Do you create a sample table from the original data and test everything on the sample before applying the modifications to the original table, or do you use something like BEGIN TRANSACTION on the original data so that you can roll back in case of an error?
PS: Help with question title appreciated.
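For the sub-question about safely modifying a 100-million-row table, a minimal sketch of both approaches mentioned above (the table name td and the 1% sample size are only assumptions for illustration):

-- Approach 1: rehearse the change on a small sample copy first
create table td_sample as
    select * from td tablesample system (1);  -- roughly 1% of the rows
-- run and verify the UPDATE against td_sample, then repeat it on td

-- Approach 2: run the change on the real table inside a transaction
begin;
-- ... the real UPDATE goes here ...
select * from td limit 20;  -- spot-check a few rows
rollback;                   -- replace with commit once the output looks right

Note that even a rolled-back UPDATE leaves dead row versions behind that VACUUM has to clean up, so rehearsing on a sample copy first is usually the gentler option on a table of this size.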
You don't need to know the number of stops:
with a as (
    select *,
           -- minutes elapsed since the trip's first timestamp
           extract(epoch from c - min(c) over (partition by a)) / 60 as dd,
           -- minutes remaining until the trip's last timestamp
           extract(epoch from max(c) over (partition by a) - c) / 60 as ee
    from td
)
update td
   set d = dd,
       e = ee
  from a
 where a.a = td.a
   and a.b = td.b;
http://sqlfiddle.com/#!17/c9112/1
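A small usage sketch, with a made-up table layout and sample values matching the question's columns:

create table td (a varchar(64), b smallint, c timestamp, d numeric, e numeric);

insert into td (a, b, c) values
    ('trip-1', 1, '2020-01-01 08:00'),
    ('trip-1', 2, '2020-01-01 08:25'),
    ('trip-1', 3, '2020-01-01 09:10');

-- run the update above, then:
select a, b, d, e from td order by a, b;
-- d comes out as 0, 25, 70 and e as 70, 45, 0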
I am facing an issue with my flat file. The BAdI is processing the header row as part of the body of the flat file. As a result, TIMEID, which is required to be a time member belonging to 'Q1', produces an error. If I replace the TIMEID label in the header with 2014.Q1 (which belongs to Q1), then it works fine, but if I use the label "TIMEID" in the header data, it gets evaluated and gives the error "time member TIMEID does not belong to Q1". This also rejects all the subsequent records. This happens regardless of whether HEADER in the transformation file is set to YES (with SKIP=1) or NO.
Due to this, the cl_ujk_query=>query() function is not returning any data.
Following is the flat file (c marks the header row and r marks the records, both of which are valid):
______________________________________________________________________
c1 c2 c3 c4 c5 TIMEID c7 c8 c8 c9
______________________________________________________________________
r11 r12 r13 r14 r15 2014.Q1 r17 r18 r19 r20
r21 r22 r23 r24 r25 2013.Q1 r27 r28 r29 r30
_____________________________________________________________________
Following is the Transformation File:
_________________________________________________________________________
*OPTIONS
FORMAT = DELIMITED
HEADER = YES
DELIMITER = ,
SKIP = 1
SKIPIF =
VALIDATERECORDS=YES
CREDITPOSITIVE=YES
MAXREJECTCOUNT= -1
ROUNDAMOUNT=
STARTROUTINE=ZNAME_TIME
*MAPPING
A=*COL(1)
B=*STR(OC_) + *COL(8)
TIME=*COL(6)
D=*STR(NOBUYER)
E=*STR(CC)
F=*STR(INPUT)
G=*COL(5)
H=*COL(2)
I=*COL(4)
J=*STR(NO_J)
K=*COL(7)
*CONVERSION
________________________________________________________________________
You have to change HEADER to NO in the transformation file.
No book seems to be able to answer this.
Suppose I have two transactions:
T1: Lock A, Lock B, Unlock A
T2: Lock B, Unlock B, Lock A, Unlock A
Q1. How many ways are there to plan these transactions? (Is it just a simple graph and the result is 3! * 4! ?)
Q2. How many of these ways are serializable?
I would really like to know the thinking process: how do you get to the answer?
Q1 is 7.
Proof: First of all, we have to merge the sequence 'Lock A', 'Lock B', 'Unlock A' (I denote the items A1, A2, A3) into the sequence 'Lock B', ..., 'Unlock A' (I denote them B1..B4), that is, to put 3 items into 5 places (the gaps around the B's) with repetition allowed. That is the binomial coefficient C(5+3-1, 3) = 7!/(3!*4!) = 35.
Next, we have to drop the 'bad' solutions (the ones forbidden by the locking conditions): those where A1 stands between B3 and B4 (3 solutions) and those where A2 stands between B1 and B2 (2*4 = 8). We also have to exclude the solutions with B3 between A1 and A3: there are 3*3 = 9 with B3 between A1 and A2, and 6*2 = 12 with B3 between A2 and A3. That gives 35 - 3 - 8 - 9 - 12 = 3. But by the inclusion-exclusion principle we must add back the solutions that violate two rules simultaneously. These can only look like B1 A2 B2 B3 B4, with A1 in either of the two leftmost positions and A3 in either of the two rightmost ones, 4 in total. So the final answer is 35 - 3 - 8 - 9 - 12 + 4 = 7.
Hi everyone, I would like to ask the community for help in finding a way to cache our huge plain table by splitting it into multiple hashes or otherwise.
A sample of the table, as an example of its structure:
A1 B1 C1 D1 E1 X1
A1 B1 C1 D1 E1 X2
A7 B5 C2 D1 E2 X3
A8 B1 C1 D1 E2 X4
A1 B6 C3 D2 E2 X5
A1 B1 C1 D2 E1 X6
This is our denormalized data; we have no way to normalize it.
So currently we must perform a 'group by' to get the required items; for instance, to get all D* values we perform data.GroupBy(A1).GroupBy(B1).GroupBy(C1), and it takes a lot of time.
Temporarily, we have found a workaround for this by creating a composite string key:
A1 -> 'list of lines begin A1'
A1:B1 -> 'list of lines begin A1:B1'
A1:B1:C1 -> 'list of lines begin A1:B1:C1'
...
as a cache of results of grouping operations.
The question is: how can this be stored efficiently?
The estimated number of lines in the denormalized data is around 10M records, and since there are 6 columns, as in my example, there will be about 60M entries in the hash. So I'm looking for an approach to look up values in O(N), if possible.
Thanks.
Most files I am working with only have the following fields:
F00001 - usually 1 (f1) or 9 (f9)
K00001 - usually only 1-3 sub-fields of zoned decimals and EBCDIC
F00002 - sub-fields of EBCDIC, zoned and packed decimals
Occasionally other field names K00002, F00003 and F00004 will appear in cross reference files.
Example Data:
+---------+--------------------------------------------------+--------------------------------------------------------------------------------------------+
| F00001 | K00001 | F00002 |
+---------+--------------------------------------------------+--------------------------------------------------------------------------------------------+
| f1 | f0 f0 f0 f0 f1 f2 f3 f4 f5 f6 d7 c8 | e2 e3 c1 c3 d2 d6 e5 c5 d9 c6 d3 d6 e7 40 12 34 56 7F e2 d2 c5 c5 e3 |
+---------+--------------------------------------------------+--------------------------------------------------------------------------------------------+
Currently using:
SELECT SUBSTR(HEX(F00001), 1, 2)   AS FNAME_1,
       SUBSTR(HEX(K00001), 1, 14)  AS KNAME_1,
       SUBSTR(HEX(K00001), 15, 2)  AS KNAME_2,
       SUBSTR(HEX(K00001), 17, 2)  AS KNAME_3,
       SUBSTR(HEX(F00002), 1, 28)  AS FNAME_2,
       SUBSTR(HEX(F00002), 29, 8)  AS FNAME_3,
       SUBSTR(HEX(F00002), 37, 10) AS FNAME_4
  FROM QS36F.FILE
Is this the best way to unpack EBCDIC values as strings?
You asked for 'the best way'. Manually fiddling the bytes is categorically NOT the best way. #JamesA has a better answer: Externally describe the table and use more traditional SQL to access it. I see in your comments that you have multiple layouts within the same table. This was typical years ago when we converted from punched cards to disk. I feel your pain, having experienced this many times.
If you are using SQL to run queries, I think you have several options, all of which revolve around having a sane DB2 table instead of a jumbled S/36 flat file. Without more details on the business problem, all we can do is offer suggestions.
1) Add a trigger to QS36F.FILE that will break out the intermingled records into separate SQL defined tables. Query those.
2) Write some UDFs that will pack and unpack numbers. If you're querying today, you'll be updating tomorrow and if you think you have some chance of maintaining the raw HEX(this) and HEX(that) for SELECTS, wait until you try to do an UPDATE that way.
3) Write stored procedures that will extract out the bits you need for a given query and put them into SQL tables - maybe even a GLOBAL TEMPORARY TABLE (a rough sketch of this idea follows after this list). Have the SP query those bits and return a result set that can be consumed by other SQL queries. IBM i supports user-defined table functions as well.
4) Have the RPG team write you a conversion program that will read the old file and create a data warehouse that you can query against.
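As a very rough sketch of the GLOBAL TEMPORARY TABLE idea from option 3 (the table name SESSION.FILE_DECODED and the column split are invented; adjust them to the real record layout):

DECLARE GLOBAL TEMPORARY TABLE SESSION.FILE_DECODED
    (FNAME_1 CHAR(2), KNAME_1 CHAR(14), FNAME_2 CHAR(28))
    WITH REPLACE;

INSERT INTO SESSION.FILE_DECODED
SELECT SUBSTR(HEX(F00001), 1, 2),
       SUBSTR(HEX(K00001), 1, 14),
       SUBSTR(HEX(F00002), 1, 28)
  FROM QS36F.FILE;

-- later queries read the decoded copy instead of repeating the HEX/SUBSTR work
SELECT * FROM SESSION.FILE_DECODED;

A stored procedure or user-defined table function, as mentioned above, could wrap the DECLARE and INSERT so that callers only ever see the decoded columns.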
It almost looks as if old S/36 files are being accessed and the system runs under CCSID 65535. That could cause the messy "hex" representation issue as well as at least some of the column name issues. A little more info about the server environment would be useful.