In my FitNesse tests I want to enter dates through datepicker elements. Sometimes it works, but most of the time a date different from the one that was entered appears. Here is an example:
| ensure | do | type | on | id=field_id | with | |
| ensure | do | type | on | id=field_id | with | 05.05.1997 |
| check | is | verifyValue | on | id=field_id | [28.05.1997] expected [05.05.1997] |
(To make sure that the field isn't already filled, I pass an empty string first.)
Usually the day part is different from what was entered. Do you know the reason for this behavior? How can I solve it?
Thanks in advance!
This is related to how you wrote your fixture, not to FitNesse itself. The problem is that the fixture returns a different value, which also implies that the previous line didn't work: | ensure | do | type | on | id=field_id | with | 05.05.1997 |
I have a data table in Druid which has missing rows, and I want to fill them in by generating the missing timestamps and carrying forward the preceding row's value.
This is the table in Druid:
| __time                   | distance |
|--------------------------|----------|
| 2022-05-05T08:41:00.000Z | 1337     |
| 2022-05-05T08:42:00.000Z | 1350     |
| 2022-05-05T08:44:00.000Z | 1360     |
| 2022-05-05T08:47:00.000Z | 1377     |
| 2022-05-05T08:48:00.000Z | 1400     |
And I want to add the missing minutes, either by enforcing this on the Druid storage side or by querying for it directly in Druid, without going through another module.
The final result that I want would look like this:
| __time                   | distance |
|--------------------------|----------|
| 2022-05-05T08:41:00.000Z | 1337     |
| 2022-05-05T08:42:00.000Z | 1350     |
| 2022-05-05T08:43:00.000Z | 1350     |
| 2022-05-05T08:44:00.000Z | 1360     |
| 2022-05-05T08:45:00.000Z | 1360     |
| 2022-05-05T08:46:00.000Z | 1360     |
| 2022-05-05T08:47:00.000Z | 1377     |
| 2022-05-05T08:48:00.000Z | 1400     |
Thank you in advance!
A Druid timeseries query will produce a densely populated timeline at a given time granularity, like the one you want for every minute. But its current functionality either skips empty time buckets or assigns them a value of zero.
A gap-filling function like LVCF (last value carried forward), which is what you describe, seems like a great enhancement. You can join the Apache Druid community and create an issue that describes this request; that's a great way to start a conversation about requirements and how it might be achieved.
And/or you could add the functionality yourself and submit a PR. We're always looking for more members in the Apache Druid community.
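In the meantime, if post-processing the exported data outside Druid turns out to be acceptable, here is a minimal sketch of LVCF in PostgreSQL-flavoured SQL. The table name druid_export is hypothetical, and generate_series plus window functions are PostgreSQL features, not Druid ones:

-- build the dense minute timeline, then carry the last
-- observed distance forward across the gaps
with minutes as (
  select generate_series(
           timestamptz '2022-05-05 08:41:00+00',
           timestamptz '2022-05-05 08:48:00+00',
           interval '1 minute') as __time
),
joined as (
  select m.__time,
         d.distance,
         -- running count of non-null values: every minute in a gap
         -- falls into the group of the last real observation
         count(d.distance) over (order by m.__time) as grp
  from minutes m
  left join druid_export d on d.__time = m.__time
)
select __time,
       max(distance) over (partition by grp) as distance
from joined
order by __time;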
The title may not be that helpful, but what I am trying to do is this.
For simplicity's sake I have two tables, one called LOGS and another called LOG CONTROLS.
In LOGS I have a log event column, which is automatically populated by imported information. In LOG CONTROLS I have a manually entered list of log events (to match the ones coming in), and that table assigns them ID numbers and other details about the event.
What I need to do is have a column in the LOGS table which looks at the log event, matches it to the ID from the LOG CONTROLS table and assigns that ID in the LOGS table.
I have seen a few methods of changing information in columns based on information in other tables, but all of these seem to be one-way checks, i.e. "if ID = X, change to VALUE FROM OTHER TABLE", whereas what I need is "if VALUE = X in the other table, change the ID field to Y from the other table".
Below is a mock-up of the tables.
+----+-----------+----------+------------+
| ID | Date_Time | Event    | Control ID |
+----+-----------+----------+------------+
| 1  | 0/0/0     | Shutdown |            |
| 2  | 0/0/0     | Start up |            |
| 3  | 0/0/0     | Error    |            |
| 4  | 0/0/0     | Info     |            |
| 5  | 0/0/0     | Shutdown |            |
| 6  | 0/0/0     | Error    |            |
+----+-----------+----------+------------+
+------------+----------+--------+-------+
| Control ID | Event    | Export | Flag  |
+------------+----------+--------+-------+
| 1          | Shutdown | TRUE   | TRUE  |
| 2          | Start up | TRUE   | FALSE |
| 3          | Error    | TRUE   | TRUE  |
| 4          | Info     | TRUE   | FALSE |
+------------+----------+--------+-------+
So I need the Control ID in the first table to be filled with the matching Control ID from the second table, depending on what the event was.
I hope this makes sense.
Any help or advice would be greatly appreciated.
From your description, it seems that a simple UPDATE statement is all you need:
-- for each row in logs, find the log_controls row with
-- the same event and copy its ID across
update logs
set control_id = c.control_id
from log_controls as c
where c.event = logs.event;
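The above is the PostgreSQL form of UPDATE ... FROM. If you're on SQL Server instead, a sketch of the equivalent (using the same hypothetical table and column names) would be:

update l
set l.control_id = c.control_id
from logs as l
inner join log_controls as c
    on c.event = l.event;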
Having used Eclipse for more years than I care to admit, I'm currently trying to get adjusted to IntelliJ (2016.2, Community), but I'm having a hard time with the totally different UI metaphors/concepts.
I'd like to be able to have a window layout like this:
+--------+------------------------+
|        |                        |
|  (1)   |                        |
|        |                        |
|+-------+     Source editor      |
|        |                        |
|  (2)   |                        |
|        |------------------------+
|        |                        |
|        |  Console output etc.   |
+--------+------------------------+
Luckily, this question helped me with splitting panes to get (1) and (2).
However, I have not been able to find out how to rearrange tool windows so that I can get (for example) a Terminal tool window that sits directly underneath the source editor pane, instead of a layout that looks like this:
+--------+------------------------+
|        |                        |
|  (1)   |                        |
|        |                        |
|+-------+     Source editor      |
|        |                        |
|  (2)   |                        |
+--------+------------------------+
|                                 |
|      Console output etc.        |
+---------------------------------+
Open "Settings" > "Appearance & Behavior" > "Appearance" and enable "Widescreen tool window layout".
Tested in DataGrip and PyCharm. It should be exactly the same in IntelliJ IDEA and WebStorm as far as I know - I have used them all and the UI is the same everywhere (and I like that).
You could also just float the panels next to your IntelliJ main window, like in this screenshot.
I have reworked our API's logging system to use Azure Table Storage instead of SQL storage, for cost and performance reasons. I am now migrating our legacy logs to the new system. I am building a SQL query per table that will map the old fields to the new ones, with the intention of exporting to CSV and then importing into Azure.
So far, so good. However, one artifact of the previous system is that it logged 3 times per request - call begin, call response and call end - and the new one logs the call as just one log (again, for cost and performance reasons).
Some fields are common to all three related logs, e.g. the Session, which uniquely identifies the call.
For some fields I only want the first log's value, e.g. Date, which may be a few seconds later in the second and third logs.
Some fields are shared for the three different purposes, e.g. Parameters gives the Input Model for Call Begin, Output Model for Call Response, and HTTP response (e.g. OK) for Call End.
Some fields are unused for two of the purposes, e.g. ExecutionTime is -1 for Call Begin and Call Response, and a value in ms for Call End.
How can I "roll up" the sets of 3 rows into one row per set? I have tried using DISTINCT and GROUP BY, but the fact that some of the information collides is making it very difficult. I apologize that my SQL isn't really good enough to really explain what I'm asking for - so perhaps an example will make it clearer:
Example of what I have:
SQL:
SELECT * FROM [dbo].[Log]
Results:
+---------+---------------------+-------+------------+---------------+---------------+-----------------+
| Session | Date                | Level | Context    | Message       | ExecutionTime | Parameters      |
+---------+---------------------+-------+------------+---------------+---------------+-----------------+
| 84248B7 | 2014-07-20 19:16:15 | INFO  | GET v1/abc | Call Begin    | -1            | {"Input":"xx"}  |
| 84248B7 | 2014-07-20 19:16:15 | INFO  | GET v1/abc | Call Response | -1            | {"Output":"yy"} |
| 84248B7 | 2014-07-20 19:16:15 | INFO  | GET v1/abc | Call End      | 123           | OK              |
| F76BCBB | 2014-07-20 19:16:17 | ERROR | GET v1/def | Call Begin    | -1            | {"Input":"ww"}  |
| F76BCBB | 2014-07-20 19:16:18 | ERROR | GET v1/def | Call Response | -1            | {"Output":"vv"} |
| F76BCBB | 2014-07-20 19:16:18 | ERROR | GET v1/def | Call End      | 456           | BadRequest      |
+---------+---------------------+-------+------------+---------------+---------------+-----------------+
Example of what I want:
SQL:
[Need to write this query]
Results:
+---------------------+-------+------------+----------+---------------+----------------+-----------------+--------------+
| Date                | Level | Context    | Message  | ExecutionTime | InputModel     | OutputModel     | HttpResponse |
+---------------------+-------+------------+----------+---------------+----------------+-----------------+--------------+
| 2014-07-20 19:16:15 | INFO  | GET v1/abc | Api Call | 123           | {"Input":"xx"} | {"Output":"yy"} | OK           |
| 2014-07-20 19:16:17 | ERROR | GET v1/def | Api Call | 456           | {"Input":"ww"} | {"Output":"vv"} | BadRequest   |
+---------------------+-------+------------+----------+---------------+----------------+-----------------+--------------+
select L1.Session, L1.Date, L1.Level, L1.Context, 'Api Call' as Message,
       L3.ExecutionTime,
       L1.Parameters as InputModel,
       L2.Parameters as OutputModel,
       L3.Parameters as HttpResponse
from Log L1
-- self-join so each session's Begin, Response and End rows line up
inner join Log L2 on L1.Session = L2.Session
inner join Log L3 on L1.Session = L3.Session
where L1.Message = 'Call Begin'
  and L2.Message = 'Call Response'
  and L3.Message = 'Call End'
This should work for your sample data.
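If you'd rather make a single pass over the table than three self-joins, a sketch using conditional aggregation should give the same result, assuming (as in your sample) exactly one Begin/Response/End row per Session:

-- pick each output column from the relevant row of the group
select min([Date]) as [Date],   -- earliest timestamp, i.e. Call Begin's
       min([Level]) as [Level],
       min(Context) as Context,
       'Api Call' as Message,
       max(case when Message = 'Call End' then ExecutionTime end) as ExecutionTime,
       max(case when Message = 'Call Begin' then Parameters end) as InputModel,
       max(case when Message = 'Call Response' then Parameters end) as OutputModel,
       max(case when Message = 'Call End' then Parameters end) as HttpResponse
from [dbo].[Log]
group by Session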
I have created a couple of scenarios in FitNesse using Xebium/Selenium. They work nicely, but I'd like to create a decision table from one of my scenarios.
So I tried the following:
| Verifiera ärendet | selenium driver fixture |
| tabellRadsNr | längd | bredd | grisar | höns | getter | får | kod | felbeskrivning |
| 19 | 50 | 20 | 201 | 0 | 0 | 0 | R110 | Nekad |
And I end up with:
Could not invoke constructor for VerifieraÄrendet[1]
The instance decisionTable_25. does not exist
The scenario "Verifiera ärendet" works when I run it by itself so I guess that I am missing something....
The problem was that I mixed up parameter names and their value names. So the structure basically is
| scenarioname parametername2 |
| parametervaluename1 | parametervaluename2 |
| row1value1 | row1value2 |
| row2value1 | row2value2 |