How can I read a file from a specific byte position in Robot Framework?
Let's say I have a long-running process that writes a long log file. I want to get the current file size, then execute something that affects the behaviour of the process, and then wait until some message appears in the log file. I want to read only the portion of the file starting from the previously recorded file size.
I am new to Robot Framework. I think this is a very common scenario, but I haven't found how to do it.
There are no built-in keywords to do this, but writing one in Python is pretty simple.
For example, create a file named "readmore.py" with the following:
from robot.libraries.BuiltIn import BuiltIn

class readmore(object):
    ROBOT_LIBRARY_SCOPE = "TEST SUITE"

    def __init__(self):
        self.fp = {}

    def read_more(self, path):
        # if we don't already know about this file,
        # set the file pointer to zero
        if path not in self.fp:
            BuiltIn().log("setting fp to zero", "DEBUG")
            self.fp[path] = 0

        # open the file, move the pointer to the stored
        # position, read the file, and reset the pointer
        with open(path) as f:
            BuiltIn().log("seeking to %s" % self.fp[path], "DEBUG")
            f.seek(self.fp[path])
            data = f.read()
            self.fp[path] = f.tell()
            BuiltIn().log("resetting fp to %s" % self.fp[path], "DEBUG")

        return data
You can then use it like this:
*** Settings ***
| Library | readmore.py
| Library | OperatingSystem
*** test cases ***
| Example of "tail-like" reading of a file
| | # read the current contents of the file
| | ${original}= | read more | /tmp/junk.txt
| | # do something to add more data to the file
| | Append to file | /tmp/junk.txt | this is new content\n
| | # read the new data
| | ${new}= | Read more | /tmp/junk.txt
| | Should be equal | ${new.strip()} | this is new content
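The question also asks about waiting until some message appears in the new data. One option is to wrap Read More in the built-in Wait Until Keyword Succeeds; another is to add a small polling method to the same library. The sketch below is hypothetical (the method name wait_until_log_contains and its defaults are my own, not part of the original answer); it would be added as another method inside the readmore class shown above:

    # (add "import time" at the top of readmore.py)
    def wait_until_log_contains(self, path, expected, timeout=30, poll_interval=1):
        # poll read_more() until the expected text shows up in the newly
        # appended data, or fail after "timeout" seconds
        end_time = time.time() + float(timeout)
        while True:
            data = self.read_more(path)
            if expected in data:
                return data
            if time.time() > end_time:
                raise AssertionError(
                    "'%s' did not appear in '%s' within %s seconds"
                    % (expected, path, timeout))
            time.sleep(float(poll_interval))

From a test it could then be called as | Wait until log contains | /tmp/junk.txt | this is new content | timeout=10 |.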
I'm trying to create a new BigQuery destination on Airbyte with the Octavia CLI.
When launching:
octavia apply
I receive:
Error: {"message":"The provided configuration does not fulfill the specification. Errors: json schema validation failed when comparing the data to the json schema. \nErrors:
$.loading_method.method: must be a constant value Standard
Here is my conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
  dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
  project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
  loading_method:
    ## -------- Pick one valid structure among the examples below: --------
    # method: "Standard" # REQUIRED | string
    ## -------- Another valid structure for loading_method: --------
    method: "GCS Staging" # REQUIRED | string
    credential:
      ## -------- Pick one valid structure among the examples below: --------
      credential_type: "HMAC_KEY" # REQUIRED | string
      hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
      hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
      gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
      gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
    # keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
  credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
  dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
  transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don't count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
  big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15
It was an indentation issue on my side:
gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
These should be one level higher, i.e. directly under loading_method rather than under credential (this wasn't clear in the commented template, hence the error, and other people may well make the same mistake).
Here is full final conf:
# Configuration for airbyte/destination-bigquery
# Documentation about this connector can be found at https://docs.airbyte.com/integrations/destinations/bigquery
resource_name: "BigQueryFromOctavia"
definition_type: destination
definition_id: 22f6c74f-5699-40ff-833c-4a879ea40133
definition_image: airbyte/destination-bigquery
definition_version: 1.2.12
# EDIT THE CONFIGURATION BELOW!
configuration:
  dataset_id: "airbyte_octavia_thibaut" # REQUIRED | string | The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
  project_id: "data-airbyte-poc" # REQUIRED | string | The GCP project ID for the project containing the target BigQuery dataset. Read more here.
  loading_method:
    ## -------- Pick one valid structure among the examples below: --------
    # method: "Standard" # REQUIRED | string
    ## -------- Another valid structure for loading_method: --------
    method: "GCS Staging" # REQUIRED | string
    credential:
      ## -------- Pick one valid structure among the examples below: --------
      credential_type: "HMAC_KEY" # REQUIRED | string
      hmac_key_secret: ${AIRBYTE_BQ1_HMAC_KEY_SECRET} # SECRET (please store in environment variables) | REQUIRED | string | The corresponding secret for the access ID. It is a 40-character base-64 encoded string. | Example: 1234567890abcdefghij1234567890ABCDEFGHIJ
      hmac_key_access_id: ${AIRBYTE_BQ1_HMAC_KEY_ACCESS_ID} # SECRET (please store in environment variables) | REQUIRED | string | HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. | Example: 1234567890abcdefghij1234
    gcs_bucket_name: "airbyte-octavia-thibaut-gcs" # REQUIRED | string | The name of the GCS bucket. Read more here. | Example: airbyte_sync
    gcs_bucket_path: "gcs" # REQUIRED | string | Directory under the GCS bucket where data will be written. | Example: data_sync/test
    # keep_files_in_gcs-bucket: "Delete all tmp files from GCS" # OPTIONAL | string | This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
  credentials_json: ${AIRBYTE_BQ1_CREDENTIALS_JSON} # SECRET (please store in environment variables) | OPTIONAL | string | The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
  dataset_location: "europe-west1" # REQUIRED | string | The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
  transformation_priority: "interactive" # OPTIONAL | string | Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don't count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
  big_query_client_buffer_size_mb: 15 # OPTIONAL | integer | Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here. | Example: 15
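If you want to double-check where the keys ended up after editing the YAML by hand, a quick sanity check with PyYAML works. This is only a sketch; it assumes PyYAML is installed and that the configuration is saved as destination-bigquery.yaml (adjust the path to your actual file):

import yaml

with open("destination-bigquery.yaml") as f:
    cfg = yaml.safe_load(f)

loading_method = cfg["configuration"]["loading_method"]

# the GCS keys must be direct children of loading_method, not of credential
assert "gcs_bucket_name" in loading_method
assert "gcs_bucket_path" in loading_method
assert "gcs_bucket_name" not in loading_method.get("credential", {})
print("loading_method keys:", sorted(loading_method))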
I get request body data from an Excel file. I have already converted the Excel file to CSV format.
I have more or less found a solution, but it is not working 100%: the JSON body does not fetch the data correctly and shows forward slashes in the data imported from the CSV when running the collection.
Request body:
{{jsonBody}}
I set the global variable jsonBody, then run the collection and select the CSV file as the data file; as per the screenshot, the request body shows up with forward slashes.
After running the collection I'm getting an incorrect version of the body, with forward slashes. The screenshot below shows the correct version of the CSV data; I need to remove the forward slashes from the CSV data.
I had a similar issue with Postman and realized my problem was more of a syntax issue.
Let's say your CSV file has the following columns:
userId | mid | platform | type | ...etc
row1 94J4J | 209444894 | NORTH | PT | ...
row2 324JE | 934421903 | SOUTH | MB | ...
row3 966RT | 158739394 | EAST | PT | ...
This is how you want your JSON request body to look:
{
"userId" : "{{userId}}",
"mids":[{
"mid":"{{mid}}",
"platform":"{{platform}}"
}],
"type":["{{type}}"],
.. etc
}
Make sure your column names match the variables {{variableName}}.
The data coming from the CSV is already in a stringified format, so you don't need to do anything in the pre-request script.
example:
let the CSV be:
| jsonBody |
| {"name":"user"}|
Now in the Postman request body just use:
{{jsonBody}}
since {{column_name}} is treated as a data variable, which in your case is {{jsonBody}}.
(The remaining steps were shown as screenshots: save the data file as a .csv file, use {{jsonBody}} in the request body, and if you want to add the JSON body as the value of another field, reference it there in the same way.)
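If you are generating the CSV yourself, letting a CSV library do the quoting avoids most escaping surprises. Here is a small Python sketch (the file name data.csv and the sample rows are placeholders, not from the original question):

import csv
import json

# one complete JSON object per row, stored in a single "jsonBody" column
rows = [
    {"name": "user"},
    {"name": "admin"},
]

with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["jsonBody"])            # column name matches {{jsonBody}}
    for row in rows:
        writer.writerow([json.dumps(row)])   # the csv module handles the quoting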
My homework is giving me a hard time with pyspark. I have this view of my "df2" after a groupBy:
df2.groupBy('years').count().show()
+-----+-----+
|years|count|
+-----+-----+
| 2003|11904|
| 2006| 3476|
| 1997| 3979|
| 2004|13362|
| 1996| 3180|
| 1998| 4969|
| 1995| 1995|
| 2001|11532|
| 2005|11389|
| 2000| 7462|
| 1999| 6593|
| 2002|11799|
+-----+-----+
Every attempt to save this (and then load it with pandas) gives back the original source data text file I read with PySpark, with its original columns and attributes; the only difference is that it is now a .csv, but that's not the point.
What can I do to overcome this?
For what it's worth, I do not use any SparkContext functions at the beginning of the code, just a plain read and groupBy.
df2.groupBy('years').count().write.csv("sample.csv")
or
df3=df2.groupBy('years').count()
df3.write.csv("sample.csv")
Both of them will create sample.csv in your working directory (Spark actually writes it as a folder containing the CSV part files).
You can assign the results to a new dataframe results and then write that to a CSV file. Note that there are two ways to output the CSV. If you stay with Spark you need to use .coalesce(1) to make sure only one file is written. The other way is to convert with .toPandas() and use the to_csv() function of the pandas DataFrame.
results = df2.groupBy('years').count()
# writes a csv file "part-xxx.csv" inside a folder "results"
results.coalesce(1).write.csv("results", header=True)
# or if you want a csv file, not a csv file inside a folder (default behaviour of spark)
results.toPandas().to_csv("results.csv")
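To load the result back with pandas afterwards (as mentioned in the question), something along these lines should work; it assumes the folder and file names used above:

import glob
import pandas as pd

# Spark writes the CSV as part files inside the "results" folder
spark_csv = glob.glob("results/part-*.csv")[0]
counts_from_spark = pd.read_csv(spark_csv)

# the pandas route wrote a single plain file
counts_from_pandas = pd.read_csv("results.csv")

print(counts_from_spark.head())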
I created a simple .exe file that just assigns a value of 3 to an integer called "x" and then prints out that value (source code shown in the screenshot).
I opened the .exe file with a hex editor (named HxD) and used the disassembly function of Visual Studio 2017 to show me the opcodes of my main function. After a bit of searching I found out that the main function is stored in the file at offset 0xC10.
Here is the disassembly (screenshot), and here is the file in the hex editor (screenshot).
I know that some values of the .exe file in the hex editor differ from what the Visual Studio debugger shows, but I know that main starts there because I changed the value of x in the hex editor, and when I then started the .exe it printed out another value instead of 3. My question is: where in the .exe file is the value that says "at this point of the file the opcodes of the main function start"?
For example, in a .bmp file the 4 bytes at positions 0x0A, 0x0B, 0x0C and 0x0D tell you the offset of the first byte of the first pixel.
On Windows, the entry point of an executable (.exe) is set in the PE header of the file.
Wikipedia illustrates the structure of this header with an SVG diagram.
Relative to the beginning of the file, the PE header starts at the position indicated by the value at
DWORD 0x3C Pointer to PE Header
File Header / DOS Header
+--------------------+--------------------+
0000 | 0x5A4D | | |
0008 | | |
0010 | | |
0018 | | |
0020 | | |
0028 | | |
0030 | | |
0038 | | PE Header addr |
0040 | | |
.... | .................. | .................. |
And the entry point is designated at the position (relative to the address above)
DWORD 0x28 EntryPoint
PE Header
+--------------------+--------------------+
0000 | Signature | Machine | NumOfSect|
0008 | TimeDateStamp | PtrToSymTable |
0010 | NumOfSymTable |SizOfOHdr| Chars |
0018 | Magic | MJV| MNV | SizeOfCode |
0020 | SizeOfInitData | SizeOfUnInitData |
0028 | EntryPoint (RVA) | BaseOfCode (RVA) |
0030 | BaseOfData (RVA) | ImageBase |
0038 | SectionAlignment | FileAlignment |
0040 | ... | ... |
from the beginning of the PE header. This address is an RVA (Relative Virtual Address), which means that it is relative to the image base address that the file is loaded to by the loader:
Relative virtual addresses (RVAs) are not to be confused with standard virtual addresses. A relative virtual address is the virtual address of an object from the file once it is loaded into memory, minus the base address of the file image.
This RVA is where execution begins. For a typical C/C++ program it points at the runtime's startup code, which in turn calls your main function, so main itself sits a little further in.
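To see these values for a concrete file, you can read the two offsets directly; the sketch below uses Python's struct module and assumes a well-formed PE file (the path program.exe is a placeholder). It also walks the section headers to turn the entry-point RVA into a file offset, which is the kind of position you were looking at in the hex editor:

import struct

with open("program.exe", "rb") as f:
    data = f.read()

# DOS header: e_lfanew (offset of the PE header) lives at 0x3C
pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
assert data[pe_offset:pe_offset + 4] == b"PE\x00\x00"

num_sections = struct.unpack_from("<H", data, pe_offset + 6)[0]
size_of_optional = struct.unpack_from("<H", data, pe_offset + 20)[0]

# AddressOfEntryPoint sits 0x28 bytes after the start of the PE header
entry_rva = struct.unpack_from("<I", data, pe_offset + 0x28)[0]

# section headers follow the optional header; each one is 40 bytes
sections_start = pe_offset + 24 + size_of_optional
for i in range(num_sections):
    base = sections_start + i * 40
    name = data[base:base + 8].rstrip(b"\x00").decode()
    virt_size, virt_addr, raw_size, raw_ptr = struct.unpack_from("<IIII", data, base + 8)
    if virt_addr <= entry_rva < virt_addr + max(virt_size, raw_size):
        file_offset = entry_rva - virt_addr + raw_ptr
        print(f"entry point RVA 0x{entry_rva:X} -> file offset 0x{file_offset:X} in {name}")
        break

Keep in mind that the file offset you get this way is the entry point, not main itself; to get from there to main you still have to follow the startup code in a disassembler.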
A .exe is a portable executable.
Layout (diagram): Structure of a Portable Executable 32 bit
A PE file consists of a number of headers and sections that tell the dynamic linker how to map the file into memory. An executable image consists of several different regions, each of which require different memory protection; so the start of each section must be aligned to a page boundary.[4] For instance, typically the .text section (which holds program code) is mapped as execute/readonly, and ...
So really it is a question of where the .text section is in the file. Exactly where depends on the headers and locations of the other sections.
I am trying to create a random number generator:
Command | Tgt | Val |
store | tom | tester
store | dominic | envr
execute script | Math.floor(Math.random()*11111); | number
type | id=XXX | ${tester}.${dominic}.${number}
Expected result:
tom.dominic.0 <-- random number
Instead I get this:
tom.dominic.${number}
I looked through all the resources, and it seems the recent Selenium update/version has changed the approach; I cannot find a solution.
I realize this question is 2 years old, but it's a fairly common one, so I'll answer it and see if there are other answers that address it.
If you want to assign the result of a script run by the "execute script" in Selenium IDE to a Selenium variable, you have to return the value from JavaScript. So instead of
execute script | Math.floor(Math.random()*11111); | number
you need
execute script | return Math.floor(Math.random()*11111); | number
Also, in your final assignment that puts the 3 pieces together, you needed ${envr} instead of ${dominic}.
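Putting both fixes together (returning the value and using ${envr}), the corrected steps would look like this:
Command | Tgt | Val |
store | tom | tester
store | dominic | envr
execute script | return Math.floor(Math.random()*11111); | number
type | id=XXX | ${tester}.${envr}.${number}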