I need to transfer information from a file. An example of a heading is label=1234. I was hoping I could use a Perl script to change "label" to "id". Is this possible?
Cy.js supports JSON data. Convert or write your data to JSON, and you can load it in as described in the docs: http://cytoscape.github.io/cytoscape.js/
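If it helps, here is a minimal Python sketch of that conversion: it reads lines like label=1234 and writes Cytoscape.js-style element JSON keyed by "id". The file names and the exact element structure are assumptions, so adjust them to your data.

import json

elements = []
with open("nodes.txt") as fh:  # hypothetical input file with lines like label=1234
    for line in fh:
        line = line.strip()
        if line.startswith("label="):
            # rename "label" to "id" and wrap it the way Cytoscape.js element data expects
            elements.append({"data": {"id": line.split("=", 1)[1]}})

with open("elements.json", "w") as out:
    json.dump(elements, out, indent=2)  # load this array via cy.add() or the elements init option

A one-line Perl substitution such as s/^label=/id=/ would also handle the renaming on its own, but emitting JSON directly saves a step when loading the data into Cy.js.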
I have written a simple query and joined it with JSON reference data. I can see correct results when testing the query in the "Test results" tab. However, no output is generated when the job is started.
I have confirmed that the output blob is created when the query does not join with the reference data.
Any help is appreciated. The sample reference JSON follows:
[
{
"DeviceId":"DEV-021",
"Brand":"brand01",
"Model":"model01"
}
]
Use a flat JSON structure instead of an array. That should give you the output.
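For example, the sample reference file above rewritten as a single flat object rather than an array:
{
  "DeviceId": "DEV-021",
  "Brand": "brand01",
  "Model": "model01"
}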
Check the path you specified for the reference data; it may be incorrect, or you may not have specified the file name. Does it contain something like {date}/{time}/filename.json?
If you forget to specify the file name, it will not work either.
Also, when you test the job you usually select the file manually, which is why your query works there.
I'm working with ThemeFuse and I found this data format, but it doesn't look like anything I've seen before.
a:66:{s:24:"autotrader_slider_images";a:0:{}s:26:"autotrader_thumbnail_image";s:0:"";s:19:"seek_property_price";s:0:"";s:23:"seek_property_vat_price";s:0:"";s:21:"seek_property_mileage";s:0:"";s:25:"seek_property_engine_size";s:0:"";s:30:"seek_property_engine_power_bhp";s:0:"";s:29:"seek_property_engine_power_kw";s:0:"";s:23:"seek_property_reduction";s:1:"0";s:25:"seek_property_consumption";s:1:"0";s:20:"seek_property_origin";s:0:"";s:22:"seek_property_emission";s:13:"super emitivo";s:23:"autotrader_vehicle_type";s:3:"SUV";s:20:"autotrader_fuel_type";s:6:"Diesel";s:23:"autotrader_gearbox_type";s:9:"Automatic";s:17:"autotrader_status";s:6:"Intact";s:16:"autotrader_color";s:5:"White";s:18:"seek_property_year";s:0:"";s:26:"autotrader_enable_comments";s:5:"false";s:29:"autotrader_enable_breadcrumbs";s:4:"true";s:25:"autotrader_header_element";s:4:"none";s:23:"autotrader_header_image";s:0:"";s:23:"autotrader_header_title";s:0:"";s:24:"autotrader_select_slider";s:2:"-1";s:19:"autotrader_page_map";s:0:"";s:19:"autotrader_map_text";s:11:"We are here";s:19:"autotrader_map_zoom";s:2:"13";s:25:"autotrader_search_element";s:4:"none";s:22:"autotrader_content_top";s:0:"";s:26:"autotrader_content_bottom1";s:0:"";s:25:"autotrader_footer_element";s:4:"none";s:31:"autotrader_select_slider_footer";s:2:"-1";s:25:"autotrader_content_bottom";s:0:"";s:26:"autotrader_content_bottom2";s:0:"";s:23:"autotrader_top_ad_space";s:5:"false";s:23:"autotrader_top_ad_image";s:0:"";s:21:"autotrader_top_ad_url";s:0:"";s:25:"autotrader_top_ad_adsense";s:0:"";s:30:"autotrader_bfcontent_ads_space";s:5:"false";s:25:"autotrader_bfcontent_type";s:5:"image";s:27:"autotrader_bfcontent_number";s:3:"one";s:31:"autotrader_bfcontent_ads_image1";s:0:"";s:29:"autotrader_bfcontent_ads_url1";s:0:"";s:33:"autotrader_bfcontent_ads_adsense1";s:0:"";s:31:"autotrader_bfcontent_ads_image2";s:0:"";s:29:"autotrader_bfcontent_ads_url2";s:0:"";s:33:"autotrader_bfcontent_ads_adsense2";s:0:"";s:31:"autotrader_bfcontent_ads_image3";s:0:"";s:29:"autotrader_bfcontent_ads_url3";s:0:"";s:33:"autotrader_bfcontent_ads_adsense3";s:0:"";s:31:"autotrader_bfcontent_ads_image4";s:0:"";s:29:"autotrader_bfcontent_ads_url4";s:0:"";s:33:"autotrader_bfcontent_ads_adsense4";s:0:"";s:31:"autotrader_bfcontent_ads_image5";s:0:"";s:29:"autotrader_bfcontent_ads_url5";s:0:"";s:33:"autotrader_bfcontent_ads_adsense5";s:0:"";s:31:"autotrader_bfcontent_ads_image6";s:0:"";s:29:"autotrader_bfcontent_ads_url6";s:0:"";s:33:"autotrader_bfcontent_ads_adsense6";s:0:"";s:31:"autotrader_bfcontent_ads_image7";s:0:"";s:29:"autotrader_bfcontent_ads_url7";s:0:"";s:33:"autotrader_bfcontent_ads_adsense7";s:0:"";s:21:"autotrader_hook_space";s:5:"false";s:21:"autotrader_hook_image";s:0:"";s:19:"autotrader_hook_url";s:0:"";s:23:"autotrader_hook_adsense";s:0:"";}
What should I use to parse and unparse this format?
This is a serialized PHP array.
All you need to do is unserialize it:
http://php.net/manual/en/function.unserialize.php
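For example, a minimal PHP sketch; the file name meta.txt is hypothetical and stands for wherever the a:66:{...} string is actually stored (file, database column, post meta, etc.):

<?php
$raw = file_get_contents('meta.txt');       // hypothetical source of the a:66:{...} string

$data = unserialize($raw);                  // parse: gives a plain PHP associative array
echo $data['autotrader_fuel_type'];         // "Diesel" in the sample above

$data['autotrader_fuel_type'] = 'Petrol';   // edit whatever you need
$raw = serialize($data);                    // unparse: back to the same a:66:{...} format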
I am using a JSON file which acts as a test case document for my API testing. The JSON contains the test case ID, test case description, header, and request body details, which should drive the automation.
Currently I am looping a feature over this JSON file to set different header and body validations. However, it would be helpful if I could set the scenario name from the JSON file while it iterates.
Something like
serverpost.feature
Feature: re-usable feature to publish data

Scenario: TC_NAME # TC_NAME is available in the JSON data passed to this feature; however, currently it is NOT picked up from the JSON file
Given url 'http://myappurl.com:8080/mytestapp/Servers/Data/uploadServer/'
And path TC_ID # TC_ID is taken from the JSON
And request { some: '#(BODY)' } # the request body details are taken from the JSON
Please suggest how I can do this.
In my honest opinion, you are asking for a very unnecessary feature. Please refer to the demo examples and look for this in the documentation.
Specifically, look at this one: dynamic-params.feature. There are multiple ways to create and use a data table. Instead of trying to maintain two files, think of Karate as being both your data table AND the test execution. There is no need to complicate things further.
If you really, really want to re-use some JSON lying around, that is up to you, but you won't be able to update the scenario name, sorry. What I suggest is to just use the print statement to dump the name to the log; it will appear in the HTML report (refer to the docs). Note that when calling a feature in a loop using a JSON array, the call argument is ALREADY included in the report, so you may not need to do anything.
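A rough sketch of that suggestion (the file names testcases.json and caller.feature, and the field names TC_NAME / TC_ID / BODY, are assumptions based on your JSON):

caller.feature
Feature: drive serverpost.feature from a JSON file

Scenario: run all test cases from testcases.json
* def tests = read('testcases.json')
* call read('serverpost.feature') tests

and inside serverpost.feature you can add something like:

* print 'running test case:', TC_NAME

Each element of the JSON array is passed to serverpost.feature in turn, and both the printed name and the call arguments show up in the HTML report.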
Just an observation: your questions seem to be very basic. Do you mind reading the docs and the examples a bit more thoroughly? Thanks.
I have a simple conversion of JSON to XML using MuleSoft. In the "Transform Message" component, I provided a JSON schema as the input and an XML schema as the output. When I run the app, the conversion happens if the file matches both schemas, but it generates an empty XML file if it doesn't match.
I want the following behavior:
1) If the file matches the schema, the converted output file should be sent to the Converted folder and the original file should move to the Success folder.
2) If the file doesn't match the schema, the original file should move to the Failure folder instead of being converted.
I hope I explained it comprehensively, as I am new to MuleSoft. Here is a sample diagram which may clarify my requirement; please suggest a better one if I have designed the process badly.
First, you need to create a flowVar that will hold your original payload.
When you're doing your evaluation, if it's XML then use a simple XPath expression like //elementName[not(node())], which matches elements that have no child nodes (i.e. an empty result).
Lastly, on success, use a scatter-gather for multi-threaded writes: pull your original payload from the flowVar and write it to the Success folder, and write your transformed payload to the Converted folder.
I need to store the data presented in the graphs on the Google Ngram website. For example, I want to store the occurrences of "it's" as a percentage from 1800-2008, as presented in the following link: https://books.google.com/ngrams/graph?content=it%27s&year_start=1800&year_end=2008&corpus=0&smoothing=3&share=&direct_url=t1%3B%2Cit%27s%3B%2Cc0.
The data I want is the data you're able to scroll over on the graph. How can I extract this for about 140 different terms (e.g. "it's", "they're", "she's", etc.)?
econpy wrote a nice little module in Python that you can use through a command-line interface.
For your "it's" example, you would need to type this command in a terminal / Windows console:
python getngrams.py it's -startYear=1800 -endYear=2008 -corpus=eng_2009 -smoothing=3
This will automatically save the query result in a CSV file named after your query parameters.
econpy's package, from #HugoMailhot's answer, no longer works (as of 2021) and seems unmaintained.
Here's an updated version, with some improvements for easier integration into Python code:
https://gitlab.com/cpbl/google-ngrams
You can call this from the command line (as with econpy's version) to create a CSV file, e.g.
getngrams.py it's -startYear=1800 -endYear=2008 -corpus=eng_2009 -smoothing=3
or call it from Python to get (and plot) the data directly, e.g.:
from getngrams import ngrams
df = ngrams('bells and whistles -startYear=1900 -endYear=2018 -smoothing=2')
df.plot()
The xkcd functionality is still there too.
(Issues / bug-fix pull requests / etc. are welcome there.)
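To cover a list of around 140 terms, as asked in the original question, you could loop over them and save one CSV per term. A sketch, assuming ngrams() returns a pandas DataFrame as in the example above:

from getngrams import ngrams

terms = ["it's", "they're", "she's"]  # extend to the full list of ~140 terms

for term in terms:
    df = ngrams(term + " -startYear=1800 -endYear=2008 -smoothing=3")
    safe_name = term.replace("'", "")        # "it's" -> "its", keeps the file name simple
    df.to_csv(safe_name + "_1800_2008.csv")  # one CSV of yearly values per term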