Dynamic variables in Postman are not coming through properly - api

I am trying to delete users in bulk with Postman. To do this, I have to append each user id to the URL, like this:
https://example.com/test/Users/{{id}}
The id values come from a CSV file. Please see the screenshot below for more detail:
Postman, however, reads the value in a different way. I suspect that, because of the size of the value, the last 3 digits are being dropped.
Please help me with this.
Thanks
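A likely cause: ids longer than about 16 digits exceed JavaScript's safe-integer range (2^53 - 1), so when Postman's runner parses the CSV column as a number, the last digits get rounded away. A minimal pre-request sketch for checking and working around this, assuming the id column can be quoted in the CSV (e.g. "1234567890123456789") so it arrives as a string:

    // Log what Postman actually parsed from the CSV row; if a 19-digit id
    // prints with its last digits changed to zeros, it was rounded by the
    // JavaScript Number type.
    console.log(typeof pm.iterationData.get("id"), pm.iterationData.get("id"));

    // Force the value substituted into {{id}} to be a string so no digits
    // are lost downstream (this only helps if the CSV delivered the full
    // value in the first place, hence quoting the column).
    pm.variables.set("id", String(pm.iterationData.get("id")));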

Related

Apigee Integration: How to use the listEntitiesPageSize parameter in conjunction with the listEntitiesPageToken parameter to navigate through the pages

Good day everyone,
Using the integrations of Google's Apigee service, we are trying to retrieve all the rows in a BigQuery table that have a certain value in a field.
This operation is quite easy to do, but problems arise when the result has more than 200 rows.
The problem is that, when using the integration to connect to BigQuery, I do not get back any listEntitiesPageToken value, nor any listEntitiesNextPageToken value,
so I can't figure out how to navigate the result pages.
Has anyone had the same problem? What do you suggest?
The tutorial "https://cloud.google.com/apigee/docs/api-platform/integration/connectors-task#configure-the-connectors-task" says: "For example, if you are expecting 1000 records in your result set, you can set the listEntitiesPageSize to 100. So when the Connectors task runs for the first time, it returns the first 100 records, the next 100 records in the second run and so on."
And there is a tip: "Use the listEntitiesPageSize parameter in conjunction with the listEntitiesPageToken parameter to navigate through the pages."
I used the tutorial to understand how to use the loop task, and I understood that I should create a "sub-integration" which must be called by a "main integration" for each element present in a list/array.
But what can I do, since these tokens are empty?
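For reference, the pattern the tip describes is ordinary page-token pagination: the listEntitiesNextPageToken returned by one run is passed as the listEntitiesPageToken of the next run, until no token comes back. A minimal TypeScript sketch of that loop (listEntities here is a stand-in for invoking the Connectors task, not a real API):

    // Shape of one page of results, per the tutorial's description.
    interface Page {
      rows: unknown[];
      listEntitiesNextPageToken?: string; // empty/absent on the last page
    }

    type ListEntities = (params: {
      listEntitiesPageSize: number;
      listEntitiesPageToken?: string;
    }) => Promise<Page>;

    // Keep requesting pages until no next-page token is returned.
    async function fetchAllRows(listEntities: ListEntities, pageSize: number) {
      const all: unknown[] = [];
      let token: string | undefined;
      do {
        const page = await listEntities({
          listEntitiesPageSize: pageSize,
          listEntitiesPageToken: token,
        });
        all.push(...page.rows);
        token = page.listEntitiesNextPageToken;
      } while (token);
      return all;
    }

If the connector never returns a next-page token, there is nothing to drive this loop, which matches the problem described above.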

How to store and serve coupons with Google tools and javascript

I'll get a list of coupons by mail. They need to be stored somewhere somehow (BigQuery?) where I can look one up and send it to the user. Each user should only be able to get 1 unique code that was not used beforehand.
I need the ability to get a code and record that it was used, so the next request gets the next code...
I know it is a completely vague question, but I'm not sure how to implement this. Does anyone have any ideas?
Thanks in advance
There can be multiple solutions for the same requirement; one of them is given below:
Step 1. Get the coupons into a file (CSV, JSON, etc.) as per your preference/requirement.
Step 2. Load the source file into GCS (storage).
Step 3. Write a Dataflow job which reads the data from GCS (the file) and loads it into a BigQuery table (tentative name: New_data).
Step 4. Create a Dataflow job to read the data from the BigQuery table New_data, compare it with History_data to identify the new coupons, and write them to a file on GCS or to a BigQuery table.
Step 5. Schedule the entire process with an orchestrator/Cloud Scheduler/cron job.
Step 6. Once you have the data, you can send it to consumers through any communication channel; a sketch of the hand-out-one-code step follows below.
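For the serving side, one way to hand out exactly one unused code per request is a read-then-mark flow against the coupon table. A minimal sketch using the Node.js BigQuery client, where the project, dataset, table, and column names are assumptions, and which is not safe under concurrent requests without extra locking:

    import { BigQuery } from '@google-cloud/bigquery';

    const bq = new BigQuery();
    // Assumed table: columns `code` (STRING) and `used` (BOOL).
    const TABLE = '`my_project.coupons.codes`';

    // Return one unused coupon code and mark it used, or null if none are left.
    async function claimCoupon(): Promise<string | null> {
      const [rows] = await bq.query({
        query: `SELECT code FROM ${TABLE} WHERE used = FALSE LIMIT 1`,
      });
      if (rows.length === 0) return null; // all coupons handed out

      const code: string = rows[0].code;
      await bq.query({
        query: `UPDATE ${TABLE} SET used = TRUE WHERE code = @code`,
        params: { code },
      });
      return code; // the next request will get a different code
    }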

In Karate, can I update a value for one of the fields in a Request payload (JSON format) automatically? [duplicate]

This question already has an answer here: How to set a dynamic value for a json array in Karate (1 answer). Closed last year.
In Karate, can I update a value for one of the fields in a Request payload (JSON format) automatically?
I am working on the following scenario -
1. Submit a POST request to create an object. A Request payload (JSON format) is used.
2. Query DB to verify that this object is created. Each object creation generates a unique ID.
3. Submit another POST request to delete the object created in Step 1. A separate request payload (JSON format) is used.
PROBLEM -
How do I automatically update the request payload used to delete this object? The only thing that needs to change in this payload is the ID field, whose value I will grab from the DB (step 2). Everything else in the delete payload stays the same. I need this ID to be updated at run time... Any ideas?
Is there a better way to accomplish this?
Thank you
Yes, this is easy in Karate. I suggest you read the documentation. The "hello world" example itself shows how to use a value coming back from the response.
This example shows how to get values from a database using Java interop: dogs.feature
And here is how you update any JSON, JS-style: https://stackoverflow.com/a/62294932/143475
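For the scenario above, that linked approach boils down to a one-line, JS-style assignment on the payload before sending the delete request. A tiny sketch of the idea (field names are made up; in Karate the same assignment runs inside the feature file):

    // A static delete payload; in Karate this could be read from a JSON file.
    const deletePayload = { objectType: 'widget', id: '' };

    // JS-style update: overwrite just the one field that changes per run,
    // using the ID fetched from the DB in step 2.
    deletePayload.id = 'id-fetched-from-db-in-step-2';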
Instead of asking a broad question like this, I suggest you create a real example, and ask a specific question only if you get stuck.

mass find rows with deleted images in a SQL table

In my SQL database there is a table called "user", and within this table there is a "Photolink" field that stores data like: abc.com/acc.jpg
The problem is that some of the images have already been deleted, so links like abc.com/acc.jpg may no longer show an image.
I could check them one by one, but that takes too much time.
Is it possible to find the rows with deleted images in bulk? Thanks
Write a small test in Java or JS that fetches each URL (for example, reading the image bytes through a BASE64Encoder) to confirm the image still exists.
The one check needed first is that your network is up (e.g. ping the host); otherwise you will wrongly drop good image URLs.
In Java, if an exception is caught while fetching, replace the URL with an empty string in the DB.
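A sketch of that check in TypeScript (Node 18+, where fetch is built in; it assumes the id/Photolink pairs have already been exported from the user table into a list):

    type Row = { id: number; url: string };

    // Issue a HEAD request per URL and collect the ids whose image is gone.
    async function findDeadImages(rows: Row[]): Promise<number[]> {
      const dead: number[] = [];
      for (const row of rows) {
        try {
          const res = await fetch(row.url, { method: 'HEAD' });
          if (!res.ok) dead.push(row.id); // 404, 410, etc. => image deleted
        } catch {
          dead.push(row.id); // DNS or network failure for this URL
        }
      }
      return dead; // update or flag these rows in the database afterwards
    }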

apache solr csv file same values

We have identified Apache Solr as a possible solution to our problem. Please bear with me, I'm new to Apache Solr. We are planning to upload several large CSV files and use Solr's REST-like feature to get the results back in XML/JSON.
The problem I am thinking of is this: say you have two files, currency.csv and country.csv, and both have a 'GBP' entry in them. If you upload both files into Solr and query for the value 'GBP', which file's entries will be returned?
What I would ideally like is a query that only returns 'GBP' entries that were uploaded from currency.csv and not from country.csv.
Hope someone can help or point me in the right direction, as we may have files with similar data and we need to be sure to retrieve the right values from the right CSV file.
Thanks in advance.
GM
UPDATE
Is it better to have multiple cores, i.e. one core per file?
You can add an additional field, data_type, which indicates the type of each record, e.g. country or currency.
You can then use that field to filter the results by type, or display it to show which type a record belongs to.
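For example, after adding a data_type column to each CSV before upload (currency for currency.csv, country for country.csv), a filter query restricts results to one type. A minimal TypeScript sketch, where the core name mycore and the field names are assumptions:

    // Query Solr for GBP, but only among records loaded from currency.csv.
    async function findCurrency(): Promise<unknown[]> {
      const params = new URLSearchParams({
        q: 'GBP',
        fq: 'data_type:currency', // filter query: restrict to one record type
        wt: 'json',
      });
      const res = await fetch(`http://localhost:8983/solr/mycore/select?${params}`);
      const body = await res.json();
      return body.response.docs;
    }

On the multiple-cores question: one core per file also works, but a single shared core with a type field keeps cross-file queries possible while still letting you filter down to one file's records.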