I am working on writing ERC721 smart contracts using OpenZeppelin's ERC721PresetMinterPauserAutoId preset contract.
ERC721PresetMinterPauserAutoId("Simple", "SPL", "http://127.0.0.1:8080/ipfs/QmU2JdyAmcSMy9A44nUCbYCFkKKR65b37HCqZLsQPNJABw/") {}
However, I am not sure what to put for the third constructor parameter (I assume it's the base URI). I tried "ipfs://{CID}", "http://ipfs/{local site}/{CID}", and "ipfs.io/{CID}", but they all failed. How am I supposed to get my JSON metadata read, and is there any way to inspect what the input actually is (in order to debug)? Thanks.
I'm fairly new to working with NiFi. We're trying to validate an XML file, except we need to use a different XSD depending on a value passed in the file. Extracting and routing on the name wasn't an issue, and we stored the desired file path in an attribute (xsdFile).
However, when trying to use that attribute in the ValidateXml processor, it changes the path and gives an error. When I copy the path from the attributes into the Schema File property, it works, so the path itself isn't wrong.
Attribute passed in flowfile:
xsdFile:
C:\Users\MYNAME\Documents\NiFi\FLOW_RESOURCES\input\validatexml\camt.053.001.02_CvW_2.xsd
ValidateXml processor properties:
Schema File: ${xsdFile}
Error:
Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to java.io.FileNotFoundException:
Schema file not found at specified location: C:\Users\MYNAME\DOCUME~1\NiFi\NIFI-1~1.0
Why does this not work? Is there another way to do this, or do we need to route to different XMLValidators?
Check documentation for this processor:
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.9.2/org.apache.nifi.processors.standard.ValidateXml/index.html
Schema File:
The path to the Schema file that is to be used for validation
Supports Expression Language: true
(will be evaluated using variable registry only)
So a flowfile attribute can't be used for this property; the expression is evaluated against the variable registry only. To validate against different schemas, you would need to route to separate ValidateXml processors, one per XSD.
We're trying BigQuery for the first time, with data extracted from mongo in json format. I kept getting this generic parse error upon loading the file. But then I tried a smaller subset of the file, 20 records, and it loaded fine. This tells me it's not the general structure of the file, which I had originally thought was the problem. Is there any way to get more info on the parse error, such as the string of the record that it's trying to parse when it has this error?
I also tried using the max errors field, but that didn't work either.
This was via the website. I also tried it via the Google Cloud SDK command line 'bq load...' and got the same error.
This error is most likely caused by some of the JSON records not complying with the table schema. It is not clear whether you used the schema autodetect feature or supplied a schema for the load, but here is one example where such an error could happen (the field "a" is a string in the first record and a nested object in the second):
{ "a" : "1" }
{ "a" : { "b" : "2" } }
If you only have a few of these and they really are invalid records, you can automatically ignore them by using the max_bad_records option for the load job. More details at: https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-json
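To answer the question about getting the string of the record that fails: you can find it locally by scanning the newline-delimited JSON export and flagging lines where a field's type differs from the types seen in the first record. This is a minimal plain-Ruby debugging sketch (the function name is made up, and this is not part of any BigQuery API):

```ruby
require 'json'

# Scan newline-delimited JSON and report lines whose field types differ
# from the types seen in the first record: a common cause of opaque
# BigQuery load failures when a fixed or auto-detected schema is used.
def find_type_mismatches(lines)
  baseline = nil
  mismatches = []
  lines.each_with_index do |line, i|
    begin
      record = JSON.parse(line)
    rescue JSON::ParserError
      # Lines that are not valid JSON at all are reported too.
      mismatches << { line: i + 1, field: nil, record: line.strip }
      next
    end
    next unless record.is_a?(Hash)
    types = record.transform_values(&:class)
    baseline ||= types # first record establishes the expected types
    types.each do |field, klass|
      if baseline.key?(field) && baseline[field] != klass
        mismatches << { line: i + 1, field: field, record: line.strip }
      end
    end
  end
  mismatches
end
```

Running it over the two example records above flags line 2 and field "a", together with the full record string, which is exactly the information the load error message omits.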
I want to run a Bigquery SQL query using insert method.
I ran the following code:
JobConfigurationQuery = Google::Apis::BigqueryV2::JobConfigurationQuery
bq = Google::Apis::BigqueryV2::BigqueryService.new
scopes = [Google::Apis::BigqueryV2::AUTH_BIGQUERY]
bq.authorization = Google::Auth.get_application_default(scopes)
bq.authorization.fetch_access_token!
query_config = {query: "select colA from [dataset.table]"}
qr = JobConfigurationQuery.new(configuration:{query: query_config})
bq.insert_job(projectId, qr)
and I got an error as below:
Caught error invalid: Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0:
Please let me know how to use the insert_job method.
I'm not sure which client library you're using, but jobs.insert takes a Job resource whose configuration field is a JobConfiguration. You should create one of those and set its query field to the JobConfigurationQuery you've created.
This is necessary because this one API method can insert various kinds of jobs (query, load, copy, extract), so it takes a single configuration type with a subfield that specifies which type of job to insert and its details.
More info from BigQuery's documentation:
jobs.insert documentation
job resource: note the "configuration" field and its "query" subfield
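The nesting the API expects can be sketched with plain hashes; the sketch below models the JSON shape of the Job resource and the "exactly one job-specific configuration object" check from the error message. The class-based equivalent in the comment is an assumption about the Ruby client and should be verified against the gem's docs:

```ruby
# A Job resource has a "configuration" object that must contain exactly
# one job-specific sub-object: query, load, extract, or copy.
JOB_TYPES = %w[query load extract copy].freeze

def job_type_count(job)
  config = job.fetch("configuration", {})
  JOB_TYPES.count { |t| config.key?(t) }
end

# What the question's code effectively sent: the query config did not
# end up inside "configuration", so zero job-specific objects were found.
bad_job = { "configuration" => {} }

# The correct shape: Job -> configuration -> query. With the Ruby client
# this roughly corresponds to (assumed equivalent, unverified):
#   job = Google::Apis::BigqueryV2::Job.new(
#     configuration: Google::Apis::BigqueryV2::JobConfiguration.new(
#       query: Google::Apis::BigqueryV2::JobConfigurationQuery.new(
#         query: "select colA from [dataset.table]")))
#   bq.insert_job(project_id, job)
good_job = {
  "configuration" => {
    "query" => { "query" => "select colA from [dataset.table]" }
  }
}
```

With bad_job the count is 0, which matches the "but there were 0" in the error; good_job yields exactly 1.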
I have a JSON schema:
[{"name":"timestamp","type":"integer"},{"name":"xml_id","type":"string"},{"name":"prod","type":"string"},{"name":"version","type":"string"},{"name":"distcode","type":"string"},{"name":"origcode","type":"string"},{"name":"overcode","type":"string"},{"name":"prevcode","type":"string"},{"name":"ie","type":"string"},{"name":"os","type":"string"},{"name":"payload","type":"string"},{"name":"language","type":"string"},{"name":"userid","type":"string"},{"name":"sysid","type":"string"},{"name":"loc","type":"string"},{"name":"impetus","type":"string"},{"name":"numprompts","type":"record","mode":"repeated","fields":[{"name":"type","type":"string"},{"name":"count","type":"integer"}]},{"name":"rcode","type":"record","mode":"repeated","fields":[{"name":"offer","type":"string"},{"name":"code","type":"integer"}]},{"name":"bw","type":"string"},{"name":"pkg_id","type":"string"},{"name":"cpath","type":"string"},{"name":"rsrc","type":"string"},{"name":"pcode","type":"string"},{"name":"opage","type":"string"},{"name":"action","type":"string"},{"name":"value","type":"string"},{"name":"other","type":"record","mode":"repeated","fields":[{"name":"param","type":"string"},{"name":"value","type":"string"}]}]
(http://jsoneditoronline.org/ for pretty print)
When loading through the browser GUI the schema is accepted as valid. The cli throws the following error:
BigQuery error in load operation: Invalid schema entry: "fields":[{"name":"type"
Is there something wrong with my schema as specified?
If you are passing the schema as json, you should write it to a file and pass the file name as the schema parameter. Passing the schema inline on the command line is only allowed for simple flat schemas.
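For example, you can write the schema out and then reference the file from the CLI. The schema below is an abbreviated version of the one in the question, and the file and table names are illustrative:

```ruby
require 'json'

# Write the schema to a file rather than passing it inline: schemas with
# nested "fields" (record types) are not accepted on the command line.
schema = [
  { "name" => "timestamp",  "type" => "integer" },
  { "name" => "numprompts", "type" => "record", "mode" => "repeated",
    "fields" => [
      { "name" => "type",  "type" => "string" },
      { "name" => "count", "type" => "integer" }
    ] }
]
File.write("schema.json", JSON.pretty_generate(schema))

# Then pass the file name as the schema argument, e.g.:
#   bq load --source_format=NEWLINE_DELIMITED_JSON dataset.table data.json schema.json
```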