openflow: preserve match metadata

I am writing an OpenFlow rule that needs to preserve match metadata and pass it on to the next table. I used the following new metadata assignment:
newmetadata = 0xFFFFFFFFFFFFFFFF
newmetadataMask = 0x0
Will this do the trick?

The answer is YES, it does preserve the metadata: with an all-zero mask, the write-metadata instruction overwrites no bits, so the incoming metadata passes through to the next table unchanged.
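Concretely, the OpenFlow spec defines write-metadata as new = (old & ~mask) | (value & mask). Worked through with the values above:

new = (old & ~0x0) | (0xFFFFFFFFFFFFFFFF & 0x0)
    = (old & 0xFFFFFFFFFFFFFFFF) | 0x0
    = old

The value 0xFFFFFFFFFFFFFFFF is never written, because the all-zero mask selects none of its bits.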

Removing property from operation in Cumulocity does not work?

I would like to remove a property from an operation in Cumulocity. I use the following code:
private final DeviceControlApi deviceControl;
OperationRepresentation operation = deviceControl.getOperation(new GId("some_op_id"));
operation.removeProperty("the_property_to_be_removed");
deviceControl.update(operation);
But after executing this piece of code the property is still there.
What is the right way to remove a property from operation?
PUTs (updates) in Cumulocity IoT always do a merge at the root level of the JSON, so that you can do a partial update.
If you want to remove a property with your PUT request, you need to explicitly set it to null.
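For example, a raw REST update along these lines should clear the property (a sketch: the endpoint follows the Cumulocity operations API, "some_op_id" is the placeholder from the question, and the exact Content-Type may need to be the operation media type):

PUT /devicecontrol/operations/some_op_id
Content-Type: application/json

{
  "the_property_to_be_removed": null
}

If the Java client drops null properties when serializing, falling back to a raw PUT like this is one way to verify the merge behavior.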

How to extract large amounts of data (more than 100 MB) from Snowflake into CSV

I am trying to export large amounts of data from Snowflake into a CSV. I saw a similar question, and the solution given was to "Run the query as part of a COPY INTO {location} command to an internal stage, and then use a GET command to pull it down locally."
I tried following the guide and ran the following, but received the error "SQL compilation error: syntax error line 4 at position 3 unexpected 'file_format'."
I am not sure how to fix this, or even if the first part of my syntax is correct. Can someone please help?
copy into @my_stage/result/data_ from (select *
from "IRIS"."PRODUCTION"."VW_ALL_IIS_LHJ"
where (RECIP_ADDRESS_COUNTY = 06065 or ADMIN_ADDRESS_COUNTY = 06065)
file_format=(TYPE='CSV');
[ HEADER = TRUE]
get @%my_stage/result/data.csv/;
I'm pretty sure the issue is that you're missing a closing parenthesis. Try:
copy into @my_stage/result/data_ from (select *
from "IRIS"."PRODUCTION"."VW_ALL_IIS_LHJ"
where (RECIP_ADDRESS_COUNTY = 06065 or ADMIN_ADDRESS_COUNTY = 06065))
file_format=(TYPE='CSV');
[ HEADER = TRUE]
get @%my_stage/result/data.csv/;
Sorry - I don't have a way to test this.
You are missing a parenthesis after the WHERE clause. You opened one parenthesis after the first FROM and another at the WHERE clause, but you only closed the WHERE one.
Also, AFAIK, you don't need to call a GET if the stage was properly set. The COPY INTO command will place the result in your stage; you then retrieve it the normal way of accessing the stage you specified. So if you sent it to an S3 bucket, you'd just access the resource from S3 as if it were any other file.
Lastly, remember there are many useful parameters you can set in the FILE_FORMAT, such as RECORD_DELIMITER, COMPRESSION, and how to handle nulls.
And remove the semicolon after the CSV file format; it ends the statement early, and HEADER is not a valid instruction on its own.
Also, you don't have to put HEADER = TRUE between brackets. Brackets in documentation mean the parameter is optional.
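Putting those fixes together, the whole unload might look like this (an untested sketch, same caveat as the first answer; the stage and paths are the asker's, file:///tmp/data/ is a placeholder local directory, and note that GET runs from SnowSQL or a client driver, not the web UI):

copy into @my_stage/result/data_ from (
    select *
    from "IRIS"."PRODUCTION"."VW_ALL_IIS_LHJ"
    where (RECIP_ADDRESS_COUNTY = 06065 or ADMIN_ADDRESS_COUNTY = 06065)
)
file_format = (TYPE = 'CSV')
header = true;

get @my_stage/result/ file:///tmp/data/;

Note that the stage reference should match between the two commands; the question mixes @my_stage (a named stage) and @%my_stage (a table stage).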

How to add a whole package to a transport request by code?

My task is to do all these steps programmatically:
1. Create a new transport request. I managed to do this with TR_INSERT_REQUEST_WITH_TASKS.
2. Add the package content to the newly created transport. This is the part I am stuck on.
3. Release the transport. I managed to do this with TR_RELEASE_REQUEST.
My problem is that I can manually add the package to the transport request via transaction SE03 and then release it with FM TR_RELEASE_REQUEST, but that is not the goal: everything from step 1 to 3 has to happen in one program execution. If anyone can guide me on how to do step 2, it would be very helpful. Thanks in advance.
In your program, you must:
First, get the list of objects which belong to the package via the table TADIR (object in columns PGMID, OBJECT and OBJ_NAME, package in column DEVCLASS).
Then add these objects to the task or transport request via the non-released function modules TRINT_APPEND_COMM or TR_APPEND_TO_COMM_OBJS_KEYS.
To add the whole package into a request you must first select all the objects from the package and add them one by one. You can do it like this:
DATA: l_trkorr  TYPE trkorr,
      l_package TYPE devclass VALUE 'ZPACKAGE'.

" Collect the package and all of its subpackages
cl_pak_package_queries=>get_all_subpackages( EXPORTING im_package     = l_package
                                             IMPORTING et_subpackages = DATA(lt_descendant) ).
INSERT VALUE cl_pak_package_queries=>ty_subpackage_info( package = l_package ) INTO TABLE lt_descendant.

" Read every repository object that belongs to one of these packages
SELECT pgmid, object, obj_name FROM tadir
  FOR ALL ENTRIES IN @lt_descendant
  WHERE devclass = @lt_descendant-package
  INTO TABLE @DATA(lt_segw_objects).

" Add the objects to the workbench request one by one
DATA(instance) = cl_adt_cts_management=>create_instance( ).
LOOP AT lt_segw_objects ASSIGNING FIELD-SYMBOL(<fs_obj>).
  TRY.
      instance->insert_objects_in_wb_request( EXPORTING pgmid    = <fs_obj>-pgmid
                                                        object   = <fs_obj>-object
                                                        obj_name = CONV trobj_name( <fs_obj>-obj_name )
                                              IMPORTING result   = DATA(result)
                                                        request  = DATA(request)
                                              CHANGING  trkorr   = l_trkorr ).
    CATCH cx_adt_cts_insert_error.
      " Object is locked in another request - skip it
  ENDTRY.
ENDLOOP.
Note that you cannot add objects that are already locked in another request; trying to do so raises the cx_adt_cts_insert_error exception. There is no way to unlock objects programmatically, only via the SE03 tool.
You can also check the code behind Write Transport Entry, reachable in SE80 from the right-click menu on a package.

How to use insert_job

I want to run a BigQuery SQL query using the insert method. I ran the following code:
JobConfigurationQuery = Google::Apis::BigqueryV2::JobConfigurationQuery
bq = Google::Apis::BigqueryV2::BigqueryService.new
scopes = [Google::Apis::BigqueryV2::AUTH_BIGQUERY]
bq.authorization = Google::Auth.get_application_default(scopes)
bq.authorization.fetch_access_token!
query_config = {query: "select colA from [dataset.table]"}
qr = JobConfigurationQuery.new(configuration:{query: query_config})
bq.insert_job(projectId, qr)
and I got an error as below:
Caught error invalid: Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0:
Please let me know how to use the insert_job method.
I'm not sure what client library you're using, but insert_job takes a Job whose configuration field is a JobConfiguration. You should create one of those and set its query field to the JobConfigurationQuery you've created.
This is necessary because this one API method can insert various kinds of jobs (query, load, copy, extract), and they all take a single configuration object with a subfield that specifies the job type and its details.
More info from BigQuery's documentation:
jobs.insert documentation
job resource: note the "configuration" field and its "query" subfield
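For example, with the google-api-client Ruby gem the question appears to use, the nesting would look roughly like this (an untested sketch; "your-project-id" is a placeholder):

require "google/apis/bigquery_v2"
require "googleauth"

bq = Google::Apis::BigqueryV2::BigqueryService.new
bq.authorization = Google::Auth.get_application_default([Google::Apis::BigqueryV2::AUTH_BIGQUERY])
bq.authorization.fetch_access_token!

# Job -> JobConfiguration -> JobConfigurationQuery, so the API sees
# exactly one job-specific configuration object.
job = Google::Apis::BigqueryV2::Job.new(
  configuration: Google::Apis::BigqueryV2::JobConfiguration.new(
    query: Google::Apis::BigqueryV2::JobConfigurationQuery.new(
      query: "select colA from [dataset.table]"
    )
  )
)

bq.insert_job("your-project-id", job)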

BizTalk Receive binary file correlated on ReceivedFileName

I'm having a problem routing a binary file message to a running instance of an Orchestration using correlation on the context property ReceivedFileName. The correlation is initialized by a send with a dummy file, where the Orchestration sets the ReceivedFileName context property of the message and the property gets promoted. After that, routing fails for the message being received (as XmlDocument), and I can see that the ReceivedFileName context property of that message has not been promoted. Should it be like that? I can't figure out any way to get it promoted, so I just want to make sure it should be like this.
The file names are identical, but I noticed that the ReceivedFileName property of the sent message doesn't have the path, whereas the received message has path + file name. I have tried to add the path to the sent message (sounds strange, though; I read it somewhere), but it doesn't change the outcome.
While you can set Context Properties in an Orchestration, they are not Promoted.
You have to use the Correlation Technique described here to have the Properties Promoted when they hit the MessageBox: http://blogs.biztalk360.com/property-promotion-inside-orchestration/
Basically, you Initialize a Correlation Set based on the Properties you need Promoted.
As Ben Runchey pointed out in a comment above, one has to resort to a custom pipeline and promote FILE.ReceivedFileName there by calling:
inmsg.Context.Promote("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties", receivedFileName);
I also removed the path from FILE.ReceivedFileName to keep only the file name, by reading the value with:
inmsg.Context.Read("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties")
and then altering it and writing it back with:
inmsg.Context.Write("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties", receivedFileName);
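Put together, the relevant part of such a pipeline component might look like this (a sketch; Execute is the IComponent entry point from Microsoft.BizTalk.Component.Interop, IBaseMessage comes from Microsoft.BizTalk.Message.Interop, and the variable names are illustrative):

public IBaseMessage Execute(IPipelineContext context, IBaseMessage inmsg)
{
    const string ns = "http://schemas.microsoft.com/BizTalk/2003/file-properties";

    // The adapter writes path + file name; keep only the file name
    // so it matches the value set on the send side.
    var receivedFileName = (string)inmsg.Context.Read("ReceivedFileName", ns);
    receivedFileName = System.IO.Path.GetFileName(receivedFileName);

    // Write the trimmed value back and promote it so the MessageBox
    // can use it for correlation and routing.
    inmsg.Context.Write("ReceivedFileName", ns, receivedFileName);
    inmsg.Context.Promote("ReceivedFileName", ns, receivedFileName);

    return inmsg;
}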