How can I use the RDF4J console to programmatically create a repository? - sesame

As explained here, it is easy to clear an existing repository and load new datasets. However, due to the dialogue nature of the create command, I fail to see how I can set up a repo just using a script. Moreover, the REST API documentation seems to omit the possibility of creating a repo; it only covers removing one.

Just put the expected inputs for the dialogs in your script, one on each line. For example, to create an in-memory repository called 'test-script', run a query against it, and then close it:
create memory.
test-script
testing using a script
10000
true
0
org.eclipse.rdf4j.query.algebra.evaluation.impl.StrictEvaluationStrategyFactory
open test-script.
select * where {?s ?p ?o }.
close.
quit.
As for creating a repo via the REST API, that's possible, but somewhat underdocumented (mainly because it is cumbersome). If you need programmatic access to this kind of stuff, it's far easier to use the RDF4J Java APIs.
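For reference, a minimal Python sketch of that REST call, using only the standard library. Creating a repository is a PUT of a repository config (in Turtle) to /repositories/{id}. The server URL is an assumption (a default local RDF4J server), and the exact config vocabulary can vary between RDF4J versions, so treat this as a starting point rather than a definitive recipe:

```python
import urllib.request

RDF4J_SERVER = "http://localhost:8080/rdf4j-server"  # assumption: default local server

def memory_repo_config(repo_id: str, label: str) -> str:
    """Build a minimal Turtle config for an in-memory SailRepository."""
    return f'''
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix rep:  <http://www.openrdf.org/config/repository#> .
@prefix sr:   <http://www.openrdf.org/config/repository/sail#> .
@prefix sail: <http://www.openrdf.org/config/sail#> .

[] a rep:Repository ;
   rep:repositoryID "{repo_id}" ;
   rdfs:label "{label}" ;
   rep:repositoryImpl [
      rep:repositoryType "openrdf:SailRepository" ;
      sr:sailImpl [ sail:sailType "openrdf:MemoryStore" ]
   ] .
'''

def create_repository(repo_id: str, label: str) -> int:
    """PUT the Turtle config to the repository endpoint; the server
    answers 204 No Content when the repository is created."""
    req = urllib.request.Request(
        f"{RDF4J_SERVER}/repositories/{repo_id}",
        data=memory_repo_config(repo_id, label).encode("utf-8"),
        headers={"Content-Type": "text/turtle"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (against a running server):
# create_repository("test-script", "testing using a script")
```

The same PUT body is what the console's create dialog assembles for you behind the scenes, which is why the scripted-dialog approach above and the REST approach are interchangeable.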

Related

Bulk edit DataStage jobs?

We are repointing a large number (>1000) DataStage jobs from one database to another. As part of this, we will need to make the same changes to a single stage for many jobs.
So far, we have been able to export jobs to XML, edit and reimport. This seems to work, but will require a lot of parsing logic. We also have looked at dsjob, but that tool does not seem to have the ability to edit jobs.
What is the best method (UI or CLI/API) to bulk edit job stages?
Scenarios like this are the reason for using parameters for databases: I recommend using ParameterSets with DBName, User, Password and Schema parameters.
This allows an easy and quick change in one place of a project: the ParameterSet.
Hard-coding all these values will give you a hard time; the export method is one option you already know.
There is also a Connector Migration Wizard. I am not sure whether that tool could be helpful here as well; you might want to search for documentation on it.
Perhaps you can try the RJUT (Rapid Job Update Tool) or the CMT (Connector Migration Tool).
RJUT: https://www.ibm.com/support/pages/rapid-job-update-tool-ibm-infosphere-information-server-datastage-jobs
CMT: https://www.ibm.com/docs/en/iis/11.7?topic=connectors-using-command-line-migrate-jobs

How to programmatically create functions for Log Analytics Workspace

The Functions in Azure Monitor log queries doc explains how to create functions in Log Analytics workspace manually, but doesn't explain how to do it automatically. What is the recommended way of doing it?
I tried to use .create-or-alter function from Kusto.Explorer, but it didn't work (my connection is correct, as I can execute existing functions and read data). The documentation section on function supportability doesn't mention support for it, so no surprise there.
I did discover a create-or-update saved searches API method that seems promising. But calling the API directly is nowhere near as convenient as executing .create-or-alter function from Kusto.Explorer.
Is there an easier way to create functions programmatically? If there is an SDK support for it, could I get links to the relevant methods?
Per this comment on the GitHub issue I opened, titled 'The "Functions in Azure Monitor log queries" doc doesn't explain what is the recommended programmatic way of creating functions #94841', guywi-ms replied:
There are SDKs that you can use to create functions programmatically, including:
C# SDK:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.operationalinsights.models.savedsearch?view=azure-dotnet
Python SDK:
https://learn.microsoft.com/en-us/python/api/azure-mgmt-loganalytics/azure.mgmt.loganalytics.models.savedsearch?view=azure-python
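Both SDKs wrap the same ARM savedSearches endpoint, so as an illustration, here is a hedged standard-library Python sketch of calling it directly. The api-version string is an assumption, and acquiring the Azure AD bearer token is left out; the key detail is that setting functionAlias on a saved search is what makes it callable as a function:

```python
import json
import urllib.request

API_VERSION = "2020-08-01"  # assumption: a recent savedSearches api-version

def saved_search_function(alias: str, query: str, display_name: str,
                          category: str = "Functions") -> dict:
    """Build the ARM request body for a saved search; functionAlias
    is the property that turns it into a callable function."""
    return {
        "properties": {
            "category": category,
            "displayName": display_name,
            "query": query,
            "functionAlias": alias,
        }
    }

def put_saved_search(token: str, subscription: str, resource_group: str,
                     workspace: str, search_id: str, body: dict) -> int:
    """PUT the saved search to the workspace; returns the HTTP status."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}"
        f"/savedSearches/{search_id}?api-version={API_VERSION}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (with a valid ARM token):
# body = saved_search_function("MyFunc", "Heartbeat | take 10", "My function")
# put_saved_search(token, sub_id, "my-rg", "my-workspace", "my-func-id", body)
```

The SavedSearch model classes linked above expose the same properties (category, display name, query, function alias), so the SDK route is just a typed wrapper over this call.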

Retrieve Overwritten Saved Query in Big Query

I accidentally overwrote a saved project query in BQ with a completely unrelated query. I can't find any documentation about retrieving overwritten queries or about any sort of version control. Has anyone done this as well and recovered their query?
Unfortunately, "Saved Query" is a UI-internal feature (see How to access "Saved Queries" programmatically?, and there is a respective feature request, REST API for Saved Queries), so we really have no way to manage or control these cases.
In the meantime, you can use the query history (either in the UI, via the respective API, or in Stackdriver) to locate uses of that query and recreate/re-save it.
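As a sketch of the API route: the BigQuery jobs.list endpoint, called with projection=full, returns each past job including its SQL text, which you can then search for a fragment you remember from the lost query. The endpoint exists, but treat the parameter choices and the token handling here as assumptions, not a definitive recipe:

```python
import json
import urllib.request

def matching_queries(jobs: list, keyword: str) -> list:
    """Pick out the SQL text of past query jobs containing `keyword`."""
    out = []
    for job in jobs:
        sql = job.get("configuration", {}).get("query", {}).get("query", "")
        if keyword in sql:
            out.append(sql)
    return out

def list_recent_jobs(token: str, project: str) -> list:
    """Call jobs.list with projection=full so each entry carries the SQL."""
    url = (f"https://bigquery.googleapis.com/bigquery/v2/projects/{project}"
           "/jobs?projection=full&maxResults=200")
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("jobs", [])

# Usage (with a valid OAuth token):
# jobs = list_recent_jobs(token, "my-project")
# for sql in matching_queries(jobs, "some_table_i_remember"):
#     print(sql)
```

Once you find the text, re-save it in the UI; there is no API to write it back into "Saved Queries" directly, per the feature request mentioned above.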

BigQuery connecting from GSheet without enabling API every time

I have some scripts running from GSheet getting data from BigQuery. However, in order to make the files run, I need to manually enable the API every time for a given sheet.
So the question is: How to enable API within the code, so that if I share the GSheet or make a copy I don't have to go to the script editor and enable the API from there?
Thanks
I am a huge fan of this particular use of the Google ecosystem, so I'm happy to help get others up and running using GSheets with BigQuery! Hopefully it is working well for you!
When sharing the sheet with others, there is no need to alter anything in the script editor at all. The scripts should run and query BigQuery without issue; this has been my experience at least. The obvious caveat to this is that the users you share it with must have access to the Google Developer Project that the BigQuery instance is associated with.
However, when copying the sheet, I do not believe it is possible to have it replicate the connection. This is because when the file is copied, it becomes associated with a new Google Developer Project. Thus, you have to go into the script editor, then go to Resources > Developers Console Project and change the project listed to the one in which you have BigQuery enabled.
Hopefully this helps! Sorry I don't have better news for you!

Single FakeApp for all test in Play Framework

I want to have a single FakeApplication for all my tests.
My final goal is to set up a database and use it in all tests. They should access a single database and share data in it. I cannot use H2, because I rely on some MySQL features (full-text search, for example). But if no application has been started, I can't call "DB.withTransaction". And the application should start only once, because it drops all tables and creates new ones.
How can I do it?
I am using Scala and JUnit. I solved my problem the following way: I created a singleton for my fake application, which is retrieved as an implicit val, so all the work of creating and cleaning the database is done on the first fetch.