Nexus 3 API: how to search for components by groupId and artifactId

I'm looking for a way to request components or assets by groupId and artifactId.
The documentation provides hardly any help on how to create this request.
This doc has been a great help.
Unfortunately, it's not enough to solve my problem, and I'm trying to build a query to request components by groupId and artifactId like this:
Query.builder().where('group = ').param('###').and('name = ').param('###').and('version = ').param('###').build()
The last time I ran my script it threw a java.lang.StackOverflowError. After increasing memory, I got the same result. It seems there are too many components to return, but in my Nexus repository there is only one component with that group, name and version.
What's wrong with this query?
Has anyone gotten past this difficulty (it was so easy with Nexus 2 and the REST API!) and managed to retrieve component information with a Groovy script?

Here is the script I successfully uploaded and use. You can easily add the version to your requests; the args parameter will be, for example, "snapshots-lib,com.m121.somebundle,someBundle,7.2.1-SNAPSHOT,all". As you can see, I decided to filter the sequence locally, because I did not find a way to specify the version parameter in the query.
{
"name": "listgroup",
"type": "groovy",
"content": "import org.sonatype.nexus.repository.storage.Query;
import org.sonatype.nexus.repository.storage.StorageFacet;
import groovy.json.JsonOutput;
def repositoryId = args.split(',')[0];
def groupId = args.split(',')[1];
def artifactId = args.split(',')[2];
def baseVersion = args.split(',')[3];
def latestOnly = args.split(',')[4];
def repo = repository.repositoryManager.get(repositoryId);
StorageFacet storageFacet = repo.facet(StorageFacet);
def tx = storageFacet.txSupplier().get();
tx.begin();
def components = tx.findComponents(Query.builder().where('group = ').param(groupId).and('name = ').param(artifactId).build(), [repo]);
def found = components.findAll{it.attributes().child('maven2').get('baseVersion')==baseVersion}.collect{def version = it.attributes().child('maven2').get('version');\"${version}\"};
// found = found.unique().sort();
def latest = found.isEmpty() ? found : found.last();
tx.commit();
def result = latestOnly == 'latest' ? JsonOutput.toJson(latest) : JsonOutput.toJson(found);
return result;"
}
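Once the script is uploaded, it can be invoked through the Nexus 3 Script API by POSTing the args string as plain text to /service/rest/v1/script/{name}/run. Below is a minimal sketch in Python; the base URL, credentials and file name are placeholders, and the base path can differ on older Nexus versions:
import requests

nexus = 'http://localhost:8081'          # assumption: your Nexus base URL
auth = ('admin', 'admin123')             # assumption: an account allowed to manage scripts

# upload the script (the 'content' field is the Groovy body shown in the JSON above)
payload = {
    'name': 'listgroup',
    'type': 'groovy',
    'content': open('listgroup.groovy').read(),
}
requests.post(f'{nexus}/service/rest/v1/script', auth=auth, json=payload).raise_for_status()

# run it: the request body is the comma-separated args string the script expects
args = 'snapshots-lib,com.m121.somebundle,someBundle,7.2.1-SNAPSHOT,all'
run = requests.post(f'{nexus}/service/rest/v1/script/listgroup/run',
                    auth=auth, data=args,
                    headers={'Content-Type': 'text/plain'})
run.raise_for_status()
print(run.json()['result'])              # the value the Groovy script returned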

Okay,
in fact my needs are:
I want to retrieve the last release version of a component, and its release date, by group and artifact.
I have definitely abandoned the Groovy API option.
One way I found is to use the extdirect API. This "REST" API is used by the Nexus frontend to communicate with the backend. No documentation exists.
I make a call to the extdirect API to retrieve all versions of a component by group and artifact. I parse the results to get the latest release version (snapshots and releases).
It's not ideal, because this call retrieves all the versions across all repositories and the result could be huge.
Another call to the extdirect API then finds the release date from the component id of the latest release version.
I hope someday Nexus publishes official documentation for a useful REST API.
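For reference, here is a rough sketch of what one of those extdirect calls can look like from Python. The /service/extdirect endpoint and the envelope shape are what the UI itself sends (the standard Ext Direct RPC format); the action, method and data values below are placeholders, since the real names have to be captured from the browser's network tab while using the Nexus UI:
import requests

nexus = 'http://localhost:8081'                  # assumption: your Nexus base URL
envelope = {
    'action': '<action name seen in the browser network tab>',
    'method': '<method name seen in the same request>',
    'data': ['<arguments copied from the same request>'],
    'type': 'rpc',
    'tid': 1,
}
resp = requests.post(f'{nexus}/service/extdirect',
                     auth=('admin', 'admin123'),  # assumption: admin credentials
                     json=envelope)
resp.raise_for_status()
print(resp.json())                               # Ext Direct wraps the actual payload in its 'result' field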

Related

Polarion API: How to update multiple work items in the same action

I created a script which updates work items in a Polarion document. However, right now each work item update is a separate save. My script updates all work items in the document, which results in a large number of save actions in the API and thus a lot of clutter in the history.
If you edit a Polarion document yourself, it updates all work items in a single action.
Is it possible to do the same thing with the Polarion API?
I tried:
Using the tracker service to update work items. This only allows a single work item to be updated.
Using the web development tools to try and get information from the API. This seems to use a UniversalService for which no documentation is available at the API site https://almdemo.polarion.com/polarion/sdk/index.html
Update 1:
I am using the Python polarion package to update the work items. Based on the answer by @boaz I tried the following code:
import datetime

project = client.getProject(project_name)
doc = project.getDocument(document_location)
workitems = doc.getWorkitems()

session_service = client.getService("Session")
tracker_service = client.getService("Tracker")

session_service.beginTransaction()
for workitem in workitems:
    workitem.description = workitem._polarion.TextType(
        content=str(datetime.datetime.now()), type='text/html', contentLossy=False)
    update_list = {
        "uri": workitem.uri,
        "description": workitem.description
    }
    tracker_service.updateWorkItem(update_list)
session_service.endTransaction(False)
The login step that @boaz indicated is done in the backend (see: https://github.com/jesper-raemaekers/python-polarion/blob/3e61527cf0f1f3c8614a30289a0a3409d2d8712d/polarion/polarion.py#L103).
However, this gives the following Java exception:
java.lang.RuntimeException: java.lang.NullPointerException
Update 2:
There seems to be an issue with the session. If I call the following code:
session_service.logIn(user, password)
print(session_service.hasSubject())
it prints False.
The same thing happens when using the transaction:
session_service.beginTransaction()
print(session_service.transactionExists())
it also prints False.
Try wrapping your changes in a SessionWebService transaction, see JavaDoc:
sessionService = factory.getSessionService();
sessionService.logIn(prop.getProperty("user"),
        prop.getProperty("passwd"));
sessionService.beginTransaction();
// ...
// your changes
// ...
sessionService.endTransaction(false);
sessionService.endSession();
As shown in the example in Polarion/polarion/SDK/examples/com.polarion.example.importer.
This will commit all your changes in one single SVN commit.

Multiple MongoDB databases related to the same Django REST framework project

We have one Django REST framework (DRF) project which should have multiple databases (MongoDB). Each database should be independent. We are able to connect to one database, but when we switch to another DB for writing, the connection succeeds but the data is stored in the DB that was connected first.
We changed the default DB and everything else we could think of, but nothing changed.
(Note: the solution should be compatible with serializers, because we need to use DynamicDocumentSerializer in DRF-mongoengine.)
Thanks in advance.
When running connect(), just assign an alias to each of your databases, and then for each Document specify a db_alias parameter in meta that points to the appropriate database alias:
settings.py:
from mongoengine import connect

connect(
    alias='user-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
connect(
    alias='book-db',
    db='test',
    username='user',
    password='12345',
    host='mongodb://admin:qwerty@localhost/production'
)
models.py:
from mongoengine import Document, StringField

class User(Document):
    name = StringField()
    meta = {'db_alias': 'user-db'}

class Book(Document):
    name = StringField()
    meta = {'db_alias': 'book-db'}
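With that in place each model reads from and writes to its own connection; a quick sanity check (assuming the two connect() calls from settings.py above have already run):
# User documents go to the 'user-db' connection, Book documents to 'book-db'
User(name='alice').save()
Book(name='Dune').save()
print(User.objects.count())   # counts documents on the 'user-db' connection
print(Book.objects.count())   # counts documents on the 'book-db' connection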
I guess I finally get what you need.
What you could do is write a really simple middleware that maps your url schema to the database:
from mongoengine import Document

class DBSwitchMiddleware:
    """
    This middleware is supposed to switch the database depending on the request URL.
    """
    def __init__(self, get_response):
        self.get_response = get_response
        # list all the mongoengine Documents in your project
        import models
        self.documents = [item for item in vars(models).values()
                          if isinstance(item, type) and issubclass(item, Document) and item is not Document]

    def __call__(self, request):
        # depending on the URL, switch the documents to the appropriate database
        if request.path.startswith('/main/project1'):
            for document in self.documents:
                document._meta['db_alias'] = 'db1'
        elif request.path.startswith('/main/project2'):
            for document in self.documents:
                document._meta['db_alias'] = 'db2'
        # delegate handling the rest of the request to your views
        response = self.get_response(request)
        return response
Note that this solution might be prone to race conditions. We're modifying the Document classes globally here, so if one request has started and then, in the middle of its execution, a second request is handled by the same Python interpreter, it will overwrite the document._meta['db_alias'] setting and the first request will start writing to the other request's database, which will corrupt your data horribly.
The same Python interpreter is shared by two request handlers if you're using multithreading, so with this solution you can't start your server with multiple threads, only with multiple processes.
To address the threading issues you can use threading.local(). If you prefer a context-manager-style approach, there's also the contextvars module.
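For illustration, here is a minimal sketch of the threading.local() variant: the middleware records the alias per worker thread instead of mutating the Document classes, and the views decide how to use it (the path prefixes and alias names reuse the ones above; applying the alias to each query is left to the view, e.g. via mongoengine's switch_db context manager):
import threading

_request_state = threading.local()

class DBSwitchMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # each worker thread sees its own copy of this attribute
        _request_state.db_alias = 'db2' if request.path.startswith('/main/project2') else 'db1'
        return self.get_response(request)

# a view can then read _request_state.db_alias and wrap its queries accordingly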

PyRal: specific Rally queries work but generic queries return no data

When I use the PyRal get() function no results are returned, but if I use specific built-in get functions (e.g., getProjects(), getWorkspaces()) all data is returned correctly. Am I using the general get() incorrectly or do I have a configuration issue?
For the setup:
import sys
from pyral import Rally, rallyWorkset
server, user, password, apikey, workspace, project = <appropriate values>
rally = Rally(server, user, password, apikey=apikey, workspace=workspace, project=project)
These calls respond correctly (i.e., expected data returned):
workspacesAll = rally.getWorkspaces()
projectsAll = rally.getProjects(workspace=workspace)
No data returned for this call, no errors. The User Story exists in Rally.
query_criteria = 'FormattedID = "US220220"'
response = rally.get('HierarchicalRequirement', fetch=True, query=query_criteria)
I have also tried using "UserStory" instead of "HierarchicalRequirement", as well as other query criteria, all without success.
It will work if you go through all projects (not just the parent project):
query_criteria = 'FormattedID = "US220220"'
response_req = rally.get('HierarchicalRequirement', fetch=True, projectScopeDown=True, query=query_criteria)
response = response_req.next()
print(response.details())

Issue getting iteration data from the Rally API

I am using the following URL to get the iteration data from Rally.
I then parse the JSON data received.
def query = URLEncoder.encode("(Project.Name contains \"1 Prime Infrastructure\")", "UTF-8")
def rallyURL = "https://us1.rallydev.com/slm/webservice/v2.0/iteration?query="+query+"&fetch=true&start=1&pagesize=200"
The issue is that it gives 0 records. But when I change the name to some other project, the data comes back.
It is probably because of the default workspace for my username and password. I want project data from a different workspace.
I have access to all these workspaces.
Can someone tell me how to set the workspace before making an API call so that I can get the iteration data?
Thanks,
You can simply include a workspace parameter in your url to override the default:
&workspace=/workspace/12345
You can also always further refine your results to a specific project:
&project=/project/12345
Or to a specific hierarchy:
&projectScopeUp=true
&projectScopeDown=true
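Putting that together with the URL from the question (the workspace OID 12345 is a placeholder for your real workspace ObjectID, and the query part still needs to be URL-encoded as in the original snippet):
https://us1.rallydev.com/slm/webservice/v2.0/iteration?query=(Project.Name contains "1 Prime Infrastructure")&fetch=true&start=1&pagesize=200&workspace=/workspace/12345&projectScopeDown=true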

How do I dynamically trigger downstream builds in Jenkins?

We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating the test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream builds.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it
            val += value
        }
    }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)
jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script:
import hudson.model.*
import jenkins.model.*
def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
    job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
    println "Scheduling job name is: ${job.name}"
    job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"), new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")]))
}
If you don't need to pass properties from one build to the other, just take the ParametersAction out.
The build you scheduled will have the same "cause" as your initial build. That's a nice way to pass in the "Changes". If you don't need this, just don't use new Cause.UpstreamCause(build) in the function call.
Since you are already starting the downstream jobs dynamically, how about you wait until they are done and copy the test result files (I would archive them on the downstream jobs and then just download the build artifacts) to the parent workspace? You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. In the post-build step of the parent job, configure the appropriate test plugin.
Using the Groovy Postbuild Plugin, maybe something like this will work (haven't tried it):
def job = manager.hudson.getItem(jobname)
manager.hudson.queue.schedule(job, 0)
I am actually surprised that if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job) the aggregated results are not picked up. In my understanding, Jenkins simply looks at md5sums to relate jobs ("Aggregate downstream test results"), and triggering via the CLI should not affect aggregating results. Somehow there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using an "Execute system groovy script" build step:
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)