Is it possible to add a new wiki entry in a GitLab project using a standard merge request? - automation

Using free, self-managed GitLab
Because group-level custom project templates are available only on paid tiers (and even there with restrictions), I am looking into an alternative solution using a simple project, where
in the code repository each branch provides a template
in the wiki each code repository branch has an entry documenting what the template does
I know that the wiki and the code are actually two separate repositories.
By its nature, a template is a construct that offers a pre-made setup for working on a recurring task. A group template adds the additional restriction that the recurring task applies to more than one individual.
In order to limit tomfoolery and people pushing whatever they want thinking it's worth becoming a group-level template (even though they made something real quick to tackle a problem that has long been forgotten and that even they themselves will never work on again), I would like to impose access restrictions on all members. Besides the maintainer/owner, all other members are assigned the Developer role. All branches are protected, so changing an existing branch or creating a new one can only be done through a merge request, which leads to an assessment of whether the committed changes are actually worthy of becoming a template for the whole group.
Many members of my group have the bad habit of choosing poor names for the functionality they have developed (e.g. a script called jennifers_help_script_23.py) and not documenting what was actually implemented. And yes, we are not a software development company but a research institute. :D So, in order to improve the documentation and the ability to actually reuse some of the quite useful things that people have developed, I would like to make it mandatory for people to provide documentation if they want their work to be added to the project.
So the question here is: can a user submit a code merge request that also acts as a merge request for a change in the wiki (e.g. the user has created a new template, which also requires a new wiki page documenting that template), or do the two have to be handled separately given the nature of a GitLab project (wiki separate from code)?
I was thinking that maybe each branch (representing a template) could contain a markdown file that is inserted into the wiki automatically after the merge request has been approved. However, I don't know how to automate this. I am currently looking into uploading a file to the wiki using the GitLab API, hoping I can somehow add a trigger in GitLab to execute the "command" upon a successful merge; a rough sketch of that idea is below. Needless to say, I am quite new to all of this.
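For what it's worth, a minimal sketch of that idea in Python: a script that takes a markdown file from the just-merged branch and pushes it into the project wiki through the wiki REST API (POST/PUT /projects/:id/wikis). The file path docs/template.md and the WIKI_TOKEN variable are assumptions rather than anything GitLab prescribes; the CI_* variables are the ones GitLab CI provides to jobs.

```python
# Sketch: push docs/template.md from the just-merged branch into the project wiki.
# Assumes the job has CI_API_V4_URL, CI_PROJECT_ID, CI_COMMIT_REF_NAME and a
# WIKI_TOKEN variable (a project access token with the "api" scope); adjust names
# to your setup.
import os
import pathlib
import requests

api_url = os.environ["CI_API_V4_URL"]      # e.g. https://gitlab.example.com/api/v4
project = os.environ["CI_PROJECT_ID"]
token = os.environ["WIKI_TOKEN"]
branch = os.environ["CI_COMMIT_REF_NAME"]  # branch name doubles as the wiki page title

content = pathlib.Path("docs/template.md").read_text(encoding="utf-8")
headers = {"PRIVATE-TOKEN": token}

# Try to create the page; if a page with this title already exists, update it instead
# (assuming the slug matches the title, which holds for simple branch names).
resp = requests.post(
    f"{api_url}/projects/{project}/wikis",
    headers=headers,
    data={"title": branch, "content": content, "format": "markdown"},
)
if resp.status_code == 400:
    resp = requests.put(
        f"{api_url}/projects/{project}/wikis/{branch}",
        headers=headers,
        data={"content": content, "format": "markdown"},
    )
resp.raise_for_status()
```

Run from a pipeline job that is restricted to the protected template branches, this would only fire once a merge request has actually been accepted, keeping the wiki page for each template in sync with the markdown file that was reviewed in the merge request.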

Related

Write conflict: I want to always Drop Changes

I have a split database and I have duplicated the front-end file to make multiple copies for different users. Every time a change is made on one front-end form, I want the other forms in the other front-ends to always drop changes. How can I trap this write conflict to always drop changes, maybe through VBA if possible?
Not quite sure what you mean by "drop changes" - the frontend should never be redesigned during normal use.
You must distribute a new copy of the frontend to the users.
A smooth and proven method using a shortcut and a script is described in my article:
Deploy and update a Microsoft Access application with one click
(If you don't have an account, browse for the link: Read the full article)
Edit:
If it is the data that is updated by several users, and you update via VBA, you may study another of my articles:
Handle concurrent update conflicts in Access silently
Though simple to use, the code is a bit too much to post here. It is also on GitHub:
VBA.ConcurrencyUpdates

Migrate from youtrack to jira

After using YouTrack for quite a while, my organization is considering a move to JIRA (for many reasons). However, JIRA doesn't seem to include a YouTrack importer/migration out of the box (though there seem to be plenty of importers/migrations the other way around).
Has anyone migrated from YouTrack to JIRA and have any experience with this?
Edit:
To anyone who might have this problem later, my final solution ended up something like this:
transfer all "basic" data by hand (user accounts, basic project setup, etc.)
write a small C# program using the Atlassian SDK and the YouTrack SDK that transfers from one to the other (creating empty placeholder issues where issues were missing because someone had deleted them in YouTrack, in order to keep the numbering).
This approach worked well enough and I managed to transfer pretty much all data without losing anything very important (though of course all timestamps are messed up now, but we saw that as an acceptable loss).
Important to know is that YouTrack handles issues moved from one project to another a bit counter-intuitively (they still show up in their first project even after they have been moved away from there, but they carry an issue id from their new project - a slight wtf when I ran into that the first time).
Also, while the Atlassian SDK did allow me to "spoof" the creator of an issue (that is, being logged in as user A and creating an issue while telling the system that it's actually user B who is creating it), it does not allow you to do this with comments. So in order to transfer those properly I had to actually loop through the comments, log in as the corresponding new user and post the comments.
Also, attachments from YouTrack were a bit annoying to download, so I ended up downloading those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
I had the same problem. After a discussion with the JIM (JIRA Importer) developer, I used the YouTrack REST API and a Python script to create JSON files. Then I used the JIM JSON import.
With this solution you can import almost all fields from YT - the standard ones and files with descriptions, links between issues and projects, and so on...
I don't know if I can push it to GitHub; I have to ask my boss - I did it during my work hours... But of course you can ask me if you want.
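In the same spirit, here is a hedged sketch of what such a script could look like against a newer YouTrack REST API; the endpoint, the field list and the exact JSON layout JIM accepts should all be double-checked against the documentation for the versions you actually run.

```python
# Sketch: pull issues from YouTrack and write a JSON file for JIRA's JSON importer (JIM).
# Endpoint, field names and the output layout are assumptions; verify them against the
# YouTrack REST API and the JIM JSON import docs for your versions.
import json
import requests

YOUTRACK = "https://youtrack.example.com"  # hypothetical base URL
TOKEN = "perm:xxxx"                        # permanent token with read access
PROJECT = "DEMO"

resp = requests.get(
    f"{YOUTRACK}/api/issues",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    params={
        "query": f"project: {PROJECT}",
        "fields": "idReadable,summary,description,reporter(login)",
        "$top": 1000,
    },
)
resp.raise_for_status()

issues = [
    {
        "externalId": i["idReadable"],
        "summary": i.get("summary") or "(no summary)",
        "description": i.get("description") or "",
        "reporter": (i.get("reporter") or {}).get("login", "unknown"),
    }
    for i in resp.json()
]

jim_payload = {"projects": [{"key": PROJECT, "name": PROJECT, "issues": issues}]}
with open("jim-import.json", "w", encoding="utf-8") as fh:
    json.dump(jim_payload, fh, indent=2)
```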
The easiest approach is probably to export the data from YouTrack into CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.
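If the CSV route is chosen, the reshaping step might look something like the following; the column names on both sides are assumptions and need to match your actual export and the mapping you configure in the importer wizard.

```python
# Sketch: reshape a YouTrack CSV export so the JIRA CSV importer can map it.
# Column names on both sides are assumptions; adjust them to your actual files.
import csv

with open("youtrack-export.csv", newline="", encoding="utf-8") as src, \
     open("jira-import.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["Summary", "Description", "Reporter", "Issue id"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "Summary": row.get("summary", ""),
            "Description": row.get("description", ""),
            "Reporter": row.get("reporterName", ""),
            "Issue id": row.get("issueId", ""),
        })
```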

Importing Product in Adobe CQ5

I have a question about how we can import/synchronise products from our back-office into the CQ5 front end.
The to-be architecture is pretty simple - a custom back-office managing all the products (basically it will be the source of truth), and a CQ5-driven web-site to show search results (driven by Adobe SearchAndPromote) and product details. Purchase transactions will be handled outside of CQ5.
I went through http://dev.day.com/docs/en/cq/current/ecommerce/eCommerce-framework.html and I think I have some idea of which direction we should move in, but I would like someone to confirm that my understanding is correct.
1) I need to create a scheduled job running on the Author node that would call the back-office and import the products as a JSON feed. I use the annotation-based @Service(Runnable.class) - is there a way to set it up so it runs on the Author node only?
2) Create a custom service (called from the scheduled job above) that will actually create all the nodes in CRX. If I have desktop and mobile versions of the site, do I need to create all those nodes twice? Are there any tips on an easier way to create them?
3) Let CQ5 replicate those products to publish nodes.
Is there an easier way? I mean, if I were using a more standard web-app I would have one controller to show product details, two templates (one for mobile, one for desktop) and a service that would call the back-office and return the details for the requested product. But the Sling world is very different, and I want to check whether I understand it correctly.
Cheers.
Here are some answers:
1) Here is a good article about different configs for different runmodes: http://helpx.adobe.com/cq/kb/RunModeSetUp.html - you can create configs for the pub and auth runmodes with a certain flag that your code will look for, which will tell it whether to execute the import or not.
2) It depends. CQ tends to keep copies of content for a mobile site, so it may make sense to create copies of the nodes for the mobile site, but only in case those nodes are pages (cq:Page and cq:PageContent) that you create based on the imported data. Otherwise you just need to save the imported data somewhere and obtain it at some point (via JCR queries or methods like .getNode()). In that case it of course makes sense not to copy your data.
3) It depends here as well. I would consider the following forces you may have: should the imported data be editable? How frequent are the updates? How massive are they? How critical is consistency across pubs? In case updates are not massive, not frequent, and consistency matters, an import to auth followed by replication can work. It may also be the right choice if you need to be able to edit the imported data. In case updates are massive and/or frequent and consistency across pubs does not matter much (you can afford that some people may see different results from different pubs during an import), I'd suggest running the import on all pubs at the same time, since massive replication of imported data may affect regular page/image replication.
Thanks,
Max.

Changing createDate on Liferay Journal Article (Web Content) via Liferay API

So here's the situation. I want to add 'old' news from our previous website into an asset publisher portlet on our new Liferay 6.1 site. The problem is that I want them to show up as if I had added them in the past.
So, I figure, how hard can it be to modify the createDate? I've since been able to directly access the MySQL database and perform updates on the article object's createDate field. However, it doesn't seem to propagate to my Liferay deployment, regardless of clearing caches, reindexing the search indices, and restarting Liferay. The web content still maintains its 'original' createDate even though the database shows the value I changed it to.
Here's the query I used:
mysql> UPDATE JournalArticle SET createDate='2012-03-08 15:17:12' WHERE ArticleID = 16332;
I have since learned that it is a no-no to directly manipulate the database, as the relationship between the database and Liferay isn't as straightforward as Liferay simply performing lookups. So it looks like I might need to use the Liferay API, namely setCreateDate, as seen here.
But I have absolutely no idea where and how to leverage the API. Do I need to create a dummy portlet with the sole purpose of using this API call? Or can I create a .java file somewhere on the server running my Liferay deployment and run it to leverage this method?
I only have like 15 articles I need to do this to. I can find them by referencing the ArticleID and GroupID.
Any help would be greatly appreciated. I've grepped the Liferay deployment and found setCreateDate being used heavily within .java files inside the knowledge-base-portlet, but I can't tell how to use it directly without creating a portlet.
On the other hand, if anybody knows how to get my database changes to propagate to the Liferay deployment, even though I know it's a dirty hack, that would probably be the easiest.
Thanks; I really appreciate it.
Using the Liferay API is of course the cleaner and better way, but for only 15 articles I would try to change it directly through the database.
I checked the database and it seems that Liferay stores the data in these tables: JOURNALARTICLE and ASSETENTRY.
Try to change the created date in both these tables.
Then reload the cache: Control Panel -> Server Administration --> Clear Database Cache.
You can write a hook for the application startup event. This way, whenever Liferay starts, it will change the create date as you desire. Later, if you want to remove the hook, it can be done easily. See here for how to create a hook and deploy it:
http://www.liferay.com/community/wiki/-/wiki/Main/Portal+Hook+Plugins
Also, changing the database directly is not recommended at all, even for a single value/article. Always use the Liferay-provided service API to make modifications.
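If building a portlet or hook feels like overkill for ~15 articles, one further option (assuming the Liferay 6.1 script console under Control Panel -> Server Administration -> Script is available, which accepts Python/Jython among other languages) is a one-off script along these lines. The ids below are placeholders and the service method names should be verified against your exact Liferay version:

```python
# One-off Jython sketch for the Liferay 6.1 script console: adjust createDate on a
# handful of journal articles. Ids are placeholders; verify the method names against
# the JournalArticleLocalServiceUtil of your Liferay version.
from java.text import SimpleDateFormat
from com.liferay.portlet.journal.service import JournalArticleLocalServiceUtil

fmt = SimpleDateFormat("yyyy-MM-dd HH:mm:ss")

# (groupId, articleId, new create date) for the articles to fix
articles = [
    (10180, "16332", "2012-03-08 15:17:12"),
]

for group_id, article_id, date_str in articles:
    # getArticle(groupId, articleId) returns the latest version of the article
    article = JournalArticleLocalServiceUtil.getArticle(group_id, article_id)
    article.setCreateDate(fmt.parse(date_str))
    JournalArticleLocalServiceUtil.updateJournalArticle(article)
```

As noted above, the ASSETENTRY row keeps its own create date, so the asset publisher may additionally need the corresponding asset entry updated (or at least a cache clear plus reindex) before the change becomes visible.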

How to prevent Trac from showing some commits in the Timeline?

I'm trying to configure a Trac server we are using in my team, in order to avoid an undesired behaviour. We mainly develop free and open-source software in the team, but we sometimes need to be able to keep our early prototypes completely private.
Because of the first constraint, we want our timeline to be visible to anonymous users. But because of the second constraint, we want some commits to be completely hidden from the external world, i.e. we don't want anybody other than us to be able to read the message and content of some commits in the timeline.
Unfortunately, I've been unable to configure Trac in a way that achieves this behaviour so far. I can't find a configuration that would let me manage the Timeline content with enough accuracy.
Consequently, I would like to know if such a configuration is possible with trac.
For information, I'm using Trac 0.12.2. The installed plugins are :
Trac 0.12.2
TracAccountManager 0.2.1dev-r7731
TracNav 4.1
The only permission I can see that is related to Timeline is TIMELINE_VIEW.
EDIT :
I forgot to mention something. We don't want to lose the private commits, and we want them to be displayed for registered users. Consequently, removing them from the database is not a solution for us.
EDIT 2 :
Ideally, we would like the commit messages to be displayed according to the right to read the content of our Subversion repository. The idea is that if a commit is made on a part someone can't access, that person should not be able to read the message of the commit either.
EDIT 3 :
If we look in the Trac configuration file, we already find:
permission_policies = AuthzSourcePolicy, DefaultPermissionPolicy, LegacyAttachmentPolicy
and the authz_file variable is properly set too. Moreover, the private folders of the svn repositories can't be accessed by anonymous users over svn.
You should set up authz checking for both your Subversion repository and your Trac installation. You can use the same permission file for both. For Subversion, see Path-based authorization in the SVN book. For Trac, enable and configure the trac.versioncontrol.svn_authz.AuthzSourcePolicy component.
This will allow you to have a very fine-grained control over who can access which part of the repository. Note that the implementation of AuthzSourcePolicy in Trac 0.12.2 has a few bugs that will be fixed in 0.12.3.
There are two ways of going about this:
1) You can directly edit the plugins that are running in Trac and add a module that helps you filter these commits out at the code level (i.e. you can edit the behaviour of the script to, say, only include commits that exclude certain keywords); a rough sketch of this idea follows this answer. The timeline script is here (under a Python 2.4 install): /usr/local/lib/python2.4/site-packages/trac/Timeline.py (here is an online diff snapshot of the source code: http://trac.edgewall.org/attachment/ticket/890/Timeline.py.diff)
2) You can remove the commits entirely - Trac commits are derived from the SQLite database (the schema is here: http://trac.edgewall.org/wiki/TracDev/DatabaseSchema).
Of course, there also might be some fancy tools out there that provide a nice interface for editing the way the timeline looks.
Finally - temporarily, you can remove the timeline/roadmap entirely from the trac.ini file : http://www.gossamer-threads.com/lists/trac/users/28079
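For option 1), a minimal sketch of the code-level filtering idea, written as a small Trac permission policy rather than a patch to Timeline.py (the "[private]" marker in the commit message is purely a hypothetical convention; the interfaces are those of Trac 0.12's plugin API):

```python
# Sketch of a Trac 0.12 plugin that hides changesets whose message carries a
# hypothetical "[private]" marker from anonymous users. Drop it in the plugins/
# directory and list the policy in [trac] permission_policies ahead of
# DefaultPermissionPolicy.
from trac.core import Component, implements
from trac.perm import IPermissionPolicy

class PrivateChangesetPolicy(Component):
    implements(IPermissionPolicy)

    def check_permission(self, action, username, resource, perm):
        # Only interfere with changeset viewing; defer everything else.
        if action != 'CHANGESET_VIEW' or resource is None or resource.realm != 'changeset':
            return None
        # Assumes the default repository; multi-repository setups would need to
        # look up the repository name from the resource's parent.
        repos = self.env.get_repository()
        try:
            changeset = repos.get_changeset(resource.id)
        except Exception:
            return None  # unknown revision: let other policies decide
        if '[private]' in (changeset.message or ''):
            return False if username == 'anonymous' else None
        return None
```

That said, the AuthzSourcePolicy route from the earlier answer keeps the rules in the same authz file Subversion already uses, so it is the more maintainable choice when the restriction is really path-based.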
I confess that I have virtually no experience with the repository part of Trac, even less with using a repository with a variety of permissions across its contents.
On the subject: configuration alone is certainly not enough - see rblank's answer. While I've never seen the code for that functionality, I was wrong to suggest it doesn't exist. Because it lives in a central place and is developed/supported in Trac core, this is definitely the way to go.