Liferay JSON-WS API: what is the staging URL?

When a site has live content, it is possible to get the articles you posted using the /api/jsonws/invoke URL and passing a body with:
cmd: {"articles = /journal.journalarticle/get-articles": {"groupId": <gId>, "folderId": 0}}
p_auth: <p_auth>
But when the content is in staging mode, the same API returns an empty list.
Yet there is a proper Web Content manager for the staging content, so there should be an API for it.
Does anyone know the URL for the staging content API, or how I can get this data using a different cmd?
(If possible, without additional servlets or Java development.)
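For reference, the call from the question can be sketched like this. The service path and parameters are taken from the question (using the plain service path, without the "articles =" result-naming prefix); the group ID, p_auth token, and the commented-out fetch call are placeholders you would fill in from your own logged-in portal session.

```javascript
// Build the invoke payload described above. groupId and pAuth are
// placeholders that must come from your own portal session.
function buildInvokeBody(groupId, pAuth) {
  const cmd = {
    "/journal.journalarticle/get-articles": { groupId: groupId, folderId: 0 }
  };
  return new URLSearchParams({ cmd: JSON.stringify(cmd), p_auth: pAuth });
}

// Sketch of the request (requires an authenticated session):
// fetch("/api/jsonws/invoke", { method: "POST", body: buildInvokeBody(12345, "abcd") })
//   .then(r => r.json())
//   .then(console.log);
```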

Related

Trying to make a page in a Confluence Space using REST API

I am trying to do a POST request on my company's Confluence to create a page using the REST API.
When I attempt to do it, I get a JSON response saying that it is forbidden.
I am able to create a page from the GUI, but not with the REST API.
I have used Basic Authentication for the query.
Here is a picture of the request and the response.
My question: how can I create the page in the Confluence space with the REST API?
I have tried making a personal access token and then using it with my request, but that didn't work either.
The request body to create the page is a little different:
Do not use the space as a container (containers are only used for comments or attachments, whose parent is a page).
Do not use the space ID, just its key.
The storage object should have a representation field set to "storage".
Here is the example:
{
  "type": "page",
  "title": "TEST SEITE 2",
  "space": {
    "key": "MY_SPACE_KEY"
  },
  "body": {
    "storage": {
      "value": "<p>This is my storage</p>",
      "representation": "storage"
    }
  }
}
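Putting it together, the request could look like the sketch below. The body builder just reproduces the JSON above; the base URL and credentials are placeholders, and the /rest/api/content endpoint with Basic auth is the standard Confluence Server pattern (Cloud instances prefix it with /wiki).

```javascript
// Build the create-page body shown above from its three inputs.
function buildPageBody(spaceKey, title, html) {
  return {
    type: "page",
    title: title,
    space: { key: spaceKey },
    body: { storage: { value: html, representation: "storage" } }
  };
}

// Sketch of the POST (base URL and credentials are placeholders):
// fetch("https://confluence.example.com/rest/api/content", {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     "Authorization": "Basic " + Buffer.from("user:password").toString("base64")
//   },
//   body: JSON.stringify(buildPageBody("MY_SPACE_KEY", "TEST SEITE 2", "<p>This is my storage</p>"))
// }).then(r => r.json()).then(console.log);
```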

Share dynamic content on LinkedIn

I have a JS-based CMS that populates a single page with different content based on URL parameters passed to the page. I am using the shareArticle URL format (https://www.linkedin.com/shareArticle?mini=true&url=''&title=''&summary=''&source='').
But the parameters I pass are never used; it always falls back to what is being served directly from the server.
Do I have to use the API to make this work, and if so, can I use the API without making the user authenticate?
Is there a correct way to pass this so that LinkedIn will display the correct data?
After testing this more, I realised that LinkedIn's share URL does not take its parameters; it only uses what is served from the server. So I changed my build process to precompile the pages onto the server instead of building them at run time. Maybe in the future LinkedIn will resolve this for dynamic pages.
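A small sketch of the shareArticle format from the question, for reference. Per the conclusion above, LinkedIn scrapes the target page itself, so only a publicly reachable url parameter reliably matters; the other parameters are included only to match the question's format.

```javascript
// Build the shareArticle URL described above, percent-encoding each
// parameter. Only `url` is reliably honoured by LinkedIn's scraper.
function buildShareUrl(url, title, summary, source) {
  const params = new URLSearchParams({ mini: "true", url, title, summary, source });
  return "https://www.linkedin.com/shareArticle?" + params.toString();
}
```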

XWiki tags integration

We are using our own application for tag management. We would like to integrate it with the XWiki Tag application so it would show our tags.
Is it possible to change the source of tags? To use a REST endpoint, our DB, etc.?
I checked the XWiki REST API, but it only allows tagging pages; there is no way to create a tag without tagging a page. Our use case is:
1. Users create tags in our application.
2. A user opens XWiki.
3. Our tags should be available in the auto-suggestion when tagging a page.
Any ideas?
Thank you.
I have not found a way to do this via an existing plugin/extension.
As all requests to our app go through a proxy, I routed the requests that XWiki uses to load tags,
/xwiki/bin/view/Main/?xpage=suggest&classname=XWiki.TagClass&fieldname=tags&input=tag-name
to our endpoint, and it works.
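The proxy rule can be sketched as a small matching function: given an incoming XWiki URL, decide whether it is the tag-suggest call shown above and, if so, map it to the custom tag service. The tags.example.com endpoint is hypothetical; the XWiki path and query parameters are the ones from the answer.

```javascript
// Decide whether an incoming request is XWiki's tag-suggest call and,
// if so, rewrite it to our (hypothetical) tag service.
function rewriteTagSuggest(urlString) {
  const u = new URL(urlString, "http://xwiki.local");
  if (u.pathname === "/xwiki/bin/view/Main/" &&
      u.searchParams.get("xpage") === "suggest" &&
      u.searchParams.get("classname") === "XWiki.TagClass") {
    return "https://tags.example.com/suggest?input=" +
           encodeURIComponent(u.searchParams.get("input") || "");
  }
  return null; // not a tag-suggest request; pass through to XWiki
}
```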

How to dynamically change meta tags for crawlers on a static vue application?

I am building a static vue website with some routes. In one of these routes, I use an Http GET request to get data from a remote server. Something like this:
www.example.com/products/1/
This checks my remote server for a product with an id of 1. The product is then returned, and I use that data to populate the template. The only issue is the meta tags. Ideally, I would like to set my meta tags using the data I receive from my server, especially the open graph tags, so that the Facebook share box is properly configured.
However, since I am getting the data from the server in my created() function, changing the meta tags from the component does not seem to work for Facebook and Google crawlers.
What is the correct way to tackle this issue? Thanks!
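The setup described in the question can be sketched as follows. productMetaTags and the /api/products URL are illustrative, not real APIs; the commented-out created() hook shows where the tags would be written, and, as the question notes, crawlers that do not execute JavaScript will still only see the original static tags.

```javascript
// Build Open Graph tag descriptors from a fetched product object
// (field names are illustrative).
function productMetaTags(product) {
  return [
    { property: "og:title", content: product.name },
    { property: "og:description", content: product.description },
    { property: "og:image", content: product.imageUrl }
  ];
}

// Inside the Vue component (sketch):
// async created() {
//   const product = await fetch(`/api/products/${this.$route.params.id}`).then(r => r.json());
//   for (const tag of productMetaTags(product)) {
//     const el = document.createElement("meta");
//     el.setAttribute("property", tag.property);
//     el.setAttribute("content", tag.content);
//     document.head.appendChild(el);
//   }
// }
```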

How to use regular urls without the hash symbol in spine.js?

I'm trying to achieve URLs in the form of http://localhost:9294/users instead of http://localhost:9294/#/users.
This seems possible according to the documentation, but I haven't been able to get it working for "bookmarkable" URLs.
To clarify, browsing directly to http://localhost:9294/users gives a 404 "Not found: /users".
You can turn on HTML5 History support in Spine like this:
Spine.Route.setup(history: true)
Passing the history: true argument to Spine.Route.setup() enables the fancy URLs without the hash.
The documentation for this is actually buried a bit, but it's here (second to last section): http://spinejs.com/docs/routing
EDIT:
In order to have URLs that can be navigated to directly, you will have to do this server-side. For example, with Rails, you would have to build a way to take the path of the URL (in this case "/users") and pass it to Spine accordingly. Here is an excerpt from the Spine docs:
However, there are some things you need to be aware of when using the
History API. Firstly, every URL you send to navigate() needs to have a
real HTML representation. Although the browser won't request the new
URL at that point, it will be requested if the page is subsequently
reloaded. In other words you can't make up arbitrary URLs, like you
can with hash fragments; every URL passed to the API needs to exist.
One way of implementing this is with server side support.
When browsers request a URL (expecting a HTML response) you first make
sure on server-side that the endpoint exists and is valid. Then you
can just serve up the main application, which will read the URL,
invoking the appropriate routes. For example, let's say your user
navigates to http://example.com/users/1. On the server-side, you check
that the URL /users/1 is valid, and that the User record with an ID of
1 exists. Then you can go ahead and just serve up the JavaScript
application.
The caveat to this approach is that it doesn't give search engine
crawlers any real content. If you want your application to be
crawl-able, you'll have to detect crawler bot requests, and serve them
a 'parallel universe of content'. That is beyond the scope of this
documentation though.
It's definitely a good bit of effort to get this working properly, but it CAN be done. It's not possible to give you a specific answer without knowing the stack you're working with.
I used the following rewrites, as explained in this article:
http://www.josscrowcroft.com/2012/code/htaccess-for-html5-history-pushstate-url-routing/
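The general shape of those rewrites is a fallback rule: any request that does not match a real file or directory is served the application's index.html, so a direct visit to /users reaches the JavaScript router instead of returning a 404. A minimal sketch, assuming Apache with mod_rewrite enabled:

```
# History-pushState fallback: serve index.html for any path that is
# not an existing file (-f) or directory (-d).
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.html [L]
```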