I have already added REST support (link) to my website. The CMS and my website are under different domains. When I use a REST call to pull content such as HTML code, the links in the content are automatically converted based on the mount points (site is the mount point for the Hippo site and rest is the mount point for REST calls).
So obviously, relative paths are not supported if we use REST calls to pull content from the CMS into other websites.
I am just wondering whether there is any way to keep the same relative URLs after the REST call instead of using absolute paths.
Please have a look at the long thread posted here: https://groups.google.com/d/msg/hippo-community/L417REGg2pY/YPtawvh0M8IJ
I have a JS-based CMS that populates a single page with different content based on URL parameters passed to the page. I am using the shareArticle URL format (https://www.linkedin.com/shareArticle?mini=true&url=''&title=''&summary=''&source='').
But the parameters I pass are never used; it always falls back to what is being served directly from the server.
Do I have to use the API to make this work, and if so, can I use the API without making the user authenticate?
Is there a correct way to pass this so that LinkedIn will display the correct data?
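For reference, a link in that format can be assembled like this (a sketch with placeholder values; the one real requirement is that every parameter value is URL-encoded):

// Sketch: assemble the shareArticle link with each parameter URL-encoded.
// The page values below are placeholders, not real data.
const page = {
  url: "https://example.com/article?id=42",
  title: "My Title",
  summary: "A short summary",
  source: "example.com",
};

const shareUrl =
  "https://www.linkedin.com/shareArticle?mini=true" +
  `&url=${encodeURIComponent(page.url)}` +
  `&title=${encodeURIComponent(page.title)}` +
  `&summary=${encodeURIComponent(page.summary)}` +
  `&source=${encodeURIComponent(page.source)}`;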
After testing this more, I realised that LinkedIn's share URL does not take its parameters; it only takes what is served from the server. So I changed my build process not to fetch the pages at run time but to precompile them onto the server. Maybe in the future LinkedIn will have resolved this for dynamic pages.
I have recently launched a website and am therefore trying to figure out the SEO tricks to make it more visible. I use prerender.io to render JavaScript.
Can you please tell me how to show extended URL results (sitelinks) beside the main website link? Is there anything specific I need to do to get the results in that particular format?
For example: here the main URL is Google Voice, and the rest are extended URLs.
Well, there are no rules for this structure. Often my old sites got this structure, but the new one did not.
Google has its own logic for generating this structure.
I'm trying to achieve URLs in the form of http://localhost:9294/users instead of http://localhost:9294/#/users
This seems possible according to the documentation, but I haven't been able to get this working for "bookmarkable" URLs.
To clarify, browsing directly to http://localhost:9294/users gives a 404: "Not found: /users".
You can turn on HTML5 History support in Spine like this:
Spine.Route.setup(history: true)
Passing the history: true argument to Spine.Route.setup() enables the fancy URLs without the hash.
The documentation for this is actually buried a bit, but it's here (second to last section): http://spinejs.com/docs/routing
EDIT:
In order to have URLs that can be navigated to directly, you will have to handle this on the server side. For example, with Rails, you would have to build a way to take the path of the URL (in this case "/users") and pass it to Spine accordingly. Here is an excerpt from the Spine docs:
However, there are some things you need to be aware of when using the
History API. Firstly, every URL you send to navigate() needs to have a
real HTML representation. Although the browser won't request the new
URL at that point, it will be requested if the page is subsequently
reloaded. In other words you can't make up arbitrary URLs, like you
can with hash fragments; every URL passed to the API needs to exist.
One way of implementing this is with server side support.
When browsers request a URL (expecting a HTML response) you first make
sure on server-side that the endpoint exists and is valid. Then you
can just serve up the main application, which will read the URL,
invoking the appropriate routes. For example, let's say your user
navigates to http://example.com/users/1. On the server-side, you check
that the URL /users/1 is valid, and that the User record with an ID of
1 exists. Then you can go ahead and just serve up the JavaScript
application.
The caveat to this approach is that it doesn't give search engine
crawlers any real content. If you want your application to be
crawl-able, you'll have to detect crawler bot requests, and serve them
a 'parallel universe of content'. That is beyond the scope of this
documentation though.
It's definitely a good bit of effort to get this working properly, but it CAN be done. It's not possible to give you a specific answer without knowing the stack you're working with.
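As a concrete illustration of the server-side piece, here is a minimal sketch using Express (an assumption on my part; the same idea applies to Rails or any other stack). The userExists lookup is hypothetical:

// Minimal sketch: validate the URL server-side, then serve the same
// single-page app, letting Spine read the URL and route client-side.
import express from "express";
import path from "path";

const app = express();

// Hypothetical lookup; replace with a real check against your data store.
const userExists = (id: string): boolean => ["1", "2"].includes(id);

app.get("/users/:id", (req, res) => {
  if (!userExists(req.params.id)) {
    res.status(404).send("Not found");
    return;
  }
  // Serve the application shell; Spine routes to /users/:id on load.
  res.sendFile(path.join(__dirname, "public", "index.html"));
});

app.listen(9294);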
I used the following rewrites, as explained in this article:
http://www.josscrowcroft.com/2012/code/htaccess-for-html5-history-pushstate-url-routing/
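In essence (a sketch of the standard HTML5 pushState fallback, not quoted verbatim from the article), the rules send any request that doesn't match a real file or directory back to the app's entry point:

<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /
  # Leave real files and directories alone
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  # Everything else is routed to the single-page app
  RewriteRule ^ index.html [L]
</IfModule>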
I have recently placed an ad in a weekly publication that sends out a PDF file. My ad is directly linked, so the reader can click on it and go to my website. The PDF file is hosted on a different server, and it has to be downloaded and viewed from that site; it is not emailed or shared any other way. I have Google Analytics and a couple of other stats-tracking programs installed, and I can't see the referring URL from this other site at all, in anything. Is there something I can ask the designer of the PDF file to include in her links to make them trackable? Or is this simply not possible?
Use Google Analytics Campaign Tagging.
This tool will help set it up. You'll want to fill in the variables so that, at a minimum, the source and the medium are set.
http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55578
So, for example, if your URL is http://example.com, you could set the parameters as such:
utm_source: BlahNews
utm_medium: newsletter
utm_campaign: july10issue
Your resulting URL would be http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue
Google Analytics would track these hits under that campaign, source, and medium.
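If you generate these links in code, a small helper keeps the parameters encoded consistently (a sketch; the function name is made up):

// Hypothetical helper: build a campaign-tagged URL with encoded parameters.
function tagUrl(base: string, source: string, medium: string, campaign: string): string {
  const params = new URLSearchParams({
    utm_source: source,
    utm_medium: medium,
    utm_campaign: campaign,
  });
  return `${base}/?${params}`;
}

// tagUrl("http://example.com", "BlahNews", "newsletter", "july10issue")
// -> "http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue"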
If the URL is displayed raw and you want to avoid showing an ugly URL, you could set up an internal redirect to it. It looks like you're using WordPress, and there are a few free plugins that manage redirects like this (I happen to like 'Redirection').
So, you could tell the plugin to redirect
http://example.com/blahnews to http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue
Can you ask them to put some token in the query string of the URL to the site?
I have created a site which parses XML files and displays their content on the appropriate page. Is my site a dynamic web page or a static web page?
How do dynamic and static web pages differ?
I feel it's dynamic, because I parse the content from XML files; initially I don't have any content in my main page.
What do you think about this? Please explain.
I would describe your pages as dynamic. "Static" usually means that the file sitting on the web server is delivered as-is to the user; since you're assembling the pages from data files, I'd call them dynamic even if you're not building in any dynamically-changing data.
I don't think this is a hard and fast definition though. If someone feels the page is static because it's assembled from static pages, that's another way to look at it.
This is actually an interesting question.
I would have said it's a dynamic website, as the content is generated programmatically; but if the XML files do not change, it's no less "static" than straight HTML files served through Apache.
Say you have a site made up of regular HTML files; it would be considered a static web page. But if you take those HTML files, store them in a database, and have a simple page that serves /view.php?page=index, does that make it a dynamic site?
I would say no, it's just a static site served through a database, or XML files (instead of a file-system).
Basically: if the content changes without you manually editing those XML files, I would say it's a dynamic site. If it only changes when you edit the files yourself, then I would say it's a static site.
Static web pages are plain HTML content delivered as-is. If you are processing any type of XML file on the server side and generating content accordingly, that is a dynamic page. Static pages only change when the page itself is actually edited and modified.
The first result on Google, had you searched for it, explains this: http://websiteowner.info/articles/pages/pagetypes.asp
Also, stating that static websites are not updated regularly is not correct. The web and HTML were around before we started writing stuff in Perl and PHP, and there are/were sites with heavy traffic that were modified manually.
A simple way to distinguish between static and dynamic (see the sketch after this list):
Static: straight HTML files
Dynamic: HTML is generated through server-side code and a data store (XML, database, etc.)
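To make that concrete, here is a minimal sketch of the dynamic case, assuming Express and the fast-xml-parser package (both assumptions; the file layout and field names are made up):

// Minimal sketch of the "dynamic" case: HTML is assembled on the server
// from an XML data store on every request.
import express from "express";
import { readFileSync } from "fs";
import { XMLParser } from "fast-xml-parser";

const app = express();

app.get("/page/:name", (req, res) => {
  // Hypothetical layout: data/<name>.xml containing <page><title/><body/></page>.
  // Re-reading on each request means edits to the XML show up without
  // touching any HTML file.
  const xml = readFileSync(`data/${req.params.name}.xml`, "utf8");
  const doc = new XMLParser().parse(xml);
  res.send(`<h1>${doc.page.title}</h1><p>${doc.page.body}</p>`);
});

app.listen(8080);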
KISS - Dynamic pages change without changing the page itself.
Your pages are dynamic, because once deployed the content can be changed without changing the page's HTML.
Any content that is fixed and always renders the same is considered static.