I'm trying to implement some omniture requests on server-side. I've got the calls set up, and the requests make it to omniture, but the referrer is not showing up in omniture.
Here is an example of one of the urls for omniture my code creates. Am I missing something?
http://[id].112.2o7.net/b/ss/[group]/1/H23.2/s1328206514850?AQB=1&ndh=1&ns=[id]&g=http%3A%2F%2F[domain]%2Flogin.asp&vid=1328206514850&pageName=Login%20Page%20!test!&r=http%3A%2F%2Ftest.com
The Internal URL Filters setting in the Report Suite Admin Console specifies what your internal domains are (i.e. your own domains). Any referral from any other domain will be recognised as a referrer.
I generally use a Firefox addon like WATS to debug the variables that are on a particular page, including referrer.
Keep in mind that there needs to be a referral from an external site. If you just type in the URL, or reload, or click from your own site, there is no referral. When testing this, I would create a page on another domain (e.g. localhost), and create a link to my page.
https://omniture-help.custhelp.com/app/answers/detail/a_id/1652/kw/JavaScript/related/1
COMPARISON: s.linkInternalFilters vs. Internal URL Filters
s.linkInternalFilters: The linkInternalFilters variable within the s_code.js file is used in exit link tracking. If s.trackExternalLinks is set to true, it is used to determine if a specific link a visitor clicked on is internal to your organization's site or not. Clicked links that match a value in s.linkInternalFilters are ignored, while links that do not match any values are sent to SiteCatalyst as an exit link.
Internal URL Filters: The Internal URL Filters within the Admin Console are used in Traffic Sources reports, such as the Referring Domain report. Every s.t() request checks whether the referring URL (contained in the referrer variable) matches any of the rules set up. Referring URLs that match any of these rules are excluded from all Traffic Sources reports, while referring URLs that do not are included.
It is recommended that s.linkInternalFilters and Internal URL Filters match each other; however, the two operate completely independently and serve completely different functions.
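As a concrete illustration of the first half of that comparison, a typical exit-link configuration in s_code.js looks something like this. This is a sketch: "example.com" stands in for one of your internal domains, and the s object is stubbed here only so the snippet stands alone (in real s_code.js the library creates it).

```javascript
// Stub so this snippet is self-contained; in s_code.js the library defines s.
var s = s || {};

// Enable exit-link tracking: clicks on links whose href does NOT match
// s.linkInternalFilters are reported as exit links.
s.trackExternalLinks = true;

// Comma-separated, case-insensitive substrings; links matching any of these
// are treated as internal and ignored by exit-link tracking.
s.linkInternalFilters = "javascript:,example.com";
```

Keeping this list in sync with the Internal URL Filters in the Admin Console (as recommended above) avoids a page counting its own domains as exits while still reporting them as referrers, or vice versa.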
The last part of that image request is the referrer value, r=. Is that the correct value? Also, you should check the Internal URL Filters in the Admin Console for that report suite. New report suites typically have a value of . (a single period) set there; if that is present, no referrers will be recorded.
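For reference, a server-side beacon URL like the one in the question can be assembled along these lines. This is a sketch, not an official API: the tracking server, report suite, code version, and visitor ID below are placeholders.

```javascript
// Query parameters of the image request; the keys mirror the ones visible in
// the question's example URL.
const params = new URLSearchParams({
  AQB: "1",
  ndh: "1",
  g: "http://example.com/login.asp", // g = current page URL
  pageName: "Login Page",
  vid: "1328206514850",              // visitor ID (placeholder)
  r: "http://test.com",              // r = referrer; feeds Traffic Sources reports
});

// Placeholder tracking server, report suite ("suite"), and code version (H23.2).
const beacon =
  "http://id.112.2o7.net/b/ss/suite/1/H23.2/s" + Date.now() + "?" + params;

console.log(beacon);
```

Note that URLSearchParams percent-encodes the g and r values, which is why the referrer shows up as r=http%3A%2F%2Ftest.com in the request.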
I am a novice to APIs, and I am aware of the major kinds of paths in a REST API: paths such as www.example.com/cars and query parameters such as www.example.com/cars?color=blue.
I just visited an e-commerce website and I am confused about the current path. I selected the category iphone-8 and got this URL: https://www.example.fr/iphone-8.html
On the same page, I filter all phones with a price between 250 and 300 euros. This is the new url: https://www.example.fr/iphone-8.html#price=250&price=300
Does this URL mean that the filter is only applied on the HTML because of the #, and therefore there is no API call for filtering?
Does this URL mean that the filter is only applied on the HTML because of the #, and therefore there is no API call for filtering?
No, that doesn't follow.
The experiment to try: load the original page in your browser, open the developer tools' network panel, and then perform your search.
What you may discover is that when you manipulate the filter controls on the web page, what's really happening under the covers is that JavaScript code is running, making calls to fetch data from some back-end endpoint, and then re-rendering the web page on the client. The fragment is updated so that, if you were to bookmark the link or copy it to another tab in your browser, the underlying JavaScript can reproduce "the same" results (by reading the search parameters from the fragment and repeating the search).
It should be possible to repeat those same calls directly from the browser itself. You won't necessarily get the HTML rendering, of course, but you'll probably be able to look at the filtered results in their own native representation (application/json, perhaps).
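A sketch of that pattern: client-side code reads the filter values out of the fragment and then fetches filtered data from a back-end endpoint. The endpoint path and parameter names below are invented for illustration.

```javascript
// Parse a fragment like "#price=250&price=300" into usable filter values.
// The fragment conveniently uses query-string syntax, so URLSearchParams works.
function parseFragment(hash) {
  const params = new URLSearchParams(hash.replace(/^#/, ""));
  return { price: params.getAll("price") }; // e.g. ["250", "300"]
}

const filters = parseFragment("#price=250&price=300");
console.log(filters.price);

// A single-page app would now call something like (endpoint invented):
// fetch("/api/products?minPrice=" + filters.price[0] + "&maxPrice=" + filters.price[1])
//   .then(r => r.json())
//   .then(render);
```

The key point is that the fragment itself never reaches the server; the JavaScript translates it into a real request (with query parameters) behind the scenes.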
What's the advantage of using the fragment rather than query parameters for the price?
Fragments are not part of the absolute-URI, while the query part is.
Which is to say, the query part is still part of the identifier of a primary resource, and is part of the request-line that is sent to the server.
But fragments are used to identify secondary resources; resources embedded within some primary resource.
Consider:
https://www.rfc-editor.org/rfc/rfc3986#section-3.5
This identifies a secondary resource (specifically, section-3.5) that is included within a primary resource (an HTML representation of RFC 3986). So we "fetch" the secondary resource by first loading the primary resource (the whole RFC) and then use the fragment identifier and HTML processing rules to discover the appropriate element in the document.
The fragment part is strictly a client side concern.
I am trying to collect the referrer URL in Google Tag Manager and I want it to include the referring path page. I want to do this because I have multiple links from the same domain pointing to one form. I want to track which page is bringing in the most form fills and so that I can trigger an email series based on which landing page they came from.
For example, I have 3 landing pages directing to one of my forms:
www.site1.com/first-page-path
www.site1.com/second-page-path
www.site1.com/third-page-path
When I check the referrer variable in Google Tag Manager, it simply displays the domain name as follows:
referrer: https://www.site1.com/
How do I collect the the full URL including the page path so that it shows up like this:
referrer: https://www.site1.com/second-page-path
Any help would be appreciated.
It's limited by the referrer policy. These days, browsers set very restrictive defaults for the referrer policy, so often only the referring domain is sent.
If you control the other domain, you could give each landing page a different form URL, or add a query parameter to the form URL and configure a variable in GTM to read it back.
In general, referrer has always been a bit unreliable, and is now so limited that you probably should not use it for business critical purposes.
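A sketch of the query-parameter workaround mentioned above: instead of relying on document.referrer, append the current page path to the form link, then read it back in GTM with a URL variable (component type "Query", key "src"). The parameter name "src" is made up for illustration.

```javascript
// Build the form URL with the referring page path attached as a query
// parameter; URL/URLSearchParams handle the percent-encoding.
function buildFormUrl(formBase, currentPath) {
  const url = new URL(formBase);
  url.searchParams.set("src", currentPath);
  return url.toString();
}

// On a landing page you would pass window.location.pathname:
console.log(buildFormUrl("https://www.site1.com/form", "/second-page-path"));
```

Unlike the referrer, a parameter you set yourself is unaffected by browser referrer policies, so it survives the navigation reliably.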
I'm trying to achieve urls in the form of http://localhost:9294/users instead of http://localhost:9294/#/users
This seems possible according to the documentation but I haven't been able to get this working for "bookmarkable" urls.
To clarify, browsing directly to http://localhost:9294/users gives a 404 "Not found: /users"
You can turn on HTML5 History support in Spine like this:
Spine.Route.setup(history: true)
Passing the history: true argument to Spine.Route.setup() enables the fancy URLs without the hash.
The documentation for this is actually buried a bit, but it's here (second to last section): http://spinejs.com/docs/routing
EDIT:
In order to have URLs that can be navigated to directly, you will have to handle this server-side. For example, with Rails, you would have to build a way to take the URL path (in this case "/users") and pass it to Spine accordingly. Here is an excerpt from the Spine docs:
However, there are some things you need to be aware of when using the
History API. Firstly, every URL you send to navigate() needs to have a
real HTML representation. Although the browser won't request the new
URL at that point, it will be requested if the page is subsequently
reloaded. In other words you can't make up arbitrary URLs, like you
can with hash fragments; every URL passed to the API needs to exist.
One way of implementing this is with server side support.
When browsers request a URL (expecting a HTML response) you first make
sure on server-side that the endpoint exists and is valid. Then you
can just serve up the main application, which will read the URL,
invoking the appropriate routes. For example, let's say your user
navigates to http://example.com/users/1. On the server-side, you check
that the URL /users/1 is valid, and that the User record with an ID of
1 exists. Then you can go ahead and just serve up the JavaScript
application.
The caveat to this approach is that it doesn't give search engine
crawlers any real content. If you want your application to be
crawl-able, you'll have to detect crawler bot requests, and serve them
a 'parallel universe of content'. That is beyond the scope of this
documentation though.
It's definitely a good bit of effort to get this working properly, but it CAN be done. It's not possible to give you a specific answer without knowing the stack you're working with.
I used the following rewrites as explained in this article.
http://www.josscrowcroft.com/2012/code/htaccess-for-html5-history-pushstate-url-routing/
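The rules in that article boil down to something like the following .htaccess sketch: any request that doesn't match a real file or directory is rewritten to the application's entry point, so a direct hit on /users serves the JavaScript app, which then reads the URL and invokes the matching route. Adjust index.html to whatever your actual entry point is.

```apache
# Serve real files and directories as-is; send everything else to the app
# so HTML5 pushState URLs like /users don't 404.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.html [L]
```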
I would like to track users clicks on my website.
For that purpose, I would like to take advantage, if possible, of my Apache log system, which already tracks many things.
The idea would be, putting inside my source page "source.html" a link to "target.html" in the following way:
<a href='target_url.html' onclick='window.location="target_url.html#key"'>my link which I want to track...</a>
with a well chosen key (typically, source url + link id + ...)
If the Apache log system could store the full path "target.html#key" whenever a user follows the link, it would be great, but as it is now, my Apache log system removes the last segment, and only stores the path "target.html".
Any idea on this issue ?
Many thanks in advance,
r.
URL fragments are not passed to the server; their handling is entirely up to the client side (the browser). The fragment will never appear in your logs, nor will it be sent to back-end scripts.
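Since the fragment is stripped by the browser before the request is sent, a workaround in the spirit of the question is to encode the tracking key as a query parameter instead: query strings are part of the request line, so they do appear in Apache's access log. A sketch (the "from" parameter name is invented):

```javascript
// Append a tracking key as a query parameter, which Apache will log,
// instead of a fragment, which it never sees.
function trackedHref(target, key) {
  return target +
    (target.includes("?") ? "&" : "?") +
    "from=" + encodeURIComponent(key);
}

console.log(trackedHref("target_url.html", "source.html:link1"));
```

Links built this way (e.g. `<a href="target_url.html?from=source.html%3Alink1">`) need no onclick handler at all, and the key shows up in the logged request line.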
I need to track visitors.
I have a script (http://example.com/something.aspx) that saves all the visitor data (like browser, referrer, etc.) into a DB and insert a flash-cookie in the visitor machine for further tracking.
Right now I insert that script using an iframe in each page I want it to work.
The script need to be in the same domain of the page for it to work.
I use this script in a number of domains, so for each domain I have the same script installed in each domain.
I want to provide some kind of javascript API to be able to use only one script for all the domains. "One Script to Rule them All".
Its important to know that I own all the domains.
Is it possible? How can I achieve this cross-domain?
Thanks.
I would try the following approach, but have not tested the whole thing.
Insert a script tag pointing at a single, centrally hosted record-and-set-cookie.aspx page into each page, instead of the per-domain iframe.
The record-and-set-cookie.aspx page will record the agent info into a database (this part I am sure will work), and then return JavaScript that will set a cookie (this part can work, but needs confirmation).
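A sketch of what record-and-set-cookie.aspx could return after logging the visit server-side: a tiny script that sets a first-party cookie on whichever domain included the tag. Building the cookie string is shown as a plain function so the idea is testable; the cookie name "visitor_id" and the lifetime are invented.

```javascript
// Build a document.cookie assignment string for a first-party visitor cookie.
function buildCookie(visitorId, days) {
  const expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000).toUTCString();
  return "visitor_id=" + encodeURIComponent(visitorId) +
         "; expires=" + expires + "; path=/";
}

// The script returned by the endpoint would then do:
//   document.cookie = buildCookie("12345", 365);
console.log(buildCookie("12345", 365));
```

Because the script executes in the including page's context, the cookie is first-party on each of your domains, which is what lets one central script serve all of them.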