SmartEdit unable to edit the Email Signup Modal page - SAP

Hi, on Hybris 1811 SmartEdit I am trying to edit the Email Signup Modal, but as soon as I click on the link it redirects me to a different page. I have multiple sites, so sometimes I get this warning: WARN [hybrisHTTP20] [DefaultCMSSiteService] More than one site associated with URL [https://localhost:9002/?site=XXXX]. Returning first. Has anybody faced a similar issue?

The main problem is that you have more than one site configured with the same urlPattern.
This attribute holds a regex that CMSSiteFilter uses to decide which website to serve, based on the URL entered in the browser or requested by SmartEdit (e.g. https://localhost:9002/?site=XXXX).
In this case, more than one instance of CMSSiteModel matched, and the first one found was used.
To fix this, you need to give each site its own unique urlPattern. If you have apparel and electronics sites, the regexes for the two should look like this:
apparel --> (?i)^https?://[^/]+(/[^?]*)?\?(.*\&)?(site=apparel)(|\&.*)$
electronics --> (?i)^https?://[^/]+(/[^?]*)?\?(.*\&)?(site=electronics)(|\&.*)$
This can be changed in Backoffice for each and every site.
Make sure that the preview URL is also changed accordingly, so that SmartEdit knows how to perform the HTTP requests for this site.
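
As a quick sanity check of such a pattern, here is a minimal JavaScript sketch (note that JavaScript regexes have no inline (?i) flag, so the case-insensitive i flag is used instead):

// Sanity-check the apparel urlPattern against the URL SmartEdit requests.
var apparelPattern = /^https?:\/\/[^\/]+(\/[^?]*)?\?(.*&)?(site=apparel)(|&.*)$/i;
console.log(apparelPattern.test('https://localhost:9002/?site=apparel'));     // true
console.log(apparelPattern.test('https://localhost:9002/?site=electronics')); // false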

Related

Pull in the page path with Referrer variable in GTM?

I am trying to collect the referrer URL in Google Tag Manager, and I want it to include the referring page path. I want to do this because I have multiple links from the same domain pointing to one form. I want to track which page brings in the most form fills, and to trigger an email series based on which landing page the visitor came from.
For example, I have 3 landing pages directing to one of my forms:
www.site1.com/first-page-path
www.site1.com/second-page-path
www.site1.com/third-page-path
When I check the referrer variable in Google Tag Manager, it simply displays the domain name as follows:
referrer: https://www.site1.com/
How do I collect the full URL, including the page path, so that it shows up like this:
referrer: https://www.site1.com/second-page-path
Any help would be appreciated.
It's limited by the referrer policy. These days, browsers set very restrictive defaults for the referrer policy, so often only the referring domain is sent.
If you can manage the referring domain, you can give each page a different form URL.
You can add a parameter to the form URL and configure a variable in GTM to retrieve it, as in the sketch below.
In general, the referrer has always been a bit unreliable, and it is now so limited that you probably should not use it for business-critical purposes.
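
For example (a sketch only; the "src" parameter name is made up), each landing page could link to the form as https://www.site1.com/form?src=second-page-path, and a GTM Custom JavaScript Variable on the form page could read it back:

// GTM Custom JavaScript Variable: returns the hypothetical "src" query
// parameter from the current page URL, or undefined if it is absent.
function() {
  var match = document.location.search.match(/[?&]src=([^&]*)/);
  return match ? decodeURIComponent(match[1]) : undefined;
}

The returned value can then be sent along with the form-submission event and used to segment the email series.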

Facebook App in Page Tab receiving signed_request but missing page data

I have a page tab app that I am hosting, with both http and https supported. While I receive a signed_request package as expected, after I decode it, it does not contain page information. That data is simply missing.
I verified that matching schemes (https) are used across Facebook, my hosted site, and even the 'go-between': Facebook's static page handler.
I also created a new application with page tab support but got the same result: simply no page information in the signed_request.
Any other causes people can think of?
I add the app to the page tab using this link:
https://www.facebook.com/dialog/pagetab?app_id=176236832519816&next=https://www.intelligantt.com/Facebook/application.html
Here is the page tab I am using (Note: requires permissions):
https://www.facebook.com/pages/School-Auction-Test-2/154869721351873?id=154869721351873&sk=app_176236832519816
Here is the decoded signed_request I am receiving:
{"algorithm":"HMAC-SHA256","code":!REMOVED!,"issued_at":1369384264,"user_id":"1218470256"}
5/25 Update - I thought maybe the canvas app URLs didn't match the page tab URLs, so I spent several hours going through scenarios where they both had a trailing slash or not, where they both had a trailing ? or not, and with query parameters or not.
I also tried changing the 'next' value when creating the page tab to both the canvas app URL and the page tab URL.
No success on either count.
I did read that seeing the 'code' value in the signed_request means Facebook either couldn't match my URLs or that I'm capturing the second request. However, given all the URL permutations I went through, I believe the URLs match. I also subscribed to 'auth.authResponseChange', which should give me the very first authResponse containing the signed_request with page.id in it (but it doesn't).
If I had any reputation, I'd add a bounty to this.
Thanks.
I've just spent ~5 hours on this exact same problem and posted a prior answer that was incorrect. Here's the deal:
As you pointed out, the signed_request appears to be missing the page data if your tab is implemented in pure JavaScript as a static HTML page (with a *.htm extension).
I repeated the exact same test, on the exact same page, but wrapped my HTML page (including the JS) in a Perl script (with a *.cgi extension)... and voila, the signed_request has the page info.
Although confusing (and it should be better documented as a design choice by Facebook), this may make some sense, because it would be impossible to validate the signed_request wholly within JavaScript without placing your secret key within scope (and therefore revealing it to a potential hacker).
It would be much easier with the PHP SDK, but if you just want to use JavaScript, maybe this will help:
Facebook Registration - Reading the data/signed request with Javascript
Also, you may want to check out this: https://github.com/diulama/js-facebook-signed-request
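
For reference, here is a minimal server-side sketch in JavaScript (Node.js) of how a signed_request is verified and decoded; the algorithm is HMAC-SHA256 over the base64url-encoded payload, and appSecret is your app secret, which must stay on the server:

var crypto = require('crypto');

// Decode a base64url string (Facebook's variant of base64) into a Buffer.
function base64UrlDecode(input) {
  return Buffer.from(input.replace(/-/g, '+').replace(/_/g, '/'), 'base64');
}

// Verify the signature and return the decoded payload, or null on mismatch.
function parseSignedRequest(signedRequest, appSecret) {
  var parts = signedRequest.split('.');
  var sig = base64UrlDecode(parts[0]);
  var expected = crypto.createHmac('sha256', appSecret).update(parts[1]).digest();
  if (sig.length !== expected.length || !crypto.timingSafeEqual(sig, expected)) {
    return null; // signature mismatch: do not trust the payload
  }
  return JSON.parse(base64UrlDecode(parts[1]).toString('utf8'));
}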
Simply put, you can't get the full parameters with the JavaScript signed_request; use the PHP SDK to get the full signed_request, and record the values you need into JavaScript variables.
With the PHP SDK, after instantiation, use the facebook object as follows:
// $facebook is an initialized Facebook PHP SDK instance
$signed_request = $facebook->getSignedRequest();
var_dump($signed_request); // prints the full decoded signed_request for debugging
This is just to debug, but you'll see that the printed array contains many values that you won't get with the JS SDK, for security reasons.
Hope that helps anyone who needs it, because it seems this issue takes at least 3 hours for everyone who runs into it.

How to force Google to show my first page from a paginated page set?

I have a website, and on it I have, for example, a list of Audi models. Using Google Webmaster Tools, I saw that my website appears in Google search results for the word audi, but the target page was the 22nd page from my result set, not the first. I need my first page to appear, not my last (or a middle one), but I cannot tell Google that this is a parameter, because my URLs are rewritten using mod_rewrite. Any ideas?
BTW, I have read in an SEO forum that it's a bad idea to use a canonical tag. Is it really a bad idea in my case?
You can't force Google to do anything; however, they have made it easier to deal with pagination issues with a recent post on rel="next" and rel="prev" (see the sketch below).
But the primary problem you face is signalling to Google that your first (main) page is the starting point - this is achieved using internal-link and back-link "juice" focused on that page. You need to ensure that the first page of results is linked to properly from higher-value pages (like the home page).
Google also recently announced that you can use View All, which will allow them to find and index entire articles that are normally broken up using pagination, and display them as one result.
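
As an illustrative sketch (the URL structure here is made up), page 2 of the model list would declare its neighbours in its <head> like this:

<link rel="prev" href="https://www.example.com/audi-models/page/1">
<link rel="next" href="https://www.example.com/audi-models/page/3">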

Remove deleted page from Google search results

So I have a website that I recently made changes to, and one of the changes was removing a page from the site. I deleted the page; it doesn't exist anymore.
However, when you search for my site, one of the results is the page that I deleted. People are clicking on the page and getting an error.
How do I remove that page from the search results?
Here is the solution:
First, get your site into Google Webmaster Tools. Then go to Site configuration --> Crawler access --> Remove URL. Click on 'New removal request', add the page you want to remove, and make sure you have also added that page to your site's robots.txt. Google will deindex the page within 24 hours.
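For example, assuming the deleted page lived at /old-page.html (a made-up path), the robots.txt entry would look like this:

User-agent: *
Disallow: /old-page.html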
You can simply wait for Google's robots to find out that it doesn't exist anymore.
A trick that used to work is to upload a sitemap to Google in which you add the URL of the deleted page, set it to top priority, and declare that it changes every day. That way Google's robots will prioritise that page and find out more quickly that it's not there anymore; a sketch follows below.
There might be other ways, but none that are known to me.
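
A minimal sitemap entry for that trick might look like this (the URL is made up, and the <priority> and <changefreq> values are only hints to the crawler):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/old-page.html</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>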
You can remove specific pages using Webmaster Tools, I believe.
Yahoo's web tools offer a similar service, as I understand it.
This information was correct the last time I tried to do this a little while ago.
You should go to https://www.google.com/webmasters/tools/removals and remove the pages which you want.

How do you access browser history?

Some e-Marketing tools claim to choose which web page to display based on where you were before. That is, if you've been browsing truck sites and then go to Ford.com, your first page would be of the Ford Explorer.
I know you can get the immediately preceding page with HTTP_REFERER, but how do you know where they were 6 sites ago?
JavaScript: this should get you started: http://www.dicabrio.com/javascript/steal-history.php
There are more nefarious means too: http://ha.ckers.org/blog/20070228/steal-browser-history-without-javascript/
Edit: I wanted to add that although this works, it is a sleazy marketing technique and an invasion of privacy.
Unrelated but relevant: if you only want to look one page back and you can't get at the headers of a page, then document.referrer gives you the place a visitor came from.
You can't access the values for the entries in browser history (neither client side nor server side). All you can do is to send the browser back or forward a number of steps. The entries of the history are otherwise hidden from programmatic access.
Also note that HTTP_REFERER won't be there if the user typed the address in the URL bar instead of following a link to your page.
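
A small JavaScript sketch of what is programmatically exposed:

// You can move through the history, but not read its entries.
history.back();              // one step back
history.go(-3);              // three steps back
console.log(history.length); // only the number of entries is exposed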
The browser history can't be directly accessed, but you can compare a list of sites against the user's history. This works because the browser applies a different CSS style to a link that has been visited than to one that hasn't.
Using this style difference you can change the content of your pages with pure CSS, but in general JavaScript is used; a sketch follows below. There is a good article here about using this trick to improve the user experience by displaying only the RSS aggregator or social bookmarking links that the user actually uses: http://www.niallkennedy.com/blog/2008/02/browser-history-sniff.html
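
As a minimal JavaScript sketch of the technique (note that modern browsers deliberately report :visited links to scripts with their unvisited style, so this no longer works reliably), assuming the page's stylesheet contains a hypothetical rule a:visited { color: rgb(219, 48, 48) }:

// Returns true if the link's computed colour matches the a:visited rule above.
function wasVisited(url) {
  var link = document.createElement('a');
  link.href = url;
  document.body.appendChild(link);
  var visited = getComputedStyle(link).color === 'rgb(219, 48, 48)';
  document.body.removeChild(link);
  return visited;
}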