Thredds server WMS/Godiva working but WMS link just shows XML

I have the latest THREDDS server working with Tomcat 7, and in the threddsConfig.xml file I have enabled WMS. This causes the Godiva link to show at the bottom in the Viewers category.
However, above it, in the Access category, which now shows both OPeNDAP and WMS, the WMS link doesn't work correctly. When the link is clicked, both IE and Chrome just display XML.
Thanks

Actually, that is the intended response when you click the WMS link on the THREDDS Data Server. If you examine the URL of that link, you can see that it is a WMS GetCapabilities request, which should look something like:
http://geoport.whoi.edu/thredds/wms/coawst_4/use/fmrc/coawst_4_use_best.ncd?service=WMS&version=1.3.0&request=GetCapabilities
The GetCapabilities request returns an XML document describing the capabilities of this particular WMS service (e.g. which datasets are available, which projections are supported, etc.).
This is a required service request for WMS services, and the returned information is used by WMS clients to display valid options to users.
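For illustration, here is a rough TypeScript sketch of what a WMS client does with that XML: it issues the GetCapabilities request and pulls out the advertised layer names. The regex-based parsing and the Node 18+ built-in fetch are simplifications for the example; a real client would use a proper XML parser.

```typescript
// Minimal sketch of what a WMS client does with the GetCapabilities response.
// The endpoint below is the example URL from the answer; any THREDDS WMS
// endpoint should work. Requires Node 18+ for the built-in fetch.

const capabilitiesUrl =
  "http://geoport.whoi.edu/thredds/wms/coawst_4/use/fmrc/coawst_4_use_best.ncd" +
  "?service=WMS&version=1.3.0&request=GetCapabilities";

async function listWmsLayers(url: string): Promise<string[]> {
  const response = await fetch(url);
  const xml = await response.text();

  // A real client would use a proper XML parser; a regex is enough to show
  // that the "useless-looking" XML actually advertises the available layers.
  return [...xml.matchAll(/<Name>([^<]+)<\/Name>/g)].map(m => m[1]);
}

listWmsLayers(capabilitiesUrl).then(layers =>
  console.log("Layers advertised by the WMS service:", layers)
);
```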

Related

WOPI viewing and editing error: 'Keyset does not exist'

I'm trying to implement WOPI on our site (our domain is already added to WOPI). I can view and edit on the first load, but upon clicking the back button and trying to view or edit again, I get this error.
Any idea what causes the issue?
Github: https://github.com/apulliam/WOPIFramework
I remember having the same error when implementing WOPI. Unfortunately I don't recall the exact reason for it, but as far as I remember this may be a permission issue (the IIS process can't read the certificate used for the app; just adjust the permissions for the certificate using the Certificates snap-in in the MMC console). To verify that this is the problem, you could tell the app to consume the certificate from a PFX (physical) file rather than from the store.
Back button problems are frequently caused by a lack of (or incorrect) wd* parameter implementation.
Office Online will sometimes pass additional query string parameters to your host page. These query string parameters are of the form wd*. When you receive these query string parameters on your host page URLs, you must pass them, unchanged, to the Office Online iframe.
In addition, if the replaceState method from the HTML5 History API is available in the user’s browser, you should remove the following parameters from your host page URL after passing them to the Office Online iframe:
wdPreviousSession
wdPreviousCorrelation
Other wd* parameters must not be removed from the host page URL.
The key here is that you may not be using the HTML5 History API to do replaceState, as in the sketch below.
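As a rough sketch (not Microsoft's reference code), the host page logic might look something like this; officeOnlineActionUrl and the office_frame iframe id are placeholders for whatever your host page already uses:

```typescript
// Hedged sketch of the wd* handling on a WOPI host page.
declare const officeOnlineActionUrl: string; // placeholder: the action URL from your WOPI discovery step

const actionUrl = new URL(officeOnlineActionUrl);

// 1. Pass every wd* query string parameter through to the Office Online iframe, unchanged.
const hostParams = new URLSearchParams(window.location.search);
for (const [key, value] of hostParams) {
  if (key.startsWith("wd")) {
    actionUrl.searchParams.set(key, value);
  }
}
(document.getElementById("office_frame") as HTMLIFrameElement).src = actionUrl.toString();

// 2. If the HTML5 History API is available, strip only wdPreviousSession and
//    wdPreviousCorrelation from the host page URL; other wd* parameters stay.
if (window.history && typeof window.history.replaceState === "function") {
  hostParams.delete("wdPreviousSession");
  hostParams.delete("wdPreviousCorrelation");
  const cleaned =
    window.location.pathname + (hostParams.toString() ? "?" + hostParams.toString() : "");
  window.history.replaceState(null, document.title, cleaned);
}
```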

How to add dynamic meta tags to website with no middleware or SSR

I have a relatively large app with a lot of user profile pages. I want to make it so that if you share one of the user profile pages, it will preview their name and picture on social media like FB and Twitter (think sharing a Twitch streamer's page on Twitter). I used create-react-app to start the project, so I don't have server-side rendering or any middleware for pre-rendering tools. Is there another way I can accomplish this?
There are two ways you can get this to work.
The first is to serve your files via an Express server and check who made the request by looking at the user-agent header. If it's a bot, then instead of sending the usual response you fetch the required user profile data, use it to populate the Open Graph meta tags, and return HTML containing those tags.
The second way is to use a network interceptor at the CDN you're using to identify who is requesting the page (a bot or a person). If it's a bot, make a request to your backend to fetch the related data and send back HTML with populated meta tags.
Explained approach
Every request that comes into our server carries a user-agent header that tells the server who is requesting the resource (a human, or a bot from Facebook trying to build a link preview). We identify bots simply by comparing against a list of known user agents (so it won't catch everything, but it covers all the well-known platforms and the large majority of the rest).
Let's say we have something.com, where we want link previews, and a request comes in for something.com/john. We check the user-agent of the incoming request: if it's a human, they are sent to our normal site, but if it's a bot (which only wants HTML for the link preview), then, since it's our server, we can grab john's profile data, set the proper meta tags inside our HTML, and send that back as the response.
So whenever a human goes to something.com/john they end up on our normal page, since what they care about is what they see in the browser, but when a bot comes in we send it an HTML response with the proper meta tags, since the link preview is all the bot cares about.
This can be done on our Express server with something like the sketch below, but it can also be done at the infrastructure level.
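A hedged sketch of what that Express route could look like follows; getUserProfile, the crawler regex, and the build/index.html path are assumptions to adapt to your own project.

```typescript
// Sketch of user-agent sniffing on an Express server for link previews.
import express from "express";
import path from "path";

const app = express();

// A few well-known link-preview crawlers; extend the list as needed.
const BOT_USER_AGENTS = /facebookexternalhit|twitterbot|slackbot|linkedinbot|discordbot/i;

// Hypothetical helper that looks up profile data for a user.
async function getUserProfile(username: string): Promise<{ name: string; picture: string }> {
  // ...fetch from your database or API...
  return { name: username, picture: `https://example.com/avatars/${username}.png` };
}

app.get("/:username", async (req, res) => {
  const userAgent = req.get("user-agent") ?? "";

  if (BOT_USER_AGENTS.test(userAgent)) {
    // Bot: return a minimal HTML document whose only job is to carry the meta tags.
    const profile = await getUserProfile(req.params.username);
    res.send(`<!DOCTYPE html>
<html>
  <head>
    <meta property="og:title" content="${profile.name}" />
    <meta property="og:image" content="${profile.picture}" />
    <meta name="twitter:card" content="summary" />
  </head>
  <body></body>
</html>`);
  } else {
    // Human: serve the normal create-react-app bundle and let React take over.
    res.sendFile(path.join(__dirname, "build", "index.html"));
  }
});

app.listen(3000);
```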

How to use squid access logs to find frequency of web requests

I am trying to build a model for how frequently users make web requests. I am interested in the timing between each new page they visit. I want to build a load simulator which then uses this model.
To do this I've been analyzing Squid access logs and looking at the timing between HTTP requests by user IP. Squid captures all the requests associated with a single web page visit, but I am only interested in the top-level page requests. There are numerous possible starting pages for a visit (e.g. not just *.html), so it seems challenging to capture only the starting page for each session.
Is there a way to capture only the initial request for the top-level page, for example when a user visits a page on Amazon, then jumps to another page, and so on?
You can use the Squid Analysis Report Generator (SARG); it will read the log files and generate reports in HTML format with detailed information such as accessed and denied websites, plus daily and weekly reports.
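If you need the raw inter-request timings rather than SARG's aggregated reports, one rough approach is to filter the access log down to responses whose content type is text/html (a crude stand-in for "top-level page") and compute the gaps per client IP. The sketch below assumes the default squid logformat, where the content type is the last field; adjust the indices if yours differs.

```typescript
// Rough sketch (not SARG): estimate time between top-level page requests per
// client IP from a Squid access.log in the default "squid" log format.
import { readFileSync } from "fs";

const lines = readFileSync("/var/log/squid/access.log", "utf8").split("\n");
const lastSeen = new Map<string, number>();   // client IP -> last page timestamp
const gaps = new Map<string, number[]>();     // client IP -> inter-request gaps (s)

for (const line of lines) {
  const fields = line.trim().split(/\s+/);
  if (fields.length < 10) continue;

  const timestamp = parseFloat(fields[0]);  // Unix time with milliseconds
  const clientIp = fields[2];
  const contentType = fields[9];

  // Only count what looks like a top-level page, not images/CSS/JS.
  if (!contentType.startsWith("text/html")) continue;

  const previous = lastSeen.get(clientIp);
  if (previous !== undefined) {
    const gapList = gaps.get(clientIp) ?? [];
    gapList.push(timestamp - previous);
    gaps.set(clientIp, gapList);
  }
  lastSeen.set(clientIp, timestamp);
}

for (const [ip, gapList] of gaps) {
  const mean = gapList.reduce((a, b) => a + b, 0) / gapList.length;
  console.log(`${ip}: ${gapList.length} gaps, mean ${mean.toFixed(1)}s between pages`);
}
```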

Who knows which files should be included in a website?

When the browser requests a website, any website, from an HTTP server, which of the two parses the site's content in order to know which other files need to be included on the webpage?
What I mean is this:
the browser asks for the HTML file, then observes that it needs to import some external CSS files, and it is the one that requests them,
OR
the HTTP server, when faced with a request for a website, parses (or already knows) which files need to be linked to a certain webpage and sends them alongside the HTML page?
I'm guessing the first case is the correct one, but if someone can confirm and maybe clarify it, I'd appreciate it.
It's all done by the client (which is usually a browser). When it sees <script>, <iframe>, <img>, <link>, etc. tags that reference other documents, it downloads them if necessary.
According to Wikipedia -
The primary function of a web server is to cater web page to the request of clients using the Hypertext Transfer Protocol (HTTP). This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.
and
The primary purpose of a web browser is to bring information resources to the user ("retrieval" or "fetching"), allowing them to view the information ("display", "rendering"), and then access other information ("navigation", "following links").
It is the browser that parses the HTML and requests the associated content.
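To make that concrete, here is a toy TypeScript sketch that mimics the browser's role: it fetches one HTML document, scans it for referenced stylesheets, scripts, and images, and then issues a separate request for each one. The server never sends those files unasked (ignoring HTTP/2 push, which is a separate opt-in mechanism); example.com is just a placeholder URL.

```typescript
// Toy model of the browser's side of the conversation: one request for the
// HTML, then one additional request per referenced subresource.
async function fetchLikeABrowser(pageUrl: string): Promise<void> {
  const html = await (await fetch(pageUrl)).text();

  // Real browsers build a DOM; a crude regex over <link href>, <script src>
  // and <img src> is enough to show where the "extra" requests come from.
  const refs = [...html.matchAll(/<(?:link[^>]*href|script[^>]*src|img[^>]*src)="([^"]+)"/g)]
    .map(m => m[1]);

  for (const ref of refs) {
    const resourceUrl = new URL(ref, pageUrl).toString(); // resolve relative paths
    console.log("browser issues a separate request for:", resourceUrl);
    await fetch(resourceUrl); // each stylesheet, script, image is its own request
  }
}

fetchLikeABrowser("https://example.com/");
```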

Is this correct? Should Firebug see SSL-protected AJAX?

I have enabled SSL, and I am doing a jQuery AJAX POST request sending some fields to the server.
When I look at the AJAX POST request through Firebug, under the POST parameters I see all the fields in clear text.
So this means I can see the passwords in clear text. Is this normal? I am also looking at it with Fiddler, and it does not even log this AJAX request (so it's as if the request was never made).
So is it just because Firebug is installed in the browser and can capture it, or what?
SSL secures the data as it moves from the browser to the web server. Firebug is a browser plugin; it knows everything in the DOM tree. I think it makes sense for Firebug to display the input fields and form data.
Yes, you can see the field data because Firebug is capturing the requests inside Firefox before they're encrypted. If you inspect the actual network traffic with a protocol analyzer like Wireshark, you'll see that it's encrypted.
Off the top of my head I would think that Firebug is showing you exactly what is being sent; otherwise it would mean that it is somehow decoding encrypted information.
If you really want to confirm this, use a tool that can capture the web traffic outside of the browser, tcpdump for example.
"So this means I can see the passwords in clear text. Is this normal?"
Yup. The data resides in your browser, that is, the user agent, and is captured before it is communicated to the server. Any encryption operation is vulnerable to sniffing at the point where the value enters the closed system. That's why, if your machine is compromised (say, by malware), very little will help.