This question comes from a rather big surprise bill for the JavaScript Google Maps API.
Essentially, I launched our real-estate web app, ran some ads, and got some traffic, which resulted in a much larger bill than expected. One reason the bill was higher than expected was that the Maps API was hit an obscene number of times, especially compared to the actual traffic.
I'm using Vue Router.
Now, I have a route called /listings. On that route there is a map (using vue2-google-maps) as well as a list view.
Hypothesis: every time a user hits /listings, the components/page get rendered and a dynamic map request is sent? Meaning a single user can easily send off 10, 20, 100+ Google Maps API requests just by navigating to different listings and then navigating back to the map search. Can anyone confirm?
I am already thinking about solutions that would use a dialog with v-if: when a listing is selected it appears over the top, so the user essentially never navigates away from the /listings page.
However, am I correct in my assumption? And if so, is there a better way to solve this?
Used <keep-alive> around the <router-view> and it worked!
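For anyone else hitting this, a minimal sketch of that fix, assuming the map lives in a routed Listings component (the component and route names here are just examples):

<!-- App.vue template: keep-alive caches the rendered route component,
     so navigating away from /listings and back reuses the same map
     instance instead of re-creating it and triggering a new map load -->
<keep-alive include="Listings">
  <router-view></router-view>
</keep-alive>

The include filter only caches the component whose name option is "Listings"; drop it if you want every routed view kept alive.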
I'm building a book-reader-style app.
The main page calls /api/booksList and receives a JSON array containing each book's info, like:
[
  { id: server_db_id, title: "title test", sum: 10, date: ... }
]
and it's cached after the request, so I'm not saving the book list into IndexedDB, localStorage, or any other storage. If I need one specific book, I just call the book-list API again and filter it. Is that bad design? (The list will have over 200 items.)
When the user opens a book, the app calls /api/book/book_id and that response is cached too. The opened-book response is a JSON list of the lines of the book, e.g.:
[
  {
    id: ...
    content: "This is line...lore ipsum..."
    ....
  }
]
I put the API response in a Vue data variable and the component renders correctly.
I'm not handling offline persistence myself at all. To detect whether the user has already opened a given book, I just call the API and check whether an error happened or whether the response body has content.
Is that a wrong, bad, or stupid decision? Will this hit an API quota limit or some other kind of limitation? Will the "gods" of PWA raise a finger at me and say: WAAAT? (I'm not using IndexedDB at first because it needs some model handling and I want to keep things as simple as possible.)
I was just researching this myself and concluded that, for the moment, I am going to go with this method: use the cache for assets (JS, CSS, HTML, etc.) based on their matching routes.
Then, when it comes to data such as JSON requests, it's best to store them in IndexedDB (or an equivalent), which really does not require a model or schema as such.
See Jake Archibald's promise-based IndexedDB library, idb: https://github.com/jakearchibald/idb; it's really simple to get your head around.
Though both Jake and Addy say it's not a hard-and-fast rule, so you can ultimately decide what is best for you.
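For what it's worth, a minimal sketch of caching a JSON response with idb (the database, store, and key names are just made-up examples):

import { openDB } from 'idb';

// Open (or create) a database with a single object store for API responses.
const dbPromise = openDB('book-app', 1, {
  upgrade(db) {
    db.createObjectStore('books');          // plain key/value store, no schema needed
  },
});

// Save the JSON you got back from /api/booksList
export async function cacheBookList(list) {
  const db = await dbPromise;
  await db.put('books', list, 'bookList');  // value first, then key
}

// Read it back when offline (or before hitting the network)
export async function getCachedBookList() {
  const db = await dbPromise;
  return db.get('books', 'bookList');
}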
Read these for better clarification:
https://developers.google.com/web/ilt/pwa/live-data-in-the-service-worker
https://medium.com/dev-channel/offline-storage-for-progressive-web-apps-70d52695513c
It helped me to make a better decision on how to go about moving forward.
Some further recommendations:
Check out PWA Training: https://developers.google.com/web/ilt/pwa
Workbox: https://developers.google.com/web/tools/workbox (This has sped up my development massively!)
Codelabs: https://codelabs.developers.google.com/ (Search PWA)
The guides on here are really good at taking you through everything you need.
Good luck with your PWA!
Random thought (edit)
One thing that makes me question this, though, based on some of the examples and guides I have seen, is that data storage is often handled in a more ad-hoc manner. For example, if the PWA calls out to an API, there are two methods I have come across: you can manage cached data either in the application or in the service worker. In the app-side approach, if the API call to get JSON fails, the app falls back to reading from IndexedDB, which was hopefully pre-populated the first time the app called the API.
Or you can use self.addEventListener('fetch', (event) => { /* ad-hoc stuff here */ }) in the service worker; this is where you can match either an asset or a data request and hijack the response with a cache or IndexedDB response instead, which removes the need to handle offline data in your app at all.
The first method makes me feel uneasy, so I'm going to go with the addEventListener approach in the service worker, because that's what it is there for, and my app then does not have to worry about it.
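To make that concrete, here is a rough sketch of the service-worker-only approach, network-first with a cache fallback (the /api/ path and cache name are assumptions for illustration):

// sw.js -- serve cached JSON when the network is unavailable
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  // Only hijack our own data requests here; let everything else fall through.
  if (url.pathname.startsWith('/api/')) {
    event.respondWith(
      fetch(event.request)
        .then((response) => {
          // Network worked: stash a copy, then return the live response to the page.
          const copy = response.clone();
          caches.open('api-cache').then((cache) => cache.put(event.request, copy));
          return response;
        })
        .catch(() => caches.match(event.request)) // offline: fall back to the cached copy
    );
  }
});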
I want to get the fields necessary for a particular issue to make a transition from Open to Resolved and from Resolved to Closed. Any ideas on how to move forward with this?
All I can find on the internet are examples of adding custom fields.
To perform a transition in JIRA using the REST APIs:
You need to GET the transition id from
https://<jira-host>/rest/api/2/issue/<issue-key>/transitions?expand=transitions.fields
From the response, you can find the possible transitions for the issue, the corresponding transition ids, and the fields (with a required flag) for each transition.
You can then generate a request JSON from the above response and POST it to
https://<jira-host>/rest/api/2/issue/<issue-key>/transitions
To do multiple transitions, you need to come up with some logic: get the issue status each time in a loop and perform the next transition based on the response.
This is very high level; hope this helps.
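As an illustration only, here is a rough sketch of those two calls in JavaScript with fetch (the host, issue key, credentials, transition name, and resolution field are all placeholders):

// Resolve an issue: look up the transition, then post it back with its required fields
async function resolveIssue() {
  const base = 'https://<jira-host>/rest/api/2/issue/<issue-key>';
  const auth = 'Basic ' + btoa('user:password');                 // placeholder credentials

  // 1. GET the possible transitions (and their fields) for the issue
  const { transitions } = await fetch(
    base + '/transitions?expand=transitions.fields',
    { headers: { Authorization: auth } }
  ).then((r) => r.json());

  // 2. Pick the transition you want, e.g. Open -> Resolved
  const resolve = transitions.find((t) => t.name === 'Resolve Issue');

  // 3. POST the transition id plus any fields flagged as required
  await fetch(base + '/transitions', {
    method: 'POST',
    headers: { Authorization: auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      transition: { id: resolve.id },
      fields: { resolution: { name: 'Fixed' } },                 // example required field
    }),
  });
}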
When I use LIMIT to make pages of results, how do we usually know the offset i.e. which page should be retrieved for each request?
Via cookies?
Via a query string parameter, traditionally. URLs typically include a ?page=3 to request page 3, like you'll see all over Stack Overflow: https://stackoverflow.com/questions?page=2&sort=newest
This is something you absolutely should not do through cookies. The URL should include everything necessary to navigate to the given page. Consider a user bookmarking page three of your results, or trying to link somebody else to the page they're looking at: Using cookies to store pagination data breaks these situations completely.
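As a sketch of what the server then does with that parameter (Express and a generic parameterized db.query are assumptions here):

// GET /questions?page=3  ->  rows 41-60 with a page size of 20
app.get('/questions', async (req, res) => {
  const pageSize = 20;
  const page = Math.max(parseInt(req.query.page, 10) || 1, 1); // default to page 1
  const offset = (page - 1) * pageSize;

  // Parameterized query keeps the page number out of the SQL string itself.
  const rows = await db.query(
    'SELECT * FROM questions ORDER BY created_at DESC LIMIT ? OFFSET ?',
    [pageSize, offset]
  );
  res.render('questions', { rows, page });
});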
Usually via request parameters in action frameworks (RoR, ZF, Cake, Django), and via session state in component frameworks (Prado, JSF, ASP.NET). The session is usually associated with the request by a cookie.
Using the session to store the current page is quite common in business-oriented applications, where the state of the GUI might be very complicated and the practicality of being able to bookmark a page is limited.
I have a web page that has 70,000 characters. As you know, when doing translation through the Google API you can only send up to 5,000 characters at a time. That means I have to send data to Google 14 times (70000/5000), which takes a lot of time, and only then is my page displayed. Is there a way to speed up the process?
Thanks
Have you tried caching the translation?
If you are using some AJAX framework (you don't mention what your web page is created with, e.g. C#), then you can make it faster by making the API calls via the AJAX framework.
It would look something like this (pseudo-code, since we don't know what you are using):
Serve web page (almost instant)
Web page starts AJAX call:
    Break text into chunks
    For each chunk:
        Translate via API
        Append to the page
This way the user will see the page immediately, and will also see the translation appear piece by piece as it is processed, instead of having to wait until the end.
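A rough sketch of that flow in browser JavaScript; the chunking is deliberately naive, and the endpoint and response shape reflect the v2 Translation API as I understand it, so treat the details as assumptions:

// After the page itself has loaded, translate the long text piece by piece.
async function translatePage(fullText, target) {
  const chunks = fullText.match(/[\s\S]{1,5000}/g) || [];   // naive 5,000-char chunks
  const out = document.getElementById('translated');        // placeholder target element

  for (const chunk of chunks) {
    const resp = await fetch(
      'https://translation.googleapis.com/language/translate/v2?key=YOUR_API_KEY',
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ q: chunk, target }),
      }
    ).then((r) => r.json());

    // Append each translated chunk as it arrives instead of waiting for all 14.
    out.textContent += resp.data.translations[0].translatedText;
  }
}

In practice you would split on sentence boundaries rather than a flat 5,000-character cut, and you could fire several chunks in parallel with Promise.all if you are careful to append them in order.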
My best bet would be to generate the page in one language, then ask Google to translate it through HTTP and display the result as your own, to make it seamless for the user. I believe that is what Google Chrome does when translating web pages.
Example of URL that makes Google translate the whole web page:
http://translate.google.com/translate?hl=en&sl=ru&tl=en&u=http%3A%2F%2Flinux.org.ru%2F
Of course, another option is to use the Google Translate API and cache the result if the page content does not change frequently.
Go to the JavaScript file Google serves; it will also lead you to the CSS file. Make a local copy (or perhaps two), or you may be able to add the CSS to your own stylesheet, and host the JavaScript on your web site in its own directory. Add a small piece of code that refreshes that JavaScript every so many seconds or minutes; this will make the transition much faster, since you are just refreshing the content they give. Have fun :) Ultimately, you can also send a second translation request at the same time as the first, for the text after character 5,000, which should be relatively easy to do.
As the question states, I'm trying to figure out how google tracks clicks on search results. When you view the source, you find the following:
<em>Yahoo</em>!
The rwt function, which is pretty messy, looks like this:
window.rwt = function (b, d, e, g, h, f, i, j) {
  var a = encodeURIComponent || escape, c = b.href.split("#");
  b.href = ["/url?sa=t\x26source\x3dweb", d ? "&oi=" + a(d) : "", e ? "&cad=" + a(e) : "", "&ct=", a(g), "&cd=", a(h),
            "&url=", a(c[0]).replace(/\+/g, "%2B"), "&ei=7_C2SbqXBMW0-AbU4OWnCw", f ? "&usg=" + f : "", i, c[1] ? "#" + c[1] : ""].join("");
  b.onmousedown = "";
  return true;
};
So it looks like Google is changing the href of the a tag to /url?... which I'm assuming is where their tracking is. From LiveHeaders in Firefox, it looks like this page is redirecting the browser to the original href of the a tag.
Is this correct and is this the best method of tracking clicks on links on your site, such as ads?
It's actually changing the href of the link rather than the window location. It's setting b.href, and b refers to the link itself. This runs in onmousedown, so when you release the mouse and the click is handled you magically get sent to that new href.
Any click tracking pretty much comes down to sending the user to some equivalent of Google's /url?... script, counting the click, and performing a 302 redirect to the real destination.
This javascript href replacement has the advantage of automatically filtering out any robots that don't run scripts. The downside is that it also filters out any real people that have javascript disabled. If, like Google, you just care which link is most popular with your real human users, this works out quite well. The clicks that you do record should be representative of real human traffic, and you can safely ignore the clicks from non-javascript users because they probably have the same preferences anyway.
Most adverts just link straight to the counting URL with no javascript replacement. This means that you definitely count every real click on the link, but you need to worry about filtering out requests from robots, since they'll now see your counting URL too.
Which you prefer really depends on why you want to track the clicks.
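Either way, the counting endpoint ends up looking roughly the same. A stripped-down sketch of the idea (the .ad selector, data-ad-id attribute, /track endpoint, recordClick helper, and Express are all assumptions, not anyone's real implementation):

// Client side: swap the href to the counting URL at the last moment,
// just like Google's rwt does ({ once: true } mirrors b.onmousedown = "").
document.querySelectorAll('a.ad').forEach((link) => {
  link.addEventListener('mousedown', () => {
    link.href = '/track?ad=' + encodeURIComponent(link.dataset.adId) +
                '&url=' + encodeURIComponent(link.href);
  }, { once: true });
});

// Server side (Express): count the click, then 302 to the real destination.
app.get('/track', (req, res) => {
  recordClick(req.query.ad);               // hypothetical counting function
  res.redirect(302, req.query.url);
});

If you go this route, validate or sign the url parameter so the tracking endpoint cannot be abused as an open redirect.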
I think most people expect ads to click through via some sort of tracking system, so I shouldn't worry too much about following this particular JavaScript implementation. As much as anything, it's probably there to ensure that the user sees the correct link in the browser's status bar, that various other interesting bits of info (search terms, position in the result set at the time, who you are, etc.) are sent across (without you realising it), and that the links still work if JavaScript is disabled.
Generally, yes: directing the user through some tracking page with the ID of the ad they have clicked on, and possibly some additional indication of where they have come from, is sensible. That way you aren't relying on other mechanisms (such as JS event handlers) to track clicks on the links, and it's certainly the way most ad systems I've used work.