Slow Osclass-based website since osclass.org is down?

I am using the Osclass open source script for my classified ads website.
Since osclass.org is down and the Osclass Market is closed, my site's oc-admin is very slow; sometimes it takes more than 40 seconds to log in, and the oc-admin dashboard also loads very slowly.
What causes these issues, and what should I do to get rid of them?
The GitHub URL is
https://github.com/osclass/Osclass
For security reasons I cannot post my website's URL here.

I think this link will solve your problem.
https://www.valueweb.gr/forums/osclass/for-osclass-3-8-remove-all-your-website-dependencies-from-osclass-org/
Stripped/nulled all Market connections and admin dashboard visuals from Osclass 3.8.
Here are just the changed files needed to remove the Market connections. The folder structure is kept for your convenience. You just upload the two folders to replace 12 files.
If you prefer to do it manually here is the documentation:
https://docs.osclasscommunity.com/removing-market/introduction
Attachment: Osclass_380_Stripped_from_Market.zip
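If you can't apply the patched files right away, and assuming the slowness really is the admin waiting on HTTP requests to the now-dead osclass.org endpoints (the usual cause of multi-second oc-admin hangs), a stopgap is to point the hostname at the local machine in the server's hosts file so those requests fail immediately instead of waiting out a connect timeout:

    # /etc/hosts on the web server (a temporary workaround, not a fix).
    # Add any Market subdomains your install actually calls; remove the
    # entry once the Market code has been stripped out.
    127.0.0.1 osclass.org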


HubSpot: stage entire website including primary CSS file during website redesign

I am helping a client redesign their website through Hubspot. Their existing site is within Hubspot and their new website will also be within Hubspot. I am attempting to run the development through Hubspot's Content Staging as per this link: https://knowledge.hubspot.com/website-user-guide/how-to-redesign-and-relaunch-your-site-with-content-staging
The problem is that this appears to be on a per page basis rather than a per site basis. A problem with this is that I am unable to stage files such as the primary CSS file, or other CSS/JS files that I need to make changes to, but that the existing website will need to keep untouched throughout development.
Does anyone have any experience redesigning a HubSpot website who may have some advice for me? What am I missing?
Thanks!
When I'm redeveloping a HubSpot site within a client's portal, I'll do it on a template-by-template basis.
So, for example, if you're making a new home page, just attach any stylesheets and scripts you need in the template file itself, found in the Edit > Edit Head menu. Here you can disable the Primary CSS file, and you can also disable domain-specific stylesheets (the ones you add in Content > Content Settings) so that your template only uses the assets you want it to use.
Using this technique, you can work on individual templates, and then stage any pages that use those templates independently of the rest of the website. Finally, when you're ready to make your changes site-wide, simply move your assets from the Edit Head area within your template to the Content Settings area.

Main site URL removed from Google despite re-submitting it

I have a site, www.megalim.co.il.
Recently, due to a version upgrade, I discovered that I had a robots.txt file that disallowed all search engines. My Google ranking dropped, and I couldn't find the site's main page anymore.
I changed the robots.txt file to one that allows everything, and now Webmaster Tools no longer
reports that the site is blocked from Google.
I did this about 5 days ago. I've also used Fetch as Google
and submitted www.megalim.co.il for indexing with all linked pages,
but still, when I search for "site:www.megalim.co.il",
I get a bunch of results from my site, but not the main page!
What else should I look for?
Thanks!
Igal
You don't see your main page yet because of your old robots.txt. Five days is nothing for Google's bots to re-index your entire website.
Just wait a little and you will see your website fully indexed in Google's results.
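For reference, the blocking and allowing versions differ by a single character. The disallow-everything file that sometimes ships with an upgrade looks like the first block below; the allow-everything replacement is the second (standard robots.txt syntax, nothing site-specific):

    # Blocks all crawlers from the entire site
    User-agent: *
    Disallow: /

    # Allows everything (an empty Disallow; deleting robots.txt also works)
    User-agent: *
    Disallow: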
Issue sorted out.
Embarrassing...
Apparently we (inexplicably) had a nofollow, noindex meta tag.
After a day we started reappearing in Google.
Thanks :)
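For anyone who lands here with the same symptom: the culprit is a tag like the one below in the page's <head>. Removing it (or switching it to index, follow) is what lets Google re-include the page:

    <meta name="robots" content="noindex, nofollow">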

File hosting to track total downloads of a PDF

I am looking for a web service that will allow me to upload a PDF and can track the number of times it is downloaded regardless of the source. I am aware of Google Analytics event tracking on my site but the issue here is that I need to give the file path to a number of partner sites and would like a centralized place to view total downloads among all partners. A breakdown of downloads by source would be awesome but not necessary. I can't rely on getting numbers from all of the partners as some may not even have GA set up at all.
Does something like this exist? Free is nice but would be willing to pay for an account if necessary.
Thanks.
Ended up using bit.ly to shorten the path to the PDF hosted on my server, and gave the shortened URL to the partners. Bit.ly provides good click stats; simply add a "+" to the end of the shortened URL to see the results.
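If you'd rather self-host the same idea, a counting redirect is only a few lines. Here's a minimal sketch using only the Python standard library (the PDF URL, port, and log file name are hypothetical placeholders); it logs the Referer header, so you also get a rough per-partner breakdown:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PDF_URL = "https://example.com/files/report.pdf"  # hypothetical real location
    LOG_FILE = "downloads.log"                        # one line per download

    class CountingRedirect(BaseHTTPRequestHandler):
        def do_GET(self):
            # Record the hit (Referer gives a rough per-source breakdown),
            # then bounce the client to the real PDF.
            with open(LOG_FILE, "a") as f:
                f.write(self.headers.get("Referer", "direct") + "\n")
            self.send_response(302)
            self.send_header("Location", PDF_URL)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), CountingRedirect).serve_forever()

The total download count is then just the number of lines in the log file.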
Have you tried Ge.tt?
I believe it shows the number of times your files have been downloaded.

Script to download Google web history

How does one write a script to download one's Google web history?
I know about
https://www.google.com/history/
https://www.google.com/history/lookup?hl=en&authuser=0&max=1326122791634447
feed:https://www.google.com/history/lookup?month=1&day=9&yr=2011&output=rss
but they fail when called programmatically rather than through a browser.
I wrote up a blog post on how to download your entire Google Web History using a script I put together.
It all works directly within your web browser on the client side (i.e. no data is transmitted to a third-party), and you can download it to a CSV file. You can view the source code here:
http://geeklad.com/tools/google-history/google-history.js
My blog post has a bookmarklet you can use to easily launch the script. It works by accessing the same feed, but performs the iteration of reading the entire history 1000 records at a time, converting it into a CSV string, and making the data downloadable at the touch of a button.
I ran it against my own history, and successfully downloaded over 130K records, which came out to around 30MB when exported to CSV.
EDIT: It seems that a number of folks who have used my script have run into problems, likely due to some oddities in their history data. Unfortunately, since the script does everything within the browser, I cannot debug it when it encounters histories that break it. If you're a JavaScript developer, you use my script, and it appears your history has caused it to break, please feel free to help me fix it and send me any updates to the code.
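The iterate-and-export pattern the script uses is worth spelling out. Here is a sketch of it in Python, where fetch_page is a hypothetical stand-in for reading one 1000-record chunk of the feed (the real request parameters live in the script linked above):

    import csv

    def fetch_page(cursor):
        """Hypothetical stub: fetch one chunk of up to 1000 history records,
        returning (records, next_cursor); next_cursor is None on the last page."""
        raise NotImplementedError

    def export_history(path):
        cursor, total = None, 0
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "query", "url"])
            while True:
                records, cursor = fetch_page(cursor)
                writer.writerows(records)
                total += len(records)
                if cursor is None:
                    break
        return total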
I tried GeekLad's system; unfortunately, two breaking changes have occurred: #1, the URL has changed (I modified and hosted my own copy), which led to #2: the type=rss argument no longer works.
I only needed the timestamps... so began the best/worst hack I've written in a while.
Step 1 - https://stackoverflow.com/a/3177718/9908 - using Chrome, disable ALL security protections.
Step 2 - https://gist.github.com/devdave/22b578d562a0dc1a8303
Using contentscript.js and manifest.json, make a Chrome extension, and host ransack.js locally behind whatever service you want (PHP, Ruby, Python, etc.). Go to https://history.google.com/history/ after installing your content-script extension in developer mode (unpacked). It will automatically inject ransack.js + jQuery into the DOM, harvest the data, and then move on to the next "Later" link.
Every 60 seconds or so, Google will randomly force you to re-login, so this is not a start-it-and-walk-away process, BUT it does work, and if they up the obfuscation ante, you can always resort to chaining Ajax calls and sending the page back to the backend for post-processing. At full tilt, my abomination of a script collected 1 page of data per second.
On moral grounds I will not help anyone modify this script to get search terms and results, as this process is not sanctioned by Google (though apparently not blocked), and I recommend it only to individuals sufficiently motivated to make it work for themselves. By my estimates it took me 3-4 hours to get all 9 years of data (90K records) at 1 page every 900ms or faster.
While this thing is running, DO NOT browse the rest of the web, because Chrome is operating with no safeguards in place; most of them exist for a reason.
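For orientation, the manifest side of such an extension is tiny. The gist has the real files, but a minimal content-script manifest of that era (manifest v2; the name is a placeholder, not the gist's exact contents) looks like:

    {
      "manifest_version": 2,
      "name": "history-harvester",
      "version": "0.1",
      "content_scripts": [{
        "matches": ["https://history.google.com/*"],
        "js": ["contentscript.js"]
      }]
    }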
You can download your search logs directly from Google (in case downloading them with a script is not the primary purpose).
Steps:
1) Log in and go to https://history.google.com/history/
2) Just below your profile picture, towards the right side, you will find a settings icon. The second option is called "Download"; click on that.
3) Then click on "Create Archive", and Google will mail you the log within minutes.
Maybe, before issuing a request to get the feed, the script should add a User-Agent HTTP header of a well-known browser, so that Google decides the request came from that browser.
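As a sketch of that suggestion in Python (using the requests library; this assumes the lookup URL from the question still responds and that you copy authenticated session cookies out of a logged-in browser, so the cookie values here are placeholders):

    import requests

    HEADERS = {
        # Present ourselves as a well-known browser.
        "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                       "AppleWebKit/537.36 (KHTML, like Gecko) "
                       "Chrome/90.0 Safari/537.36"),
    }
    COOKIES = {"SID": "<copied from browser>", "HSID": "<copied from browser>"}

    resp = requests.get(
        "https://www.google.com/history/lookup",
        params={"month": 1, "day": 9, "yr": 2011, "output": "rss"},
        headers=HEADERS,
        cookies=COOKIES,
    )
    print(resp.status_code)
    print(resp.text[:500])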

Track incoming Referring site via link in PDF file?

I have recently placed an ad in a weekly publication that sends out a PDF file. My ad is directly linked so that the reader can click on it and go to my website. The PDF file is hosted on a different server and has to be downloaded and viewed on that site, not emailed or shared another way. I have Google Analytics and a couple of other stats-tracking programs installed, and I can't see the referring URL from this other site at all, in anything. Is there something I can ask the designer of the PDF file to include in her links to make them trackable? Or is this simply not possible?
Use Google Analytics Campaign Tagging.
This tool will help set it up. You'll want to classify the variables such that the source and the medium are set, at minimum.
http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55578
So, for example, if your URL is http://example.com, you could set the parameters as such:
utm_source: BlahNews
utm_medium: newsletter
utm_campaign: july10issue
Your resulting URL would be http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue
Google Analytics would track these hits under that Campaign, Source and medium.
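If you're generating many of these, it's worth URL-encoding the values programmatically rather than by hand. A small Python sketch with the example values above (urlencode is in the standard library):

    from urllib.parse import urlencode

    params = {
        "utm_source": "BlahNews",
        "utm_medium": "newsletter",
        "utm_campaign": "july10issue",
    }
    print("http://example.com/?" + urlencode(params))
    # http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue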
If the URL is displayed raw and you want to avoid displaying an ugly URL, you could set up an internal redirect to that URL. It looks like you're using WordPress, and there are a few free plugins that manage redirects like this (I happen to like Redirection).
So, you could tell the plugin to redirect
http://example.com/blahnews TO http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue
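If you'd rather skip the plugin, the same mapping is one line of Apache configuration (assuming an Apache host; this is mod_alias, using the example URLs above):

    Redirect 302 /blahnews http://example.com/?utm_source=BlahNews&utm_medium=newsletter&utm_campaign=july10issue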
Can you ask them to put some token in the query string of the URL to the site?