I just started managing a website for the company I work for. It is still running ASP Classic, so I have had to learn quite a bit.
Recently our management has started to post a dynamically updated PDF to the website every few hours.
The problem I have been experiencing is that some users who access the file get old cached versions of it, despite my changing the response headers to prevent this.
While searching for a solution I came across this Stack Overflow post:
Right way to have asp.net not cache pdf files
It was written for ASP.NET rather than ASP Classic, but thanks to that post I was able to come up with a working solution in VBScript, and I thought I should share it with others.
The code below appends the current time, converted to a Double, to the link's query string, producing a unique URL each time the page is loaded and tricking the browser into treating it as a new PDF.
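Here is a reconstruction of the link (report.pdf is just a placeholder file name; the CStr(CDbl(Now)) query string is the part that matters):

    <a href="report.pdf?<%= CStr(CDbl(Now)) %>">Link to the PDF</a>

Because the query string differs on every page load, the browser never has a cached copy of that exact URL and fetches the PDF fresh.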
Breaking that down:
- Now is the current time
- CDbl(Now) converts the current time into a Double
- CStr(CDbl(Now)) converts the Double into a string
Related
I am using txt2pdf to convert text files to PDFs. It's been working great, but I got a new PC and I can't get it to retain the settings for lines per page. I don't see any contact information on their web site.
https://www.sanface.com/txt2pdf.html
Does anyone know where older programs might store their data?
I found it using a system file watcher:
C:\Users\[user]\AppData\Local\VirtualStore\Windows\win.ini
Problem: Need to convert local HTML (with local images etc.) to PDF from an AIX box running UniVerse 11.2.5 with System Builder.
Current solution: FTP the HTML file over to a Windows server, which converts in batches and sends the e-mail to the destination.
Proposed solution: Do everything on the AIX box, from converting the HTML to PDF to sending the e-mail.
Current problem: Unable to find a way to convert local HTML to PDF on the AIX box. I have tried many different approaches, from installing Python 3 onward, but to no avail.
The only really difficult part of the process is getting the HTML to render into a format that will properly display your content as pages suitable for printing. There is a fair amount of magic that goes on between an HTTP GET and clicking Print in a browser window, and it needs to be accounted for.
I was trying to accomplish something similar many moons ago on AIX, but I ran into a skill-level/time wall because I would essentially have had to create a headless browser to render the HTML. It looks like there are now some utilities that you might be able to leverage. I found a recently updated question on Super User that actually got me somewhat excited, especially since I don't use AIX anymore, so precompiled binaries and well-understood, easily attainable dependencies are something I can actually have in my life.
https://superuser.com/questions/280552/how-can-i-render-a-website-as-an-image-from-the-shell
Good Luck.
There seem to be several questions rolled into this one item.
Converting HTML to PDF is, strictly speaking, just data manipulation that you could do in BASIC, but writing such code would be a large task. The option you currently use, sending it to another system, is valid but puts more points of failure into the system. I would think you could find code to do it on the AIX box.
Rocket plans on getting MV Python to work on AIX; this will make converting HTML to PDF much easier, since there are a lot of open-source modules.
As for my suggestion of using sockets, that would be if you intend to send it to a service that will take the HTML and return the PDF document.
i.e. Is there a web service for converting HTML to PDF?
Once you have the PDF document, you can either store it in a UniVerse type-19 file, or base64-encode it and store it in a UniVerse hashed file.
Hope this helps,
Mike
I'm trying to amend our content management system so it'll handle SQL database failures more gracefully. It's a bunch of ASMX pages, and a Helpers.vb file in which I've written a SQL connection tester function.
Each of the ASMX pages calls the same function.
I need to create a variable I can check that's persistent and performant; otherwise I'm going to have to fall back on something disastrously slow, like reading a text file every time I set up a SQL connection string.
I've tried using application caching, but either it doesn't work in the context of my Helpers.vb file, or I've made a mess of the syntax. One problem that's already stymied some of the approaches I've found via Google: I can't use 'Imports System.Web.Caching' - IntelliSense doesn't show the 'Caching' part.
Has anyone got any example code that might get me up and running? Or an alternative approach?
@Mike,
Many thanks, now I'm using HttpRuntime.Cache correctly... it works!
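For anyone who finds this later, here is a minimal sketch of the pattern that ended up working for me (the cache key and 30-second timeout are just examples, and TestSqlConnection stands in for the real tester function):

    Imports System.Web

    Public Module Helpers
        Private Const CacheKey As String = "SqlConnectionOk" ' example key name

        Public Function IsSqlAvailable() As Boolean
            ' HttpRuntime.Cache is reachable from any class, not just from
            ' pages, which is why it works inside Helpers.vb.
            Dim cached As Object = HttpRuntime.Cache(CacheKey)
            If cached IsNot Nothing Then Return CBool(cached)

            Dim ok As Boolean = TestSqlConnection()
            ' Cache the result for 30 seconds so the ASMX pages don't
            ' re-test the connection on every request.
            HttpRuntime.Cache.Insert(CacheKey, ok, Nothing, _
                DateTime.UtcNow.AddSeconds(30), _
                System.Web.Caching.Cache.NoSlidingExpiration)
            Return ok
        End Function

        ' Stand-in for the real SQL connection tester described above.
        Private Function TestSqlConnection() As Boolean
            Return True
        End Function
    End Module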
Thanks everyone for taking the time to post :)
I am looking for a free ASP script that will allow me to upload files to my server while limiting the size and type of the uploaded files. It should also inform the user of any errors rather than throwing him to an IIS error page because of the IIS size limits.
I'd really appreciate it if there were also a check of the size limit before the file is actually uploaded (meaning - in the browser).
Is there anything like this?
Thanks
Tal
I found this code by a guy named Lewis Moten several years ago:
http://www.planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=8525&lngWId=4
It includes checking of file size, though it does happen server-side, not client-side.
I used this code for a project a few years ago and it worked great.
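If you only need the server-side size rejection, here is a minimal sketch of that part on its own (not taken from the linked code; the 200 KB limit is an example, and it must stay at or below the IIS request limit or IIS will error out before the script runs):

    <%
    ' Reject oversized posts before parsing the upload.
    Const MAX_BYTES = 204800 ' 200 KB, example limit

    If Request.TotalBytes > MAX_BYTES Then
        Response.Write "Sorry, that file is too large. Please upload 200 KB or less."
        Response.End
    End If

    ' Safe to read the multipart body now and parse it for the file
    ' type check, e.g. with the Lewis Moten script linked above.
    Dim binData
    binData = Request.BinaryRead(Request.TotalBytes)
    %>

Checking the file type still requires parsing the multipart data, which is what the linked script does.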
We have our production server running our website. Then we have a test server which has the exact same data but with code changes for some new functionality. This web app has over 500 pages.
Is there any program that can:
1. Log in to the test site
2. Crawl through each page and save it as HTML
3. Compare each saved page with the same page saved from the live site?
This way we can make sure that new features that we add to our test site will not break the live site when code updates are applied to production.
I am currently using the WinHTTrack website copier and then comparing the test and live folders with a code-comparison tool like Beyond Compare. This works OK, but a lot of files show up as changed simply because of the domain-name differences.
Looking forward to ideas / solutions for this problem.
Regards
Have you looked at using Watir for this? It's not exactly the thing you are looking for, but it might allow you some more granularity in your tests and ensure the site is functionally identical, rather than getting caught up on changing GUIDs, timestamps and all the other things that tend to change across any significantly sized website from day to day as part of its standard functionality.
Apparently you can't make consistent, reproducible builds in your project, can you? I would recommend moving towards that in the long run; it will save you a lot of headaches. That way you would know exactly what was deployed to which server and when, so there would be no more need to bend over backwards to get the deployed sources back like this...
I know this is not a direct solution to your problem... but maybe it is worth comparing whether you would save more in the long run by investing the effort into your build process now, instead of implementing this workaround (and then improving your build process anyway - because one day you will almost surely need to).
wget has a --convert-links option; there are also some options to preserve cookies that might let you crawl while logged in: http://drupal.org/node/118759#comment-664498
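A hedged sketch of how that could look (the hostnames are examples, and cookies.txt would come from an earlier login, e.g. via wget's --save-cookies option):

    # Mirror both sites with absolute links rewritten to relative ones,
    # which removes most of the domain-name noise before diffing.
    wget --mirror --convert-links --page-requisites \
         --load-cookies cookies.txt -P test https://test.example.com/

    wget --mirror --convert-links --page-requisites \
         -P live https://www.example.com/

    # Then compare the two trees:
    diff -r test live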
Use an offline downloader to download all files to your computer from both sources, then compare the folder contents using a free tool like Total Commander.
EDIT
Load both of your sources into a CVS repository and compare them there.