OK, I am not sure about this and am hoping to get some insight. This is more of a philosophy question than a code-specific one. Is this approach realistic, or do I need to do something very differently?
I want to embed a PDF in an email as described in Ray Camden's blog post.
I am trying to get the PDF from the dynamic generator (existing code), NOT generating it on the page as Ray describes.
I have a URL, basically http://blah.com/index.cfm?pdfId=490 (the real URL is private).
It works in a browser: I get a PDF that opens just fine.
When I try to fetch it with CFHTTP, I do not get any binary data.
<cfhttp url="#arguments.url#"
getasbinary="auto" method="get"
result="urlContent"></cfhttp>
"FileContent" (empty/null)
"Header" (HTTP/1.1 200 OK ...)
"mimetype" (text, not pdf)
"Status" (200)
Can someone explain what is occurring, i.e. why do I not get a PDF through CFHTTP when the same link works in a browser?
Should I force binary (getasbinary="yes")? I expected "auto" to work.
Does the browser issue a new request?
FYI I am running CF 9.0.2 (developer)
I think Leigh and Ray answered the question, or helped me solve the problem. While debugging, saving the results to disk (rather than looking at them in the debugger) was the key to finding my error.
That and a good weekend's rest did wonders too!
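(For anyone hitting the same thing: a quick way to sanity-check what the endpoint actually returns, independent of ColdFusion, is a minimal Python sketch like the one below. The URL is the placeholder from above, and the third-party requests package is assumed.)

# Sanity check: what does the PDF endpoint actually send back?
# The URL is a placeholder; substitute your real one.
import requests

resp = requests.get("http://blah.com/index.cfm?pdfId=490")
print(resp.status_code)                    # expect 200
print(resp.headers.get("Content-Type"))    # application/pdf, or text/html?
print(resp.content[:5])                    # a real PDF starts with b'%PDF-'

# Save the raw body to disk and try opening it -- inspecting the actual
# bytes, rather than a debugger view, is what surfaces this kind of error.
with open("check.pdf", "wb") as f:
    f.write(resp.content)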
I only found APIs to get the issue list, issue content, and the issue comments list and content; there is no API for issue content edit history or issue comment edit history.
No, this cannot currently be done purely from the API.
However, if we reverse engineer the way GitHub loads past edits in the web interface, and do a bit of scraping, we can accomplish the same thing without the API. Unfortunately, this means that we don't have the reliability of an API - GitHub's web interface is liable to change at any time, breaking our code. But it's better than nothing!
So, first we need a log of all the edits for a comment. Let's do this with the comment https://github.com/seisvelas/crypsee/issues/1#issue-874033952 (from a test repo provided by the gentleman who set the bounty on this question). In order to get a log of this issue's comments, we will need to base64 encode the issue number with '05:' then the word 'Issue' at the beginning. Why '05:'? I have no idea. But it's always there and it won't work without it. So we'll be base64 encoding the string "05:Issue874033952", which gives us MDU6SXNzdWU4NzQwMzM5NTI=
Great, now we insert MDU6SXNzdWU4NzQwMzM5NTI= into this URL scheme: https://github.com/_render_node/{BASE64 ENCODING HERE}/comments/comment_edit_history_log, resulting in a link to https://github.com/_render_node/MDU6SXNzdWU4NzQwMzM5NTI=/comments/comment_edit_history_log
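(For concreteness, a small Python sketch of the encoding step just described, using the example issue number from above.)

# Build the edit-history log URL for an issue, as described above.
import base64

issue_id = "874033952"
marker = base64.b64encode(f"05:Issue{issue_id}".encode()).decode()
print(marker)  # MDU6SXNzdWU4NzQwMzM5NTI=

log_url = f"https://github.com/_render_node/{marker}/comments/comment_edit_history_log"
print(log_url)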
Following that link, we see an edit history, but not the contents of the edits themselves:
However, this gives us the information we need! If we look at the HTML, we see that all edits prior to the current edit are defined as buttons with a link to that edit:
<button
type="button"
class="btn-link dropdown-item p-2"
role="menuitem"
data-edit-history-url="/user_content_edits/MDE1OlVzZXJDb250ZW50RWRpdElzc3VlRWRpdDo1MzIxODcxNzE="
>
The URL pointed to by the data-edit-history-url is the same URL loaded via the browser's networking tab when clicking to view a past edit in the web interface!
Unfortunately, if you attempt to view that page on its own, you get a 404. It is intended to be viewed only from the web interface. But that's no problem: just go to the web interface, view one of the edits, and copy the headers it sends along. In my case I'm using Chromium, so I just find the request to the edit in my networking tab, right-click, hit 'Copy as fetch (Node.js)', and voilà, with those headers I'm good to go!
For example, for the comment we've been using this whole time, I make that request and get back a bunch of HTML. The content of the original edit is near the end:
<ins><p class="rich-diff-level-zero">before edit</p></ins>
There it is! I could write a script to automate this, but then I'd be doing everything for you :3 Suffice it to say that with a day's work of cleverly organized scraping, this is roughly what you must do in order to view these revisions. If someone does make such a tool, the OSINT community will surely be immensely grateful!
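(As a starting point only, here is a rough Python sketch of the request step. Every header and cookie value is a placeholder you must copy from your own logged-in browser session, the third-party requests and bs4 packages are assumed, and the whole thing may break whenever GitHub changes its markup.)

# Rough sketch: fetch one past edit and pull out the <ins> content.
# The header/cookie values are placeholders -- copy real ones from your
# browser's networking tab, as described above.
import requests
from bs4 import BeautifulSoup

edit_url = "https://github.com/user_content_edits/MDE1OlVzZXJDb250ZW50RWRpdElzc3VlRWRpdDo1MzIxODcxNzE="
headers = {
    "accept": "text/html",
    "x-requested-with": "XMLHttpRequest",   # placeholder header set
    "cookie": "user_session=PLACEHOLDER",   # from a logged-in session
}

html = requests.get(edit_url, headers=headers).text
soup = BeautifulSoup(html, "html.parser")
for ins in soup.find_all("ins"):            # edit content appears in <ins> tags
    print(ins.get_text())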
To see the features of the GitHub API, the best source is the official reference:
https://docs.github.com/en/rest/reference/issues
Check the items you mentioned (issue comments, issue edit history, etc.) in the link above.
As far as I could see, it is possible to retrieve issue comments, but I did not see a section for issue edit history.
I also suggest the following link on the issue edit history question:
https://github.com/isaacs/github/issues/954
I've searched all over the place and I can't figure out what I'm doing wrong. No matter what I do, I still get "Page does not contain authorship markup" in the structured data testing tool.
I have two sites with almost identical pages. The rel=author tags are inserted the same way.
Here is an example of one page that works: http://bit.ly/18odGef
Here is an example of one page that doesn't: http://bit.ly/12vXdAm
I tried adding ?rel=author to the end of the Google+ profile URL, which doesn't seem to work on either site. I am not blocking anything via nofollow or robots.txt. The tool is not being blocked by a firewall or anything. Can anyone see what I'm doing wrong here and why it works for one site, but not the other?
FYI, the site that does not work used to work without a problem. I hadn't changed anything about how the author markup was organized until I realized it wasn't working anymore.
When I test both of those pages in Google's structured data test tool, it shows that authorship is working correctly for both pages.
Here are the results for the page you said was working: https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fnikonites.com%2Fd5100%2F2507-d5100-vs-d90.html%23axzz2rFFm1eVv
Here are the results for the page you said wasn't working: https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fcellphoneforums.net%2Fsamsung-galaxy%2Ft359099-enable-auto-correct-galaxy-note-ii.html%23axzz2rFFlwz3W
I am currently working on a Flattr plugin for a popular open-source RSS reader (tiny tiny RSS).
I am using the lookup API for the first time and am unsure why I am getting mixed results.
So I'm unsure whether I'm using the API correctly and want to confirm with you experts whether I've got something basic wrong.
First, let's see if I can come up with an API call that looks up a thing successfully. I look at the Flattr page of thing 1066706 (I can't post the whole URL here as SO only allows me two URLs for this whole post). On that page, I find the official URL which Flattr stores for that thing and look that up with the API: see here
This returns {"type":"thing","resource":"https:\/\/api.flattr.com\/rest\/v2\/things\/1066706", ... so that's good.
But it seems this method is not a sure way to test whether things exist. Here is an example that doesn't work: I open the Flattr page of thing e7579b349cb7b319b28d883cd4064e1e.
The URL I find on that page is indeed the URL of that article, and I don't see any other URL it might have. I look that up in the same way as above: check this
Alas, I get {"message":"not_found","description":"No thing was found"}
(I also tried both of these with encoded URLs, but got the same result. I figured this is easier to read for you.)
So, why would that second thing not be found? Thanks for any enlightenment.
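(To make the calls concrete, here is a minimal Python sketch of the lookup I'm doing. I'm assuming the v2 lookup endpoint at /rest/v2/things/lookup/, and the article URL below is a placeholder for the one listed on the thing's Flattr page.)

# Minimal sketch of the Flattr v2 lookup call described above.
import requests

def lookup(url):
    resp = requests.get(
        "https://api.flattr.com/rest/v2/things/lookup/",
        params={"url": url},
    )
    return resp.json()

print(lookup("http://example.com/some-article"))  # placeholder article URL
# A found thing comes back as {"type": "thing", "resource": ...};
# an unknown URL comes back as {"message": "not_found", ...}.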
The id "e7579b349cb7b319b28d883cd4064e1e" is not a real thing id but a hash that identifies a temporary thing for a not yet submitted thing - it's part of Flattr's autosubmit functionality: http://developers.flattr.net/auto-submit/
So the system is very correct in telling you that a thing for that URL doesn't exist - someone would need to flattr that thing for it to become submitted for real and created in the system with a real id to it.
(Just for reference - for some URL:s, like Twitter URL:s, Flattr can actually answer that the URL is flattrable even though it can't find it in the system: {"message": "flattrable", "description": "Thing is flattrable "} That way you can now that it is possible to flattr that thing without you having to use any kind of flattr-button/url supplied by the author to be able to flattr the URL)
Also, if you don't know it yet: for an RSS reader you should primarily be looking for rel-payment links to find out whether an entry is flattrable or not; see http://developers.flattr.net/feed/ and http://relpayment.com/
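(A small sketch of that check, assuming the feed is parsed with the third-party feedparser package; the feed URL is a placeholder. Entries expose their links, and a rel="payment" link is what marks an entry as flattrable.)

# Sketch: find rel="payment" links in feed entries via feedparser.
import feedparser

feed = feedparser.parse("http://example.com/feed.xml")  # placeholder feed URL
for entry in feed.entries:
    payment = [l["href"] for l in entry.get("links", [])
               if l.get("rel") == "payment"]
    if payment:
        print(entry.get("title", ""), "->", payment[0])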
Is there any API available from Adobe that would enable me to convert Office documents (docx, xlsx, pptx, etc.) to the PDF file format?
I would prefer to use .NET to do so, but if I have to I can resort to C/C++.
I've already tried the Adobe SDK, but it seems to me it works by automating the Acrobat application instead of giving me access to the underlying functionality. If it's possible, and anyone would care to give me an example, I'd be very thankful; after many hours of googling I was unable to find a good answer (though I found a lot of samples doing the opposite: converting from PDF to Word).
One last thing: I need it to be a library from Adobe. So PDFCreator, BCL EasyPDF, Aspose.Words/Cells/Slides, etc., unfortunately, won't help me.
UPDATE 1:
I decided to ask this question in the forum because, first, I can't believe that Adobe wouldn't have a library to do this. Of course, that may be the case, but it would be very strange.
UPDATE 2:
I also already looked into the AdobePDFMakerX.Word interface. I tried calling its CreatePDF(string in, string out) method, but to no avail. It always returns false, and there is no error description that I can use.
You can convert a doc file to a PDF file using the Adobe PDF Services API.
In short, it has two parts:
make a POST request (providing the required parameters) and take the x-request-id from the response header
make a GET request (providing the required parameters) and as the response you will get your PDF document
It is working fine for me; a rough sketch of the flow is below.
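(A rough Python sketch of that two-step flow. The endpoint paths, header names, status values, and credentials here are placeholders based on my reading of the PDF Services REST docs, so check them against the current documentation before relying on them.)

# Rough sketch of the two-step Adobe PDF Services flow described above.
# All endpoint paths and credential values are placeholders.
import time
import requests

API = "https://pdf-services.adobe.io"           # assumed base URL
headers = {
    "Authorization": "Bearer ACCESS_TOKEN",     # placeholder OAuth token
    "x-api-key": "CLIENT_ID",                   # placeholder client id
}

# Step 1: POST the job; the job id comes back in the x-request-id header.
job = requests.post(f"{API}/operation/createpdf",
                    headers=headers,
                    json={"assetID": "SOME_ASSET_ID"})  # placeholder body
request_id = job.headers["x-request-id"]

# Step 2: GET the job status until it finishes, then download the PDF.
while True:
    status = requests.get(f"{API}/operation/createpdf/{request_id}/status",
                          headers=headers).json()
    if status.get("status") != "in progress":   # assumed status value
        break
    time.sleep(2)

pdf_url = status["asset"]["downloadUri"]        # assumed response shape
open("out.pdf", "wb").write(requests.get(pdf_url).content)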
Are you sure Aspose.Words didn't work for you? I tested the code sample below and it works fine.
// Convert a .docx file to PDF with Aspose.Words.
string filePdf = @"D:\Projects\original.pdf";
string fileDocX = @"D:\Projects\New.docx";
Aspose.Words.Document doc = new Aspose.Words.Document(fileDocX);
doc.Save(filePdf, Aspose.Words.SaveFormat.Pdf);
The Graph API is not returning an image (the "picture" attribute) for objects corresponding to community pages, which used to be returned earlier. For example, https://graph.facebook.com/178790412179919 does not have a picture attribute, whereas the corresponding page has an image.
Also, an FQL query on the "albums" connection for some objects does not return a "cover_pid" attribute for albums of type "profile", which again used to work earlier.
Does anybody know if anything has changed in the Graph API in this respect in the last couple of weeks? (I am fairly confident it used to work in the expected way.) I looked through the Facebook API release notes but could not find any changes corresponding to this. Please let me know if this is not an appropriate post for this forum.
https://developers.facebook.com/docs/reference/api/page/
picture is a connection, not an attribute. So ...
https://graph.facebook.com/178790412179919/picture
And as the docs say: Returns a HTTP 302 with the URL of the user's profile picture.
Kinda goofy? Yes, but it works exactly as the docs say it does. I suspect they implemented it this way so it could easily be used in an <IMG> tag.
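(If you want the image URL itself rather than the image bytes, you can keep the 302 and read its Location header. A minimal sketch, assuming the Python requests package:)

# Sketch: resolve the /picture connection's 302 to get the image URL.
import requests

resp = requests.get(
    "https://graph.facebook.com/178790412179919/picture",
    allow_redirects=False,   # keep the 302 instead of following it
)
print(resp.status_code)              # expect 302
print(resp.headers.get("Location"))  # the actual image URL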
UPDATE:
It still works via FQL. In your case:
https://api.facebook.com/method/fql.query?query=SELECT+page_id%2C+pic+FROM+page+WHERE+page_id+%3D+178790412179919&format=json
I can confirm that this PREVIOUSLY worked, but NO LONGER works. Facebook have removed the picture connection from Community Pages.
I suspect the reason is that most of these images are pulled from Wikipedia, and there was a licensing / attribution issue.
Unfortunately, Facebook is no longer a reliable source of images for entities (e.g. bands).