I just grabbed Chrome's "Cookies" file (it shows up with a .bin extension) from my Windows 10 machine, at this path:
C:\Users\\AppData\Local\Google\Chrome\User Data\Default\
Since I previously only used Firefox, which doesn't encrypt the cookie values in its cookies database, I don't know how to go about getting the cookie values out of the Chrome database.
Just by looking into it with SQLite Explorer I learned that the values are encrypted in a dedicated column of the table and stored as BLOBs, so all I can get out of it is binary data that is useless to me.
I have seen many solutions around the Internet, mostly scripts written in Python and Perl, but none of them worked for me.
At this point I really don't know how to handle the situation, and I really need to get those cookie values.
I can program in Python 3 and Java, but I don't know enough about cookies, or about the way Chrome encrypts them, to write a decryption script myself, so if you have any advice on where or how I should get started I would be grateful.
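For reference, the scripts I found all seem to follow the same basic pattern: read the encrypted BLOBs out of the "cookies" table and hand them to the Windows DPAPI function CryptUnprotectData. The sketch below is only that general shape, not one of the scripts that failed for me; as far as I understand it only works when run as the same Windows user Chrome ran as, and only for Chrome versions that wrap the values directly with DPAPI:

```python
# Rough sketch of the usual approach: read the BLOBs from the Cookies DB
# and ask Windows DPAPI to decrypt them. Requires pywin32 (win32crypt).
# Newer Chrome versions add an extra AES layer, so this may fail there.
import sqlite3
import win32crypt  # pip install pywin32

# Shortened path -- fill in your own user name.
COOKIES_DB = r"C:\Users\<you>\AppData\Local\Google\Chrome\User Data\Default\Cookies"

conn = sqlite3.connect(COOKIES_DB)
for host, name, encrypted_value in conn.execute(
        "SELECT host_key, name, encrypted_value FROM cookies"):
    try:
        # CryptUnprotectData returns (description, decrypted_bytes).
        value = win32crypt.CryptUnprotectData(encrypted_value, None, None, None, 0)[1]
        print(host, name, value.decode("utf-8", "replace"))
    except Exception:
        print(host, name, "<could not decrypt>")
conn.close()
```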
Please don't flag this as a duplicate: none of the other topics I read about this helped me at all, and I think that starting a new question might help me find a better solution. Thanks.
Hi everyone,
I would like to read the passwords saved by Firefox from a C++ application. I did a bit of reading, and no solution was conclusive:
I found that the passwords are stored in two files: logins.json and key3.db (one site also mentioned a third file, cert8.db, without any mention of it anywhere else).
I found that the encryption algorithm Firefox uses is 3DES, and that if no master password is specified, an empty, zero-character password is used.
I found that the values in the logins.json file are the ones encrypted with 3DES and encoded in Base64, while the key used for the encryption is stored in key3.db; one is useless without the other.
I found that I cannot read key3.db with sqlite3.dll, MySQL on XAMPP, or any other method I know of for reading SQL databases.
I found that although most Base64 strings end with an '=' character, the ones in my logins.json (a single-password setup created just for this research) did not. Putting them through the Base64 decoder in Notepad++ returns garbage (not surprising if they are 3DES-encrypted), but every online decoder I tried returned nothing.
I found no sources as recent as 2016.
Any source code that I found for this purpose did not compile and/or was full of errors. When I removed the errors as best I could, it had issues preparing the database.
Is there any information I am missing, or that is incorrect? I am truly lost here and I would appreciate it if someone were to point me in the right direction. Thank you.
You can have a look at the following C project: https://github.com/philsmd/pswRecovery4Moz
It is a bit old, but it still shows how to get the important data out of the key3.db file.
Firefox is open source, so just go through the source. The database is probably initialized using a password. There is a 'dbtest.c' file inside the Firefox source; have a look at that file.
Also try the NSS tools; maybe that helps.
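To give an idea of what that looks like, here is a rough Python/ctypes sketch of the NSS route; the same two calls (NSS_Init and PK11SDR_Decrypt) can be made from C++ just as well. The library and profile paths are placeholders, and it assumes no master password is set:

```python
# Rough sketch: let NSS itself decrypt the Base64 values from logins.json.
# NSS_Init() reads the keys from the profile directory (key3.db), and
# PK11SDR_Decrypt() then undoes the 3DES layer.
import base64
import ctypes
import json
import os

NSS_DLL = r"C:\Program Files\Mozilla Firefox\nss3.dll"  # placeholder; dependent DLLs must be resolvable
PROFILE = r"C:\Users\<you>\AppData\Roaming\Mozilla\Firefox\Profiles\xxxxxxxx.default"  # placeholder

class SECItem(ctypes.Structure):
    _fields_ = [("type", ctypes.c_uint),
                ("data", ctypes.c_void_p),
                ("len", ctypes.c_uint)]

nss = ctypes.CDLL(NSS_DLL)
if nss.NSS_Init(PROFILE.encode()) != 0:
    raise RuntimeError("NSS_Init failed -- wrong profile path?")

def decrypt(b64_value):
    raw = base64.b64decode(b64_value)
    buf = ctypes.create_string_buffer(raw, len(raw))
    inp = SECItem(0, ctypes.cast(buf, ctypes.c_void_p), len(raw))
    out = SECItem(0, None, 0)
    # With a master password you would need to authenticate to the key slot first.
    if nss.PK11SDR_Decrypt(ctypes.byref(inp), ctypes.byref(out), None) != 0:
        raise RuntimeError("PK11SDR_Decrypt failed")
    return ctypes.string_at(out.data, out.len).decode("utf-8")

with open(os.path.join(PROFILE, "logins.json"), encoding="utf-8") as f:
    for login in json.load(f)["logins"]:
        print(login["hostname"],
              decrypt(login["encryptedUsername"]),
              decrypt(login["encryptedPassword"]))
```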
The best way to sum up the issue would be with a screenshot, but unfortunately my screenshots auto-save to Downloads, so I can't upload them. What's happening is that when I open the file-select dialog to upload a file, it starts in the Drive folder and won't move to any other folder. I've tried restarting and resetting my machine, tried the upload process on a bunch of different platforms, tried the other user account on my machine, and tried updating my software, but none of these made any difference. I can get into my Downloads folder and open files from it fine outside of this context, and I can work around the problem by using drag-and-drop to upload on platforms that support it, but otherwise I'm stuck.
I've googled extensively to see if anyone else is having this issue and found this thread: https://productforums.google.com/forum/#!topic/chromebook-central/d7g9EEDsr8w but there's no helpful solution there (a powerwash was recommended, but that asker had already done one several times). I've also tried to find a solution with the help of my (programmer) employer, but had no luck, so he recommended asking here. It doesn't seem like a hardware issue, since I can still access the folder outside of this specific function, but if it were a problem with the operating system I'd expect it to happen across the board and therefore show up more in a Google search. If anyone has any suggestions I'd be very grateful, as it's getting quite tiresome having to drag and drop things into Facebook messages to get them uploaded! The machine is less than a year old, so if I can't find a solution I'll see about getting it replaced under warranty. Thanks in advance for any help, and please let me know if there's any key info I've left out!
Machine: Samsung Chromebook XE303C12
OS: Version 38.0.2125.110
After using YouTrack for quite a while, my organization is considering a move to JIRA (for many reasons). However, JIRA doesn't seem to include a YouTrack importer/migration tool out of the box (though there seem to be plenty of importers/migration tools going the other way).
Has anyone migrated from YouTrack to JIRA and can share any experience with this?
Edit:
To anyone who might run into this problem later, my final solution ended up looking something like this:
transfer all "basic" data by hand (user accounts, basic project setup etc)
write a small C# program using the atlassian sdk and the youtrack sdk that transfers from one to the other (creating empty placeholder issues if issues was missing due to someone deleting them in youtrack in order to keep numbering).
This approach worked well enough, and I managed to transfer pretty much all the data without losing anything very important (though of course all the timestamps are messed up now, but we saw that as an acceptable loss).
It's important to know that YouTrack handles issues moved from one project to another a bit counter-intuitively (they still show up in their original project even after they've been moved away from it, but they carry an issue id from their new project; a slight WTF when I ran into it the first time).
Also, while the Atlassian SDK did allow me to "spoof" the creator of an issue (that is, being logged in as user A and creating an issue while telling the system that it's actually user B who is creating it), it does not allow you to do this with comments. So in order to transfer those properly I had to loop through the comments, log in as the corresponding new user, and post the comments.
Also, attachments were a bit annoying to download from YouTrack, so I ended up downloading those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
I had the same problem. After a discussion with the JIM (JIRA Importer) developer, I used the YouTrack REST API and a Python script to generate JSON files, and then used JIM's JSON import.
With this solution you can import almost all fields from YouTrack: the standard ones, files with descriptions, links between issues and projects, and so on...
I don't know if I can push it to GitHub; I have to ask my boss, since I did it during work hours... But of course you can ask me if you want.
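I can't paste the real script, but the shape of it was roughly the following. The YouTrack URL, credentials, project id and field names are placeholders from memory; the output file follows the general projects/issues structure that JIM's JSON import expects:

```python
# Rough reconstruction of the approach: pull issues from the YouTrack REST
# API and re-emit them as a JSON file for JIM's JSON import.
# URL, credentials and field names below are placeholders.
import json
import requests

YOUTRACK = "https://youtrack.example.com"  # placeholder
PROJECT = "PRJ"                            # placeholder project id
AUTH = ("importer", "secret")              # placeholder credentials

resp = requests.get(
    f"{YOUTRACK}/rest/issue/byproject/{PROJECT}",
    params={"max": 1000},
    headers={"Accept": "application/json"},
    auth=AUTH,
)
resp.raise_for_status()
yt_issues = resp.json()  # exact shape depends on your YouTrack version

jira_issues = []
for issue in yt_issues:
    # Field names on the YouTrack side are illustrative only.
    jira_issues.append({
        "externalId": issue.get("id"),
        "summary": issue.get("summary", ""),
        "description": issue.get("description", ""),
        "issueType": "Task",
        "comments": [
            {"author": c.get("author"), "body": c.get("text")}
            for c in issue.get("comments", [])
        ],
    })

# Top-level structure the JSON importer expects: projects -> issues.
with open("jim-import.json", "w", encoding="utf-8") as f:
    json.dump({"projects": [{"name": PROJECT, "key": PROJECT,
                             "issues": jira_issues}]}, f, indent=2)
```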
The easiest approach is probably to export the data from YouTrack into CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.
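For example, a small cleanup pass could look something like this; the column names are made up, so map them to whatever your YouTrack export actually contains:

```python
# Hypothetical cleanup pass over a YouTrack CSV export so the JIRA CSV
# importer can map the columns; all column names here are placeholders.
import csv

with open("youtrack-export.csv", newline="", encoding="utf-8") as src, \
     open("jira-import.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["Summary", "Description", "Reporter", "Created"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "Summary": row.get("summary", ""),
            "Description": row.get("description", ""),
            "Reporter": row.get("reporterName", ""),
            # The CSV importer wants a consistent date format; adjust as needed.
            "Created": row.get("created", ""),
        })
```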
I am looking to control access to some large files (we're talking many GB here) using signed URLs. The files are currently restricted by LDAP Basic authentication (mod_auth_ldap), but I need to change this to verify a signature passed as a query parameter in the URL.
Basically, I just need to run a script to verify the signature and allow the request to proceed as if authentication had succeeded. My initial thought was just to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?"… and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?"
If this makes any sense, help would be much appreciated :)
P.S. I wasn't sure exactly what to search for on this (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
Have a look at crypto cookies/sessions in Apache. One way to do this is to put a must-have-a-valid-session restriction on that directory, forward anyone who doesn't have a valid one to a CGI script, authenticate there, and then forward them back to the actual download.
That way Apache can use its normal sendfile() and other optimizations.
However, keep in mind that a shell or Perl script ending with a simple 'execvp', 'exec cat' or something like that is not that expensive.
An alternative that is more URL-based is something like http://authmemcookie.sourceforge.net/.
Dw.
I ended up solving this with a CGI script, as mentioned… cookies weren't an option because we need to be able to support clients that don't use cookies (apt).
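For anyone curious, the script is essentially just the following shape: recompute an HMAC over the requested path and an expiry time, compare it with the signature from the query string, and then stream the file. The parameter names and key handling here are simplified placeholders rather than our production code:

```python
#!/usr/bin/env python3
# Simplified sketch of the CGI check: recompute an HMAC over the requested
# file path and an expiry timestamp and compare it with the "sig" query
# parameter. Parameter names, the secret and the file root are placeholders.
import hashlib
import hmac
import os
import sys
import time
import urllib.parse

SECRET = b"replace-with-a-real-shared-secret"
FILE_ROOT = "/srv/bigfiles"

def deny(reason):
    sys.stdout.write("Status: 403 Forbidden\r\nContent-Type: text/plain\r\n\r\n" + reason + "\n")
    sys.exit(0)

params = urllib.parse.parse_qs(os.environ.get("QUERY_STRING", ""))
path = params.get("file", [""])[0]
expires = params.get("expires", ["0"])[0]
sig = params.get("sig", [""])[0]

expected = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
if not hmac.compare_digest(sig, expected):
    deny("bad signature")
if not expires.isdigit() or int(expires) < time.time():
    deny("link expired")

full = os.path.realpath(os.path.join(FILE_ROOT, path.lstrip("/")))
if not full.startswith(FILE_ROOT + os.sep) or not os.path.isfile(full):
    deny("bad path")

# Stream the file in chunks so we never hold many GB in memory at once.
size = os.path.getsize(full)
sys.stdout.write("Content-Type: application/octet-stream\r\n"
                 f"Content-Length: {size}\r\n\r\n")
sys.stdout.flush()
with open(full, "rb") as f:
    while chunk := f.read(1 << 20):
        sys.stdout.buffer.write(chunk)
```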
We have our production server running our website, and a test server that has exactly the same data but with code changes for some new functionality. The web app has over 500 pages.
Is there any program that can
Log in to the test site
Crawl through each page and save it as HTML
Compare it with the same page saved from the live site?
This way we can make sure that new features that we add to our test site will not break the live site when code updates are applied to production.
I am currently trying to use the WinHTTrack website copier and then comparing the test and live folders with a code comparison tool like Beyond Compare. This works OK, but a lot of files show up as changed simply because of the domain name differences.
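(To illustrate the direction I have in mind: a rough sketch that fetches the same path from both servers, strips the domain names out, and diffs the rest. The hostnames and the page list are placeholders, and it doesn't handle the login step yet.)

```python
# Rough sketch: fetch the same page from the live and test servers,
# normalize the domain names away, and diff whatever is left.
# Hostnames and the page list are placeholders.
import difflib
import urllib.request

LIVE = "https://www.example.com"
TEST = "https://test.example.com"
PAGES = ["/", "/products", "/about"]  # in reality: the 500+ page list

def fetch(base, path):
    with urllib.request.urlopen(base + path) as resp:
        html = resp.read().decode("utf-8", "replace")
    # Strip the hostnames so identical pages don't differ just by domain.
    return html.replace(LIVE, "").replace(TEST, "")

for path in PAGES:
    live_html = fetch(LIVE, path).splitlines()
    test_html = fetch(TEST, path).splitlines()
    diff = list(difflib.unified_diff(live_html, test_html,
                                     fromfile="live" + path,
                                     tofile="test" + path, lineterm=""))
    if diff:
        print("\n".join(diff[:40]))  # show the first part of each diff
    else:
        print("OK:", path)
```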
Looking forward to ideas / solutions for this problem.
Regards
Have you looked at using Watir for this? It's not exactly what you are looking for, but it might give you more granularity in your tests and ensure the site is functionally identical, rather than getting caught up on changing GUIDs, timestamps and all the other things that tend to change from day to day across any significantly sized website as part of its standard functionality.
Apparently you can't make consistent, reproducible builds in your project, can you? I would recommend moving towards that in the long run; it will save you a lot of headaches. That way you would know exactly what was deployed to which server and when, so there would be no more need to bend over backwards to get the deployed sources back like this...
I know this is not a direct solution to your problem... but maybe it is worth weighing whether you would save more in the long run by investing the effort in your build process now, instead of implementing this workaround (and then improving your build process anyway, because one day you will almost surely need to).
wget has a --convert-links option, and there are also some options for preserving cookies that might let you do the crawl while logged in: http://drupal.org/node/118759#comment-664498
Use an offline downloader to download all the files to your computer from both sources, then compare the folder contents using a free tool like Total Commander.
EDIT
Load both of your sources into a CVS repository and compare them there.