Assume I have a thesis and want to give the audience the possibility to download the code and test it.
Is there a platform to upload it to professionally that will also keep it there permanently (of course it should not be deleted within a couple of months)?
Thanks for your help...
You can use tools such as GitHub or Bitbucket. These let you upload code and also give you version control. Users can download your code directly and use it if they need to.
Is there a way to do this easily? Can I keep the document's version history with the document itself, as a .gdoc or whatever format, or am I resigned to downloading all of its past revisions separately?
For context: I have a document I've been editing and revising over the years for my own medical history, list of meds, etc., and I've been using Google Docs to do this because it was convenient and I didn't have to pay for Microsoft Office or install a separate word processor on my PC. Recently I purchased Dropbox Personal for my cloud storage needs.
I want to do the following: take the Google Doc, save it as a .gdoc (which isn't an option in the File menu??), and move it over to Dropbox's Vault as an editable copy with its revision history intact.
Otherwise, what I have done (before I even realized revision history was a thing) is just copy-paste its current version into a new .gdoc in Dropbox Vault.
So, is that possible? And if so, how, and as easily (lazily) as I possibly can? Also, is this even the right place to ask? Apologies if it isn't. I didn't see much else about this specific issue anywhere... (also lazy)
Thanks!
EDIT:
I am by no means a coder in any sense. I'm a full-time elderly caretaker, just a guy with a specific, niche, technical problem, and I thought this was the first place to ask without having to go through tech support with Google chat etc. It might also help other people who like seeing how their documents have changed over the years, history fans etc. At the end of the day it's a programming/coding issue that could be resolved some way, somehow... Right?
If I can add pictures here for context, LMK.
Thanks :)
The .gdoc file format is only accessible through Google Docs, which runs on the web. Downloading the file to local storage means opening it on your device with a local word processor (Microsoft Office, LibreOffice, or another desktop editor), which is why the .gdoc format is not offered when you download. That is also why you won't be able to open it from your Dropbox.
The version/revision history in Google Docs is tied to that specific file with its unique ID. When you download the file, the version history (which lives on the web) does not come along with the downloaded copy, and even when you make a copy within Google Docs the version history is not copied, so that won't be an option either.
It looks like you'll have to stick to manually copying or making a backup of the current version of the file before editing, since the version history is only kept for a period of 30 days or the last 100 versions, unless a version is manually set to "Keep forever".
Google Drive version history: https://googledrivepro.com/google-drive-version-history/
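If you ever want to automate those backups rather than copy-pasting by hand, one option (not a way to carry history into Dropbox, just a way to save the revisions Google still keeps) is the Google Drive API, which can list a file's revisions and export each one. This is only a rough sketch under a few assumptions: you'd need to set up a Google Cloud project with the Drive API enabled and complete the OAuth flow first, and the file ID and file names below are placeholders.

```python
# Minimal sketch: save every remaining revision of one Google Doc as a separate .docx file.
# Assumes a Google Cloud project with the Drive API enabled and OAuth credentials already
# obtained (token.json as produced by the standard google-api-python-client quickstart).
import requests
from googleapiclient.discovery import build
from google.oauth2.credentials import Credentials

FILE_ID = "your-google-doc-id-here"   # placeholder: the Doc's ID from its URL
creds = Credentials.from_authorized_user_file("token.json")  # refresh first if expired

drive = build("drive", "v3", credentials=creds)

# List the revisions Google still keeps for this file.
revisions = drive.revisions().list(
    fileId=FILE_ID,
    fields="revisions(id, modifiedTime, exportLinks)",
).execute().get("revisions", [])

for rev in revisions:
    # Native Google Docs revisions expose exportLinks instead of raw content.
    url = rev.get("exportLinks", {}).get(
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document")
    if not url:
        continue
    resp = requests.get(url, headers={"Authorization": f"Bearer {creds.token}"})
    name = "backup-" + rev["modifiedTime"].replace(":", "-") + ".docx"
    with open(name, "wb") as f:
        f.write(resp.content)
```

The resulting .docx files can then be dropped into Dropbox Vault like any other documents, though they remain separate snapshots rather than a single file with built-in history.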
I used to work directly on the live website when editing a site (I work alone), but some people have told me "that's the old way". I'm willing to evolve and I like to work properly, but how do I avoid losing time doing this?
First, it means I need to get a copy of the website onto my computer: copy the files, then dump and restore the database, which is the first waste of time. If my customer adds an extension to the website in the meantime (for example, in WordPress), my modifications could be affected, so I need to add it to my local copy too. If I need to modify the DB, I will need to do it on the local copy as well.
Secondly, if I want to show work in progress to my customer, I need to apply all my modifications to the live website and check that everything works: still a waste of time.
And finally, when everything is OK, I need to update the live website again, both files and DB.
So, there are two options:
this is not the correct way to work, and there are tools to do all of that transparently (I hope so)
this is not a waste of time but time that's needed to work properly (then I understand why agencies charge high prices, and I'll keep my method)
It depends on the complexity of the project and the size of your team.
One of the major risks of working on a live site is introducing bugs into production. You also want confirmation of the functionality you developed from QA or your customer before your users get access to it.
Basically, you want to make sure your new code does not break the live site, so working on a local instance helps with that, and you can also deploy your changes to a live test site for approval and QA.
Also, if you are working with a larger team, working directly on the live site just won't scale, and the risk of introducing bugs is even higher.
You could consider using Docker to simplify development on your local machine.
I have some scripts running from a Google Sheet that get data from BigQuery. However, in order to make them run, I need to manually enable the API every time for a given sheet.
So the question is: how can I enable the API from within the code, so that if I share the GSheet or make a copy I don't have to go to the script editor and enable the API from there?
Thanks
I am a huge fan of this particular use of the Google ecosystem, so I'm happy to help get others up and running using GSheets with BigQuery! Hopefully it is working well for you!
When sharing the sheet with others, there is no need to alter anything in the script editor at all. The scripts should run and query BigQuery without issue; this has been my experience at least. The obvious caveat to this is that the users you share it with must have access to the Google Developer Project that the BigQuery instance is associated with.
However, when copying the sheet, I do not believe it is possible to have it replicate the connection. This is because when the file is copied, it becomes associated with a new Google Developer Project. Thus, you have to go into the script editor, then go to Resources > Developers Console Project and change the project listed to the one in which you have BigQuery enabled.
Hopefully this helps! Sorry I don't have better news for you!
After using YouTrack for quite a while, my organization is considering a move to JIRA (for many reasons). However, JIRA doesn't seem to include a YouTrack importer/migration out of the box (though there seem to be plenty of importers/migrations in the other direction).
Has anyone migrated from YouTrack to JIRA and can share their experience?
Edit:
To anyone who might have this problem later, my final solution ended up something like this:
transfer all "basic" data by hand (user accounts, basic project setup etc)
write a small C# program using the atlassian sdk and the youtrack sdk that transfers from one to the other (creating empty placeholder issues if issues was missing due to someone deleting them in youtrack in order to keep numbering).
This approach worked good enough and I managed to transfer pretty much all data without any loss of any very important data (though of course all timestamps are messed up now, but we saw that as an acceptable loss).
Important to know is that youtrack handles issues moved from one project to another a bit counter-intuitive (they still show up in their first project even when they're moved away from there, but they have an issue id from their new project - a slight wtf when I ran into that the first time).
Also, while the atlassian sdk did allow me to "spoof" the creator of an issue (that is, being logged in as used A and creating an issue while telling the system that it's actually user B who is creating this issue) it does not allow you to do this with comments. So in order to transfer those properly I had to actually loop through the comments and log in with the corresponding new user and post the comments.
Also, attachments from youtrack was a bit annoying to download, so I ended up having to download those "by hand". :/
But all in all, it was relatively pain-free. Some assembly required, some final touch-ups required, but it was all done within a couple of days.
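For anyone who can't use C# and the Atlassian SDK, the "create an issue on behalf of another user" step might look roughly like this against JIRA's REST API instead. This is only a sketch: the URL, credentials, project key, issue type, and user names are placeholders, and setting the reporter only works if the importing account has the Modify Reporter permission.

```python
# Sketch of creating a JIRA issue "as" another user via the REST API
# (a Python stand-in for the C#/Atlassian SDK approach described above).
# URL, credentials, project key and user names are placeholders.
import requests

JIRA_URL = "https://jira.example.com"
AUTH = ("import-bot", "secret")            # the account performing the import


def create_issue(project_key, summary, description, reporter):
    payload = {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Task"},
            "summary": summary,
            "description": description,
            # The original YouTrack author; requires Modify Reporter permission.
            "reporter": {"name": reporter},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]


# Example: recreate a deleted YouTrack issue as an empty placeholder to keep numbering.
print(create_issue("PROJ", "(placeholder for deleted issue)", "", "jdoe"))
```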
I had the same problem. After a discussion with the JIM (JIRA Importer) developer, I used the YouTrack REST API and a Python script to produce JSON files, then used the JIM JSON import.
With this solution you can import almost all fields from YT: the standard ones, files with descriptions, links between issues and projects, and so on...
I don't know if I can push it to GitHub; I have to ask my boss, since I did it during my work hours... But of course you can ask me if you want.
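In the meantime, a rough outline of the approach looks something like the sketch below. The endpoint path, field names, and JSON layout are assumptions based on the newer YouTrack REST API and the JIM JSON import format, so verify them against your YouTrack version and the importer documentation.

```python
# Rough sketch of the "YouTrack REST API -> JIM JSON import" approach.
# Endpoint, field names and JSON layout are assumptions; verify against your versions.
import json
import requests

YOUTRACK_URL = "https://youtrack.example.com"   # placeholder instance
TOKEN = "perm:xxxx"                              # a YouTrack permanent token
PROJECT = "DEMO"

resp = requests.get(
    f"{YOUTRACK_URL}/api/issues",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={
        "query": f"project: {PROJECT}",
        "fields": "idReadable,summary,description,reporter(login)",
    },
)
resp.raise_for_status()

issues = []
for item in resp.json():
    issues.append({
        "externalId": item["idReadable"],
        "summary": item.get("summary") or "(no summary)",
        "description": item.get("description") or "",
        "reporter": (item.get("reporter") or {}).get("login", "unknown"),
    })

# JIM's JSON import expects a top-level "projects" list, each carrying its issues.
payload = {"projects": [{"name": PROJECT, "key": PROJECT, "issues": issues}]}

with open(f"{PROJECT}.json", "w") as f:
    json.dump(payload, f, indent=2)
```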
The easiest approach is probably to export the data from YouTrack into CSV and use the JIRA CSV importer. You may have to modify some of the data to fit the format the CSV importer expects.
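That modification step can be as simple as remapping column names before the import. The following is only a sketch: the column names on both sides are hypothetical and need to be adjusted to your actual YouTrack export and JIRA field configuration.

```python
# Sketch of massaging a YouTrack CSV export into columns the JIRA CSV importer can map.
# Column names on both sides are hypothetical; adjust to your export and JIRA setup.
import csv

COLUMN_MAP = {            # YouTrack column -> JIRA importer column (assumed names)
    "Issue Id": "Issue Key",
    "Summary": "Summary",
    "Description": "Description",
    "Reporter": "Reporter",
    "Created": "Date Created",
}

with open("youtrack-export.csv", newline="", encoding="utf-8") as src, \
     open("jira-import.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow({jira: row.get(yt, "") for yt, jira in COLUMN_MAP.items()})
```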
I want to use a file-sharing service to keep certain files up to date and consistent across multiple instances of my application on multiple computers. For example, imagine a multiplayer game that stores all the players' positions in a text file and uses something like Dropbox to keep that text file in sync across all the running applications: each instance writes its own player's position into the file, and the rest of the applications update accordingly. This is only an example and is not what I actually intend to build with this technology. What I want to do does not rely on sharing data very quickly, only on periodically downloading and updating the text file.
I was wondering how I might do this using the Dropbox API for Objective-C without prompting the user for any Dropbox username/password: just store a single Dropbox account's login information, log into it automatically, and update/download the file stored there.
From what I have found through experimenting, Dropbox prompts users for their passwords via a web browser and is designed to accommodate multiple accounts, whereas I only need to accommodate the "server" account.
So, is there any way to do this sort of thing using the Dropbox API, or should I use something else? Or do I need to work out how to write my own server? Using some sort of file-sharing API seems a lot easier to me than writing an actual server.
Thanks for any help,
Ben
You might think about using Google App Engine (GAE). I had a similar requirement recently, and I think this is a good option when you want centralized data. Plus you can do the no-browser account login by using your own custom authentication, or I think it's even possible via OAuth? It depends on how sensitive the data is, I guess. I just rolled my own.
From my research I found that using Dropbox as a server has some scalability issues, since you'll be limited to maybe 5,000 calls per day (source). It's built on Amazon S3, so you could also look at using that directly.
GAE raises that limit to 675,000 calls per day, which can be increased up to 91 million for free.
https://developers.google.com/appengine/docs/quotas
I did find an open-source project for doing this in Java; alternatively, you could look at a Python example.
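To give a feel for how little code the centralized-data idea takes, here is a minimal sketch of a single shared-state endpoint on the classic GAE Python runtime (webapp2 + ndb). The handler, model, and shared-secret "authentication" are all made up for illustration; a real app would want proper auth.

```python
# Minimal sketch of "centralized data on GAE": one endpoint that stores a small
# blob of shared state (e.g. player positions) in the datastore.
# Targets the classic Python runtime (webapp2 + ndb); names and the shared-secret
# check are placeholders, not a recommended authentication scheme.
import webapp2
from google.appengine.ext import ndb

SHARED_SECRET = "replace-me"   # stand-in for your own custom authentication


class SharedState(ndb.Model):
    content = ndb.TextProperty(default="")


def get_state():
    # A single well-known key so every client reads/writes the same entity.
    return SharedState.get_or_insert("game-state")


class StateHandler(webapp2.RequestHandler):
    def get(self):
        # Clients poll this to pick up the latest shared file contents.
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write(get_state().content)

    def post(self):
        # Clients push updates; reject callers without the shared secret.
        if self.request.headers.get("X-Auth") != SHARED_SECRET:
            self.abort(403)
        state = get_state()
        state.content = self.request.body.decode("utf-8")
        state.put()
        self.response.write("ok")


app = webapp2.WSGIApplication([("/state", StateHandler)])
```

Each instance of the app would then GET /state periodically and POST its own updates, which matches the "periodically download and update the text file" requirement without embedding any Dropbox credentials in the client.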
I've written a daemon that continuously checks for updated files and syncs them. I wrote it for my own file manager iOS app. You can find the implementation here:
https://github.com/H2CO3/MyFile/tree/master/DropboxDaemon
I'm personally not an iOS developer but I came across this question while looking for something else and thought I would offer up another potential solution to the OP's question.
Microsoft just released something called Azure Mobile Services which supports iOS development (among other platforms). It's basically a convenient way to set up a back end system complete with push notifications, authentication, etc. without rolling your own. You don't need to know anything about Azure or servers as the setup process walks you through most of it. It is new so keep that in mind, but it looks promising for situations like this.
Here's a 10 minute video explaining how to use it with an iOS developed app along with links to more documentation:
http://channel9.msdn.com/posts/iOS-Support-in-Windows-Azure-Mobile-Services/
Hope this helps.