GitHub Desktop - How to keep track of the code history of a local and a remote repo created separately?

I want to keep track of two repos with the SAME NAME that were created SEPARATELY: one on my local machine and one on the remote.
But whenever I press "Publish repository" after "Commit to production", a red warning appears saying "Repository creation failed. (name already exists on this account)".
So my question is: is there any way for me to keep track of the code history of these two repos?
Thank you.

I'm going to assume that when you say "repo" you mean "branch" (a big stretch, I know), and that the situation is something like this. GitHub has:
A -- B -- C (mybranch)
But you on your local machine have this:
X -- Y -- Z (mybranch)
And I take it that what you want is this, on both:
A -- B -- C -- X -- Y -- Z (mybranch)
The way to obtain that is as follows:
git fetch
git switch mybranch
git rebase --keep-empty origin/mybranch
git push origin mybranch
You will observe that I have not limited myself to the restriction that this should be done using GitHub Desktop alone. I don't know whether it has the power to do this (and I don't really care).
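One caveat: since the local repository was created separately rather than cloned, it may not have the GitHub repository configured as a remote yet, in which case git fetch has nothing to talk to. A minimal setup sketch, with a placeholder URL you would replace with your own account and repository:
git remote add origin https://github.com/<user>/<repo>.git   # hypothetical URL
git fetch origin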


git revert: Unable to undo an individual commit even in a simple case

In order to try and understand git revert, I made a series of 4 simple commits -- A, B, C, D -- to a text file, foo.txt, with the intention of later undoing only commit B and leaving commits A, C, D intact.
In each commit, I added one new line to the file, emulating either a feature added or a bug introduced.
After commit A, contents of foo.txt:
Feature A
After commit B, contents of foo.txt: (Here, I introduce a bug that I'll later try to undo/revert.)
Feature A
Bug
After commit C, contents of foo.txt:
Feature A
Bug
Feature C
After commit D, contents of foo.txt:
Feature A
Bug
Feature C
Feature D
Now, to undo the effects of Commit B (which introduced the bug), I did:
git revert master^^
What I expected was a new Commit E that removed the Bug line from the file, leaving the file contents as:
Feature A
Feature C
Feature D
However, I got the error:
error: could not revert bb58ed3... Bug introduced
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'
with the contents of the file following the unsuccessful git revert being:
Feature A
<<<<<<< HEAD
Bug
Feature C
Feature D
=======
>>>>>>> parent of bb58ed3... Bug introduced
(bb58ed3 is the hash of commit B, and 'Bug introduced' is that commit's message.)
Question:
What is going on here?
If even such a simple, one-line commit cannot be reverted/undone automatically and requires manual resolution from me, then how could I revert a much more complex commit whose original developer may even be someone else?
Is there a particular set of cases to which git revert is better suited?
git sees each commit as a changelist (I simplify things here) and tries to "unapply" that when you call git revert. Each changelist also includes some context to ensure that the change makes sense. For example, if the change we want to make is "add return after line 10", it's more likely to break things than "add return after line 10, if lines 7-9 contain X, Y, and Z". So, we can describe your second commit as (again, simplifying this a little here):
Assuming that the first line of the file is Feature A.
Assuming that there is no second line.
Make the second line contain Bug.
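In diff terms, commit B is roughly the following hunk (a sketch, not actual git output); the unchanged Feature A line is the context git will later look for:
@@ -1 +1,2 @@
 Feature A
+Bug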
After you've added a few more lines, the context of the Bug line has changed significantly, so git revert is not sure whether it can simply remove the line. Maybe the newly added lines actually fixed the bug. So it asks you to explicitly resolve the conflict of contexts.
As for your questions 2-3: yes, git revert is usable in cases where you're reverting a piece of a file that has not been changed since. For example, if the bug was introduced in the foo function, but only the bar function (located ten lines below) has been modified since then, git revert is likely to revert the change automatically, because it sees that the context is unchanged.
UPD: here is an example of why context matters even if you're trying to revert your own code:
Commit A (mind the mistype):
int some_vlue = 0;
read_int_into(some_vlue);
some_vlue = some_vlue++;
Commit B (bug introduced):
int some_vlue = 0;
some_vlue = 123;
some_vlue = some_vlue++;
Commit C (name fixed):
int some_value = 0;
some_value = 123;
some_value = some_value++;
Now, in order to revert commit B, one has to have some context: we cannot simply replace some_value = 123 with the older line read_int_into(some_vlue) - that would be a compilation error, since by commit C the variable has been renamed to some_value.
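To actually get out of the conflict state shown in the question, follow the hints: edit foo.txt down to the three desired lines, then stage the file and finish the revert (git revert --continue exists on reasonably recent git; plain git commit also works, as the hint says):
# after editing foo.txt to contain: Feature A / Feature C / Feature D
git add foo.txt
git revert --continue    # or: git commit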

Why are my flows not connecting?

Just starting with noflo, I'm baffled as to why I'm not able to get a simple flow working. I started today, installing noflo and core components following the example pages, and the canonical "Hello World" example
Read(filesystem/ReadFile) OUT -> IN Display(core/Output)
'package.json' -> IN Read
works fine so far. Then I wanted to change it slightly, adding "noflo-rss" to the mix and changing the example to
Read(rss/FetchFeed) OUT -> IN Display(core/Output)
'http://xkcd.com/rss.xml' -> IN Read
Running it like this:
$ ./node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --batch --register=false --debug
... but no cigar -- it just sits there with no output at all.
If I stick a console.log into the source code of FetchFeed.coffee
parser.on 'readable', ->
  while item = @read()
    console.log 'ITEM', item   # hack added here
    out.send item
then I do see the output and the content of the RSS feed.
Question: Why does out.send in rss/FetchFeed not feed the data to the core/Output for it to print? What dark magic makes the first example work, but not the second?
When running with --batch, the process will exit once the network has stopped, as determined by there being no open connections between nodes.
The problem is that rss/FetchFeed does not open a connection on its outport while it waits for the feed, so the connection count drops to zero and the process exits.
One workaround is to run without --batch. Another one I just submitted as a pull request (needs review).
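For the first workaround, that is simply the invocation from the question without the --batch flag, so the process keeps running while the feed is fetched:
$ ./node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --register=false --debug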

TFS RM - Release - Commits details

Does anyone know how the commits shown for a release are determined?
From my tests, it appears to take into consideration the last commit, even if that commit was associated with a build that was triggered a couple of builds ago.
The screenshots below show that the last three releases all use changeset 47512, which is quite confusing to me:
Release 10-01:
Release 09-01:
Release 08-01:
The Commits tab of a release shows the difference with respect to the last release. More precisely, it shows the difference between the builds used in the two releases.
Let me explain with an example. Say there are Release1 (with buildA having commits A, B, C), Release2 (with buildB having commits D, E, F), Release3 (with buildC having commits G, H, I), and Release4 (also with buildC having commits G, H, I).
Here the Commits tab of Release3 will show the diff w.r.t. Release2, so it shows G, H, I.
Similarly, the Commits tab of Release4 shows the diff w.r.t. Release3, and it shows 'No diff', as the build used is the same.

Avoid hanging while compiling Oracle package

We have a situation where compiling a package takes forever!
If we compile the package under a new name, then it works!
From what I understand, compiling hangs because of locks on the package.
Something like this might help identify the problem:
SELECT s.sid,
       l.lock_type,
       l.mode_held,
       l.mode_requested,
       l.lock_id1
  FROM dba_lock_internal l,
       v$session s
 WHERE s.sid = l.session_id
   AND UPPER(l.lock_id1) LIKE '%PROCEDURE_NAME%'
   AND l.lock_type = 'Body Definition Lock';
Also this:
SELECT x.sid
  FROM v$session x,
       v$sqltext y
 WHERE x.sql_address = y.address
   AND y.sql_text LIKE '%PROCEDURE_NAME%';
Is it only the 'Body Definition Lock' that prevents compiling?
Are there any other lock types that prevent compiling?
How can I avoid the locks and get the compile through? Only by killing the sessions, or is there something else?
You might want to look into Edition-based Redefinition, which will let you create a new edition, compile the new versions of the packages there without being blocked by sessions currently using them, and enable the new edition later on.
Basically, if someone or something else (e.g. a scheduled job) is executing the package, then you won't be able to perform the recompile. To get around this, you need to identify the locking session and kill it (see the sketch below). Killing the session is the only option we have; dbms_lock is only useful for locks created by dbms_lock. You cannot just "unlock" some object - the lock is there for an extremely relevant reason.
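A minimal sketch of that identify-and-kill step, assuming a package named MY_PACKAGE (a placeholder) and the privileges to query the DBA views and kill sessions:
SELECT s.sid, s.serial#, s.username, s.program
  FROM v$session s
  JOIN dba_ddl_locks d ON d.session_id = s.sid
 WHERE d.name = 'MY_PACKAGE';          -- hypothetical package name

-- then, for each blocking session found above:
-- ALTER SYSTEM KILL SESSION '<sid>,<serial#>';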
Another lock you may come across is a Dependency Lock. Consider:
Procedure-1 from Package A contains a call to Procedure-2 from Package B.
Procedure-1 from Package A is running.
Then you may hit a lock while compiling Package B.

julia on PBS cluster: what to give to addprocs()?

I'm trying to set up a cluster across machines on a PBS-managed cluster. I'm perfectly able to compute within one node by saying julia -p 12 (after having reserved one node with 12 CPUs).
I understand that to use several machines, I have to add them to the master process with addprocs. I was able to do that on a different cluster (SGE), but on this one something is going wrong.
You can see everything I'm doing, including submit scripts etc, on this branch of a github repo.
To get a list of machines, I parse the PBS_NODEFILE, which, for a submit script with the option
#PBS -l nodes=2:ppn=12 # give me 2 nodes with 12 processors each
looks something like this:
red0004
red0004
...
red0004
red0347
...
red0347
I parse this file with bind_pe_procs() in sge.jl in the repo and give a vector of machine names to addprocs. When I submit this, it fails; I put up a gist with the resulting SSH error. I don't know what it means.
Does this have to do with a system setting, i.e. do I have to talk to the sysadmin about SSH between machines? What are the right questions to ask?
I am also unsure about what exactly I have to give to addprocs(). I don't want to add the master process (I don't want worker 1 SSHing into itself?), so I exclude ENV["HOST"] (= node001) from my list. But what about all the processors with the same hostname, node002? Do I list all of them,
machines = [ "red0347" for i=1:12]
or just once
machines = ["red0347"]
in addprocs(machines)?
thanks!
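For reference, a minimal sketch of the nodefile parsing under discussion, assuming password-less SSH between the nodes works and that gethostname() returns names in the same form as the nodefile (it may return FQDNs on some clusters). Duplicate hostnames are intentional: addprocs starts one worker per entry, i.e. one per reserved CPU.
using Distributed                                      # Julia >= 0.7; on older versions addprocs is in Base
machines = map(strip, readlines(ENV["PBS_NODEFILE"]))  # one line per reserved CPU
master   = gethostname()                               # e.g. "red0004"
workers  = filter(m -> m != master, machines)          # drop the master node's own slots
addprocs(workers)                                      # one worker per remaining entry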