I'm afraid I know the answer to this, but I'll ask on the long-shot chance that I'm wrong:
I've been doing some freelance work creating an iPhone application for a company. They've created their own developer account and added me as a team member with "admin" rights. That seems to be the highest assignable level (the only higher level being "agent", which belongs only to whoever signed up for the account). Yet I don't have an option under the provisioning portal to create a distribution certificate or profile.
Is there any way to create these myself without having to ask my client for their primary login? They're not particularly tech savvy, so it would be difficult to walk them through the process of creating the necessary certificates (and it would require me giving them a certificate request from my computer, etc. etc.). But it seems like there should be some way to create a distribution build without "agent" rights, right? Could Apple seriously expect only one person from a company to do all the building and uploading of apps to the store?
You are right. Only the agent can create a distribution profile and a distribution certificate. There is no way around that. The easiest thing to do is to work with him/her to create the distribution key and certificate and install a copy of both on your machine as well. They are also the only one who can submit the binary through iTunes Connect.
It is annoying, but that's the way the final build has to be done: by the team agent. I ended up getting my boss's login info. Switching team agents is also hard. IIRC, you can't be the team agent on two separate accounts.
I have been working on linking my AD to G Suite with automatic synchronization. I'm putting this here because I have had a hard time figuring everything out. I am still not at the end of this procedure, and I would appreciate it if skilled people would contribute to help me, and I guess many others as well, on this topic.
I downloaded the GCDS tool (4.5.7) and installed it on a member server. I tried to go through the steps and failed at all but the first one, authenticating to Google.
Learnt: it is a Java (Sun) based product, and when it comes to authentication or SSL it will throw errors that need to be sorted out.
Step 1, Google auth - done and very simple as long as you can log on to your GAE account.
Step 2, LDAP config... this was tricky.
I created a service account to use
Learnt:
You need the sAMAccountName to match the display name and name as well; only this way could I authenticate.
In most cases you don't need any admin rights; a domain user should be able to read the DN structure from LDAP.
I have the OU structure, but I needed LDAP working on the DC (this works, somehow); a quick connection check is sketched at the end of this step.
Learnt:
Simple connections go through port 389;
SSL would use port 636;
in most cases GCDS only uses Simple authentication!
Learnt:
With port 389, the domain group policy needed to be changed so that LDAP signing is not required (Domain controller: LDAP server signing requirements set to None!) in order to be able to log on - this one is working and good for DEVSERV.
Question: should I use this for PRODSERV, or do I need to aim for SSL?
Learnt:
With port 636 (SSL) you need a certificate
Question: I tried to add a self-signed certificate based on the following article and added it to the trusted root store, but Google cannot see it?
The base DN can be read out through LDP.EXE (the built-in LDAP browser from Microsoft).
Learnt:
You can add the OU you want; it doesn't have to be the root of the tree.
Question: does that mean you have implemented extra security?
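Before moving on, here is a quick way to sanity-check the LDAP side of Step 2 from the member server, independent of GCDS. This is only a sketch using the third-party Python ldap3 library (not part of GCDS); the host, service account, and OU names are placeholders for whatever your domain uses.

    # pip install ldap3
    from ldap3 import Server, Connection, SIMPLE, ALL

    # Simple bind on port 389, the same thing GCDS does (DEVSERV-style setup).
    # For port 636, add use_ssl=True and make sure the DC's certificate is trusted.
    server = Server('dc01.example.local', port=389, get_info=ALL)
    conn = Connection(server,
                      user='EXAMPLE\\gcds-sync',   # the sync service account
                      password='********',
                      authentication=SIMPLE)
    if not conn.bind():
        raise SystemExit('Bind failed: %s' % conn.result)

    # The naming contexts show the base DN, the same information LDP.EXE gives you.
    print(server.info.naming_contexts)

    # Confirm the service account can actually read the OU you plan to sync.
    conn.search('OU=Staff,DC=example,DC=local', '(objectClass=user)',
                attributes=['sAMAccountName', 'displayName'])
    for entry in conn.entries:
        print(entry.sAMAccountName, entry.displayName)

If the simple bind only succeeds after the signing-requirements policy change described above, that confirms the GPO was what was blocking GCDS.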
Step 3, defining what data I can collect. I picked OUs and user profiles.
Learnt:
Profiles will send extra information to Google, such as job title, phone, etc. I only wanted those for the company signature... Well, it is still not clear whether that can be done. If it is not possible, I can't see the reason why I should disclose unwanted information to be stored on another server.
Question: can the job description be included in the Google Mail signature?
I keep adding my findings as I work through this, but I would appreciate any input from people who have managed to set it up.
Step 4, searching in the organisational unit - confusing again, but it is done. (More to follow.)
When connecting to the mail server via the email client, we are forced to use SSL. Yet, we only have a self-signed certificate which the IT dept wants us to trust.
What are the real security repercussions?
Assuming the root key doesn't leak, which would break down the whole company CA system, the only issue specific to this use of a self-signed certificate is distribution: a certificate authority's certificate is normally already on any computer that needs a connection to the server, while this certificate needs to be distributed manually.
If a new computer needs a connection to the server and does not have the certificate, there is no real security if you connect anyway and just accept the certificate. For it to be of any use, it needs to already exist on the computer.
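To make that concrete: rather than blindly accepting an unknown certificate, you can fetch it and compare its fingerprint against one your IT department gives you over a separate channel. A minimal sketch in Python using only the standard library; the host, port, and expected fingerprint are placeholders.

    import hashlib
    import ssl

    # Fetch the server certificate without validating it (it's self-signed anyway).
    pem = ssl.get_server_certificate(('mail.example.com', 993))
    der = ssl.PEM_cert_to_DER_cert(pem)
    fingerprint = hashlib.sha256(der).hexdigest()

    # Compare against the fingerprint IT published out of band (placeholder value).
    EXPECTED = 'sha256-fingerprint-provided-by-your-it-department'
    print('certificate matches' if fingerprint == EXPECTED
          else 'MISMATCH - do not accept this certificate')

Once it checks out, import the certificate into the system trust store so the comparison never has to be done by hand again.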
Just as the other two have said, it basically comes down to how much you trust your company, which is a factor anyway, so it's likely not a big deal (though they could easily get a free SSL certificate from StartCom, so I have no idea why they would insist on a self-signed one).
But as Paul outlined with his example, it also matters whether they have you install their own root certificate on your computer; if they don't, and instead ask you to click through warning boxes each time, I would suggest speaking up and emailing a link to this page to your company's IT department.
This is a more complex question than it might appear. The short answer is that if they are following best practices with regard to protecting their self-signed root CA and deploying the root to client machines (incredibly important!) then there is no additional risk beyond that normally incurred with X509 PKI.
The longer answer is, as usual, "it depends what corners they cut". Here's a scenario under which the risk is higher...
Your company does not install the self-signed root directly on employee laptops. When asked about this oversight they say "that's a real PITA in a heterogeneous computing environment". This (depending on client and other factors) forces you to choose to click "continue" through an untrusted dialog each time you launch the client. Big deal, right? It's still encrypted. One day you're in Madrid, enjoying room service in a five star hotel (as a rising star within the company you get the poshest assignments) and you open your laptop up to get some work done...

You always connect to the VPN, but you left your mail client open last time you put the laptop to sleep and it throws up the same annoying warning it always shows when you open it. You swiftly click through the dialog because you've been trained by your IT dept to do so. Sadly, this time it's a different cert. If you had inspected it directly (you have a photographic memory and love doing comparisons of base64 encoded public keys) you wouldn't have clicked through, but you were in a hurry, and now the unscrupulous hotel manager running the hotel captive portal knows your email login and password (p#ssw0rd1). Unfortunately, you use p#ssw0rd1 on all sorts of other websites, so you find your reddit, amazon, and even stackoverflow accounts swiftly compromised. Worse yet, your boss gets an email from "you" with an obscene rant and notice of resignation. All because you clicked through that innocuous dialog you've clicked through a thousand times before.
This (unlikely) scenario is possible when blindly clicking through "untrusted cert" dialogs. Users can (hopefully) be trained to avoid this by telling them not to click through such dialogs and by adding the corporate CA to the trusted roots list of your OS/browser or using a trusted public CA.
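For completeness, "adding the corporate CA to the trusted roots" looks like this from a mail client's (or script's) point of view: the company root certificate is loaded explicitly and normal verification does the rest, so there is never a dialog to click through. A sketch with Python's standard library; the host name and CA file name are placeholders.

    import imaplib
    import ssl

    # Trust the company's root certificate explicitly; anything else is rejected.
    ctx = ssl.create_default_context(cafile='corp-root-ca.pem')

    # The TLS handshake (and therefore certificate verification) happens here,
    # so a swapped certificate like the hotel's fails before any login is sent.
    imap = imaplib.IMAP4_SSL('mail.example.com', 993, ssl_context=ctx)
    imap.login('user@example.com', 'not-p#ssw0rd1')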
It's wrong if you are expected to do anything about it such as click dialog boxes accepting certificates.
If they are doing their job properly they should distribute the certificate internally such that all required network elements already trust it, e.g. browsers, mail user agents, applications, ...
In other words, if you know about it, it's wrong.
I'm investigating using Mercurial in a corporate environment. The plan is to use central repositories hosted by a webserver (IIS) which developers will push to once they've tested changes locally or within their teams.
I have IIS configured to authenticate users against Active Directory, but there seems to be a hole in that while I can enforce who can push, I can't enforce that they sign their changesets as themselves.
For example, given a basic "commit" scenario:
user commits to their local repository
user pushes their changes to the central repository
In step 1, the user provides a username (via their .hgrc file or whatever) to their local repository, but there isn't really any way to enforce that this is their "real" username.
In step 2, the user has to provide their "real" credentials to IIS to be allowed to push, but their changesets will show up in the history with whatever username they provided in step 1. It seems like if Bob used "alice" as his username in step 1, he could make sure Alice got the blame for any of his buggy changes.
Is there a way to make sure these usernames match up during the push (via hooks or something)? Or alternatively, some other way to ensure a reasonable level of authenticity in the change log?
Edit: On further consideration, I guess I don't actually want to enforce that these names line up; if Bob and Alice have been collaborating in a separate repo, Bob should ultimately be able to push all of their changes, not just his own. What I really want is just to make sure that if it comes down to it, I can tell who made what changes in a more definitive way than just whatever username was applied.
I'm thinking GpgExtension is part of the answer, but I still don't think I've got the full picture.
I eventually found this discussion, which essentially says that my options are getting everyone to sign changesets with GPG, or setting up a "pushlog" outside of Mercurial which tracks which user pushed what to the central repository.
Ry4an also pointed out this (essentially duplicate) question with some good answers that confirm what I'd found elsewhere.
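For reference, the "pushlog outside of Mercurial" option can be approximated with a changegroup hook on the central repository. This is only a sketch: it assumes the hgweb/ISAPI layer under IIS exposes the authenticated account as REMOTE_USER (check how your setup actually surfaces it), and newer Mercurial versions expect byte strings rather than plain str.

    # pushlog.py - record who pushed which changesets to the central repository.
    # Enable it in the central repo's hgrc:
    #   [hooks]
    #   changegroup.pushlog = python:/path/to/pushlog.py:pushlog

    import os
    import time

    def pushlog(ui, repo, node=None, **kwargs):
        # Assumption: the authenticated IIS user is visible as REMOTE_USER.
        env = getattr(ui, 'environ', None) or os.environ
        pusher = env.get('REMOTE_USER', 'unknown')

        # 'node' is the first changeset added by this push; log it and everything after.
        lines = []
        for rev in range(repo[node].rev(), len(repo)):
            ctx = repo[rev]
            lines.append('%s %s pushed-by=%s author=%s\n' % (
                time.strftime('%Y-%m-%dT%H:%M:%S'), ctx.hex(), pusher, ctx.user()))

        with open(os.path.join(repo.root, '.hg', 'pushlog'), 'a') as logfile:
            logfile.writelines(lines)
        return False  # never block the push; this is purely an audit trail

That gives you a record of who actually pushed each changeset without forcing the changeset usernames to line up, which fits the collaboration scenario in the edit above.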
I have various projects being built and tested periodically on a Hudson server, but I don't want every employee in the company to see published artifacts for every project.
Project-based matrix security seemed at first to be the key, but after many tests I find that granting overall read permission is mandatory if you want users to be able to read anything on the Hudson server.
So, in the end read permissions are binary: either you grant global read permission or you block everything, am I right?
I haven't tested it with the newest release, but I use the matrix setup. I gave Anonymous overall read; this way users can see the login screen when they go to http://servername:port/, but it does not give them access to the jobs. In the jobs themselves I configured the users that should actually see the job. Works like a charm.
UPDATE:
Meanwhile I found out that you can use "authenticated" instead of Anonymous. This enables access to Hudson/Jenkins through the links in the build-failed messages. Now everyone gets the logon dialog, and after signing in they land right at the job run of interest.
After trying to do something similar to you with Hudson's authorization settings, I came to the same conclusion you did.
I'd like to understand some of the best practices with respect to code signing. We have an Eclipse-based application and think it would be appropriate to sign our plug-ins. This raised a lot of questions:
Can/Should the private key be in source control?
Should we sign the code as part of our nightly build process or as part of our release process?
Should the code be signed automatically, or is there a reason why that should be a manual step?
My inclination is to say, "Yes", "Nightly", and "Automatically", but I could see an argument for only signing the release products. I might even make the argument that SQA should sign the code after they have verified it, although that would really mess with our release process.
How do other people manage this?
It depends on how secure you want your private key to be; it might not be something that you want a temporary employee with source access to have full access to.
At my work, we do the following:
"Test sign" binaries as part of our daily builds with a checked in key. This requires a test root certificate to be on machines in order to trust the binaries, but they will not be trusted if the bits are deployed outside the company.
Weekly (and for external releases), we sign with the real key. This is done via a separate, somewhat manual process. Only a few people have access to the key to sign the product.
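For the nightly "test sign" step, here is a rough sketch of how it could be automated with the JDK's jarsigner; the build directory, keystore path, alias, and password are placeholders, and the real release key should of course never be handled by a script like this.

    # nightly_sign.py - test-sign every plug-in JAR produced by the nightly build.
    import pathlib
    import subprocess

    BUILD_DIR = pathlib.Path('build/plugins')   # assumed nightly output directory
    TEST_KEYSTORE = 'keys/test-signing.jks'     # the checked-in *test* keystore
    TEST_ALIAS = 'nightly-test'                 # alias of the test key

    def sign(jar):
        # jarsigner ships with the JDK; a hard-coded storepass is acceptable
        # only because this key is worthless outside the company.
        subprocess.run(
            ['jarsigner', '-keystore', TEST_KEYSTORE, '-storepass', 'changeit',
             str(jar), TEST_ALIAS],
            check=True)

    if __name__ == '__main__':
        for jar in sorted(BUILD_DIR.glob('*.jar')):
            sign(jar)
            # Sanity check that the signature verifies.
            subprocess.run(['jarsigner', '-verify', str(jar)], check=True)

The weekly/external signing pass would follow the same shape, but with the real key kept somewhere only the release engineers can reach.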
I can tell you how I've seen this being done in a big corp. Individual developers could build the code, but they could not sign it; that would be a private build. The continuous integration machine would drop nightly builds signed with a key stored in the build machine's keystore, which would be a test key signed by a corporate certificate authority (i.e. a key trusted only within the corp). The official build could be signed only by controlled machines, with the official signing key (signed by a globally trusted authority) stored in hardware modules in a controlled-access room.
The idea is that a private key should really have only one copy in the world (at most one extra for escrow). The whole value of the key is derived from its privacy, not from anything else. The moment it is available to your entire org, it is as good as putting it out on The Pirate Bay.