Creating a process in a non-zero session from a service in windows-2008-server? - windows-server-2008

I was wondering if there is a simple way for a service to create a process in a user session?
My service is running under a user (administrator) account and not as the LocalSystem account, therefore I can't use the WTSQueryUserToken function.
I have tried calling
OpenProcessToken(GetCurrentProcess, TOKEN_ALL_ACCESS, TokenHandle);
but when I use this token to run
CreateProcessAsUser(TokenHandle, .....)
my process still runs in session 0.
How can I resolve this issue?
I'm using OLE automation, so I don't really care which session the process runs in, as long as it is not session 0 - because OLE, for some reason, doesn't create its processes (winword.exe, for instance) in session 0, but rather in other user sessions.
Any suggestions will be welcome.
Thanks in advance.

I have been able to resolve this issue myself; thanks to all of those who have looked at this question.
OK, so as I mentioned above - the token belongs to a process which is running in session 0...
So what I have done is look for the token of a process that is not running in session 0,
and use that process's ID as the parameter for OpenProcessToken.
CreateProcessAsUser will then create the new process in the same session (and probably with the same credentials) as the process you have chosen.
One problem was that I couldn't get any details on most of the processes using the function QueryFullProcessImageName - it appears to have a bug and doesn't work on processes that were created from a path containing spaces (like C:\Program Files, for instance).
Another issue with that approach, I guess, is that because I'm running the original process under a user's credentials, I can't access the information of a process that is running under the LocalSystem account - which is pretty bad, because I wanted to take winlogon.exe as my target process (since it indicates a newly opened session).
Also, in order to succeed with this trick, you must play a little bit with the security settings of the system to allow the process to request elevated privileges.
The privileges I chose to enable are:
SeDebugPrivilege - for finding information about the running processes
SeAssignPrimaryTokenPrivilege - in order to run a new process with the token extracted from the user-session process (i.e. explorer.exe)
SeCreateTokenPrivilege - I don't know if it is needed, but I enabled it anyway because it sounds related.
In order to enable these privileges, you must add the user that runs the process to the relevant user lists for all of them, in run -> gpedit.msc or run -> secpol.msc (under Local Computer Policy\Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignments).
Add your account to the following rights (corresponding to the privileges above):
Create a token object
Debug programs
Replace a process level token
and that is it! :)
It has been working great!
Btw, you might want to disable all the UAC stuff... I don't know if it is related or not, but it has made working with Server 2008 less painful - no more annoying popups.
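For reference, here is a rough, untested sketch of the whole approach in Python using the pywin32 and psutil packages. The package choice, the use of explorer.exe as the donor process, and notepad.exe as the launched program are all my own illustration, not part of the original solution:

import psutil  # assumption: psutil is available; used only to find explorer.exe
import win32api
import win32con
import win32process
import win32security

def enable_privilege(name):
    # enable one privilege (e.g. SeDebugPrivilege) on our own process token;
    # the service account must also hold the right in secpol.msc, as above
    flags = win32security.TOKEN_ADJUST_PRIVILEGES | win32security.TOKEN_QUERY
    token = win32security.OpenProcessToken(win32api.GetCurrentProcess(), flags)
    luid = win32security.LookupPrivilegeValue(None, name)
    win32security.AdjustTokenPrivileges(
        token, 0, [(luid, win32security.SE_PRIVILEGE_ENABLED)])

enable_privilege(win32security.SE_DEBUG_NAME)
enable_privilege(win32security.SE_ASSIGNPRIMARYTOKEN_NAME)

# find a process that lives in an interactive user session
pid = next(p.pid for p in psutil.process_iter(['name'])
           if p.info['name'] == 'explorer.exe')

proc = win32api.OpenProcess(win32con.PROCESS_QUERY_INFORMATION, False, pid)
token = win32security.OpenProcessToken(
    proc, win32con.TOKEN_DUPLICATE | win32con.TOKEN_QUERY)

# duplicate into a primary token that CreateProcessAsUser will accept
primary = win32security.DuplicateTokenEx(
    token, win32security.SecurityImpersonation,
    win32con.TOKEN_ALL_ACCESS, win32security.TokenPrimary, None)

startup = win32process.STARTUPINFO()
win32process.CreateProcessAsUser(
    primary, None, 'notepad.exe', None, None, False, 0, None, None, startup)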

Related

Workaround for seeing data factory v2 debug runs

I realise a debug run is normally not visible in the data factory v2 UI after closing the browser window; however, unfortunately I needed to restart my machine unexpectedly, and it's a long-running pipeline.
I thought the runs might be available via PowerShell, but I haven't had any luck.
The pipeline is likely still running.
We do have external logging, but ideally I'd like to see how long each activity is taking, as I'm load testing.
More importantly, I do not want to do another run until I'm sure it's finished... notably, I'll run it from a trigger next time (just in case!).
EDIT:
It looks like a sandbox id is used, which is stored in the browser's local storage, and there appear to be undocumented API endpoints for gathering info using the sandbox id. But there doesn't appear to be a way of getting old sandbox ids, so I'm probably out of luck.
There is a button to view all debug runs.
Taken from Microsoft documentation:
To view a historical view of debug runs or see a list of all active debug runs, you can go into the Monitor experience.
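For what it's worth, the programmatic route appears to have the same limitation: as far as I can tell, the management API only returns triggered runs, not debug (sandbox) runs. A hypothetical query via the azure-mgmt-datafactory Python package (all names below are placeholders) would look like this:

from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), '<subscription-id>')
# query runs updated in the last 24 hours
filters = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(days=1),
    last_updated_before=datetime.utcnow())
runs = client.pipeline_runs.query_by_factory(
    '<resource-group>', '<factory-name>', filters)
for run in runs.value:
    print(run.pipeline_name, run.status, run.duration_in_ms)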

CloudTrail RunInstances event, who actually provisioned EC2 instance when STS AssumeRole used?

My client is in need of an AWS spring cleaning!
Before we can terminate EC2 instances, we need to find out who provisioned them and ask whether they are still using the instance before we delete it. AWS doesn't seem to provide an out-of-the-box feature for reporting who the 'owner'/'provisioner' of an EC2 instance is; as I understand it, I need to parse through gobs of archived gzipped log files residing in S3.
The problem is, their automation makes use of STS AssumeRole to provision instances. This means the RunInstances event in the logs doesn't trace back to an actual user (correct me if I'm wrong - please, I hope I am wrong).
An AWS blog provides a story of a fictional character, Alice, and her steps tracing a TerminateInstance event back to a user, which involves two log events: the TerminateInstance event itself and an AssumeRole event from "somewhere around the time" of it, containing the actual user details. Is there a pragmatic approach one can take to correlate these two events?
Here's my POC that parses a CloudTrail log from S3:

import boto3
import gzip
import json

boto3.setup_default_session(profile_name=<your_profile_name>)
s3 = boto3.resource('s3')
s3.Bucket(<your_bucket_name>).download_file(<S3_path>, "test.json.gz")

# CloudTrail logs are gzipped JSON; 'rt' mode gives us text directly
with gzip.open('test.json.gz', 'rt') as fin:
    json_data = json.loads(fin.read())

for record in json_data['Records']:
    if record['eventName'] == "RunInstances":
        user = record['userIdentity']['userName']
        principalid = record['userIdentity']['principalId']
        for instance in record['responseElements']['instancesSet']['items']:
            print("instance id: " + instance['instanceId'])
            print("user name: " + user)
            print("principalid " + principalid)
However, the details are generic, since these roles are shared by many groups. How can I find the details of the user from before they assumed the role, in a script?
UPDATE: Did some research, and it looks like I can correlate the RunInstances event to an AssumeRole event by a shared 'accessKeyId', and that should show me the account name from before it assumed the role. Tricky, though: not all RunInstances events contain this accessKeyId - for example, if 'invokedBy' was an autoscaling event.
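A hypothetical sketch of that correlation, reusing the json_data parsed above (the field names follow my reading of the CloudTrail record format - verify them against your own logs):

# index AssumeRole events by the temporary access key they issued;
# userIdentity on the AssumeRole record still describes the original caller
assumed = {}
for record in json_data['Records']:
    if record['eventName'] == 'AssumeRole':
        creds = record.get('responseElements') or {}
        key_id = creds.get('credentials', {}).get('accessKeyId')
        if key_id:
            assumed[key_id] = record['userIdentity'].get('arn', 'unknown')

# then resolve each RunInstances event through that index
for record in json_data['Records']:
    if record['eventName'] == 'RunInstances':
        key_id = record['userIdentity'].get('accessKeyId')
        caller = assumed.get(key_id, 'no matching AssumeRole found')
        print("instances launched with key %s by %s" % (key_id, caller))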
Direct answer:
For the solution you are proposing, you are unfortunately out of luck. Take a look at http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html#w28aac22b9b4b7b3b1. On the fourth row, it says that AssumeRole will record only the role identity for all subsequent calls.
I'd contact AWS support to make sure of this, as I might very well be mistaken.
What I would do in your case:
First, wait a couple of days in case someone has a better idea, or in case I was mistaken and AWS support answers with an out-of-the-box solution. Then:
1. Create an AWS Config rule that deletes all instances that have a certain tag. Tell your developers to tag all instances that they are sure should be deleted; those will then get deleted.
2. Tag all the production instances, and the development instances that are still needed, with a tag of their own.
3. Run a script that tags all of the remaining untagged instances with a separate tag (see the sketch after this list). Double and triple check these instances.
4. Back up and turn off the instances tagged in step 3 (without deleting them).
5. If someone complains about something not being on, that means they missed an instance in step 1 or 2. Tag that instance correctly and turn it on again.
6. After a while (a week or so), delete the instances that are still stopped (keep the backups).
7. After a couple of months, delete the backups that were not restored.
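A minimal boto3 sketch for step 3 (the 'no tags at all' criterion and the tag key/value are my own placeholders - adapt them to your tagging scheme):

import boto3

ec2 = boto3.client('ec2')

# collect every instance that currently has no tags at all
untagged = []
for page in ec2.get_paginator('describe_instances').paginate():
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            if not instance.get('Tags'):
                untagged.append(instance['InstanceId'])

# mark them for review before anything gets shut down
if untagged:
    ec2.create_tags(
        Resources=untagged,
        Tags=[{'Key': 'lifecycle', 'Value': 'untagged-review'}])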
Note that this isn't foolproof, as there is the possibility of human error and some downtime. So double and triple check, make a clone of the same environment and test on that (if you have a development environment that already has such a configuration, that would be the best scenario), take it slow so you can monitor everything, and be sure to keep backups of everything.
Good luck, and please tell me what your solution ended up being.
General guidelines for the future:
Note: The following points are very opinionated; they are general rules that I abide by, as I find they save me a load of trouble from time to time. Read them, dismiss what you find unfit for you, and take the things that you find reasonable.
Don't use AssumeRole that often, as it obfuscates user access. If it's a script run on a developer's PC, let it run with their own username. If it's running on a server, keep it with the role it was created under. The amount of management will be less that way, as you cut out the middle-man (the assumed role) and don't need to create roles anymore - just assign the permissions to the correct group/user. See below for when I'd consider using AssumeRole a necessity.
Automate deletions. The first thing you should build is automation that keeps the AWS account as clean as possible, as this saves both money and debugging pain. Tags, and scripts that act on those tags, are very powerful tools. So if a developer needs an instance for a day to try out something new, they can add a tag that times the instance out, and a script cleans it up when the time comes. These are project-specific, and not everyone needs all of them, so assess what you need for your project and act on it.
What I'd recommend is giving permissions to the users themselves in the development environment, as it makes tracing things to their root, and finding the most knowledgeable person to solve them, easier. As for the production environment, everything should be automated anyway (creation when needed, deletion when no longer needed) and no one should have any write access to that account, ever.
As for AssumeRole, I only use it when I want to give read-only access to production logs on another account. Another case would be something that really shouldn't happen often, if at all, but where some users still need access. So, as an extra layer of protection against the 'I did it by mistake', I make them switch role to do it, and never have a script that automatically switches roles and does the action - in an attempt to make it as deliberate as possible (think deleting a database and such). Another case would be accessing sensitive information (a credit-card database, etc.). Many more scenarios can occur, and there it comes down to your judgement.
Again, good luck.

Is it possible to accurately log what applications the user has launched through the linux kernel?

My goal is to write to a file, with a timestamp, whenever the user launches an application (such as Firefox).
The tricky part is having to do this from the kernel (or a module loaded into the kernel).
From the research I've done so far (sources listed below), the execve system call seemed the most viable: it carries the filename of the program being executed, which seemed like gold at the time. But I quickly learned it wasn't as useful as I thought, since this system call isn't limited to user-initiated launches.
So then I thought of using ps -ef, as it lists all the currently running processes, and I would just have to filter out which ones were applications opened by the user.
But the issue with that method is that I would have to poll every X seconds, so it has the potential to miss something if the user launched and closed an application within the window between calls to ps -ef.
I've also realized that writing to a file would be a challenge as well, since you don't have access to the standard library from the kernel. My guess there would be to make use of proc somehow, to allow the user to actually access the information that I'm trying to log.
Basically I'm running out of leads and I'd greatly appreciate it if anyone could point me in the right direction.
Thanks.
Sources:
http://tldp.org/LDP/lkmpg/2.6/html/x978.html (not very recent)
https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-4.html
First, writing to or reading a real file from the kernel is a bad idea, and it is not done in the kernel. There are of course VFS files, like /sys/fs or /proc, but these are a special case where it is allowed.
See this article in Linux Journal:
"Driving Me Nuts - Things You Never Should Do in the Kernel" by Greg Kroah-Hartman
http://www.linuxjournal.com/article/8110
Every new process that is created in Linux adds an entry under /proc,
as /proc/pidNum, where pidNum is the process ID of the new process.
You can find out the name of the newly invoked application simply with
cat /proc/pidNum/cmdline.
So for example, if your crond daemon has pid 1336, then
$ cat /proc/1336/cmdline
will give
cron
And there are ways to monitor adding entries to a folder in Linux.
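For illustration, here is a small userspace sketch of that /proc idea in Python. It is polling-based, so it shares the miss-window limitation of the ps -ef approach described in the question:

import os
import time

seen = set()
while True:
    for entry in os.listdir('/proc'):
        # numeric directory names under /proc are process IDs
        if entry.isdigit() and entry not in seen:
            seen.add(entry)
            try:
                with open('/proc/%s/cmdline' % entry, 'rb') as f:
                    # arguments in cmdline are NUL-separated
                    cmd = f.read().replace(b'\x00', b' ').decode().strip()
            except IOError:
                continue  # process exited before we could read it
            if cmd:
                print('%s pid=%s %s' % (time.strftime('%F %T'), entry, cmd))
    time.sleep(1)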

How to override edit locks

I'm writing a WLST script to deploy some WARs and an EAR. However, intermittently, the script will time out because it can't seem to get an edit lock (this script is part of a chain of many other scripts). I was wondering, is there a way to override or clear any current locks on the server? This is only a temporary solution, but in the interest of time, it will do for now.
Thanks.
You could try setting a wait period and timeout:
startEdit([waitTimeInMillis], [timeoutInMillis], [exclusive])
Are other scripts erroring out and leaving the session locked? You could try adding exception handling around those. Also, if you have 'Automatically acquire lock' enabled in the Admin Console and you use the Admin Console while scripts are running, it can sometimes cause problems, even though you are not making 'lock-requiring' changes.
Also, are you using the same user for the chained scripts?
Within WLST, you can pass a number as a parameter to gain an exclusive lock. This allows the script to grab a different lock than the regular one that's used whenever an administrator locks from the console. It also prevents two instances of the same script from stepping on each other.
However, this creates complex change merge scenarios that are best avoided (by processes).
Oracle's documentation on configuration locks can be found here.
Alternatively, if you want the script to temporarily relieve any existing locks regardless of the pending changes, you may as well disable change management from the console, minimizing the inconvenience caused.
WLST also contains the cancelEdit command, which you could run before your startEdit. Hope one of these options pans out!
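A minimal WLST sketch combining those suggestions (the connection details are placeholders and the timeout values are arbitrary, not recommendations):

connect('weblogic', 'password', 't3://adminhost:7001')
edit()
try:
    # wait up to 2 minutes for the lock; hold the session at most 10 minutes
    startEdit(120000, 600000)
except WLSTException:
    # last resort: discard whatever stale session is holding the lock
    cancelEdit('y')
    startEdit(120000, 600000)
# ... make your changes here ...
save()
activate()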
To take the configuration change lock from another administrator:
If another administrator already has the configuration lock, the following message appears: "Another user already owns the lock. You will need to either wait for the lock to be released, or take the lock."
1. Locate the Change Center in the upper left corner of the Administration Console.
2. Click Take Lock & Edit.
3. Make your configuration changes.
4. In the Change Center, click Activate Changes. Not all changes take effect immediately - some require a restart (see Use the Change Center).
As long as you're running WLST as an administrative user, you should be able to jump into an existing edit session with the edit() command. I've done a quick test with two admin users - one in the Admin Console and one using WLST - and it appears to work fine: I can see the Admin Console session's changes inside the WLST interpreter.
You could put a very simple exception handler around your calls to startEdit that logs the exception's stack trace but does nothing else, and then rely on the edit() call to pop you into the change session.
Relying on that is going to be tricky, though, if another script has started an edit session and expects to be able to commit that change session itself - you'll get exceptions and unreliable behaviour across multiple invocations.
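Bearing that caveat in mind, a bare-bones sketch of the handler pattern (assuming you are already connected as an admin user; the session-joining behaviour of edit() is what I observed in testing, not a documented guarantee):

edit()
try:
    startEdit()
except WLSTException:
    # just log it and carry on; edit() above should already have
    # dropped us into the currently active change session
    dumpStack()
# ... changes made here go into whichever edit session is active ...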

SPWebService.RemoteAdministratorAccessDenied - How to use it in a proper way?

We have created a SharePoint web part which creates and updates SharePoint timer jobs automatically. The web part runs from the content web applications and not from Central Admin.
I've learnt that MSFT has made some minor changes to updating SPPersistedObject, so I'm getting Access Denied while calling Update().
But here are my questions -
1. I understood that we cannot set SPWebService.RemoteAdministratorAccessDenied = false from code running in the content web applications. Is there an STSADM command for it, other than PowerShell?
2. I can turn it off from a farm feature, but is that secure if I don't turn it back on immediately?
3. What is the best way to use it?
I don't believe so - you need to set the property from code running in Central Admin, or set it from a PowerShell script.
I honestly am not sure what security loophole Microsoft was trying to close with this one - but I'm also not a security guru; in fact, quite the opposite.
My suggestion is to disable the security setting, do what you need to do, then turn it back on. Since it's a very simple PowerShell script (or a farm feature receiver, if that's your thing), it should be pretty simple to disable and re-enable it each time you need to do something (which hopefully won't be that often anyway).