WLST commands for WebLogic security realm role creation and adding conditions

I have started my WebLogic Admin Console and went to Security Realms -> myrealm -> Roles and Policies -> Global Roles -> Roles. There I clicked the "New" button, created a new role, then modified it, giving it an LDAP user as a role condition.
I was wondering if we can automate this job with a WLST script. Could you please help us identify the WLST commands to create a role and add conditions?
I have done some reading about cmo.getSecurityConfiguration().getDefaultRealm().lookupRoleMapper("XACMLRoleMapper") on the Oracle pages, but I am not sure about the implementation.

Here is a sample script used to create a global role and a policy on a JMS resource:
connect('...','...','t3://localhost:7001')
realm = cmo.getSecurityConfiguration().getDefaultRealm()
rm = realm.lookupRoleMapper("XACMLRoleMapper")
rm.createRole(None, "role1", None, "")
rm.createRole(None, "role2", None, "")
authorizer = realm.lookupAuthorizer("XACMLAuthorizer")
authorizer.createPolicy('type=<jms>, application=SystemModule-0, destinationType=queue, resource=Queue-0', '{Rol(role1)}')
authorizer.removePolicy('type=<jms>, application=SystemModule-0, destinationType=queue, resource=Queue-0', '{Rol(role1)}')
authorizer.getPolicyExpression('type=<jms>, application=SystemModule-0, destinationType=queue, resource=Queue-0')
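
The question also asks about adding a condition (the LDAP user) to the role. The XACML role mapper implements the RoleEditorMBean interface, whose setRoleExpression method updates a role's expression. A minimal sketch under that assumption, where myLdapUser is a placeholder for your actual LDAP user name (check the expression syntax against your WebLogic version's documentation):
# Assumes rm was looked up as above; None targets the global (realm-wide) scope.
# Usr(...) grants the role to a named user; Grp(...) would grant it to a group.
rm.setRoleExpression(None, "role1", "Usr(myLdapUser)")
A script like this is typically saved to a .py file and run non-interactively via the wlst.sh/wlst.cmd launcher.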

rm = cmo.getSecurityConfiguration().getDefaultRealm().lookupRoleMapper("XACMLRoleMapper")
print rm.getProviderClassName()
print rm.getName()
cursor = rm.listAllRoles(1000)
print cursor
userReader = rm
while userReader.haveCurrent(cursor):
    usrrd = userReader.getCurrentProperties(cursor)
    print usrrd.get('RoleName')
    #print usrrd
    print "\t", usrrd.get('Expression')
    userReader.advance(cursor)
userReader.close(cursor)
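
If you only need to confirm the condition on a single role rather than iterating over all of them, the role reader interface also exposes a direct lookup. A minimal sketch, reusing rm and the role name from above:
# Print the stored expression for one role (None = global scope).
print rm.getRoleExpression(None, "role1")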

Related

How do I resolve this error message? Azure DevOps TF401232: Work item does not exist, or you do not have permissions to read it

Open Project settings -> Project configuration -> Areas -> select the area path (01 - Template) -> click "…" -> Security -> search for your account, then check the "View work items in this node" permission and ensure it is set to Allow.
Also, make sure you are not part of a user group whose access level is set to Deny for viewing work items in the respective node.

Is it possible to use service accounts to schedule queries in BigQuery's "Scheduled Query" feature?

We are using the beta scheduled query feature of BigQuery.
Details: https://cloud.google.com/bigquery/docs/scheduling-queries
We have a few ETL scheduled queries running overnight to optimize aggregation and reduce query cost. They work well and there haven't been many issues.
The problem arises when the person who scheduled the query using their own credentials leaves the organization. I know we can do "update credential" in such cases.
I read through the documentation and also experimented a bit, but couldn't really find whether we can use a service account instead of individual accounts to schedule queries.
Service accounts are cleaner, tie into the rest of the IAM framework, and are not dependent on a single user.
So if you have any additional information regarding scheduled queries and service accounts, please share.
Thanks for taking the time to read the question and respond to it.
Regards
BigQuery scheduled queries now support creating a scheduled query with a service account and updating a scheduled query with a service account. Will these work for you?
While it's not supported in the BigQuery UI, it's possible to create a transfer (including a scheduled query) using the Python GCP SDK for DTS, or from the bq CLI.
The following is an example using the Python SDK:
r"""Example of creating TransferConfig using service account.
Usage Example:
1. Install GCP BQ python client library.
2. If it has not been done, please grant p4 service account with
iam.serviceAccout.GetAccessTokens permission on your project.
$ gcloud projects add-iam-policy-binding {user_project_id} \
--member='serviceAccount:service-{user_project_number}#'\
'gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com' \
--role='roles/iam.serviceAccountTokenCreator'
where {user_project_id} and {user_project_number} are the user project's
project id and project number, respectively. E.g.,
$ gcloud projects add-iam-policy-binding my-test-proj \
--member='serviceAccount:service-123456789#'\
'gcp-sa-bigquerydatatransfer.iam.gserviceaccount.com'\
--role='roles/iam.serviceAccountTokenCreator'
3. Set environment var PROJECT to your user project, and
GOOGLE_APPLICATION_CREDENTIALS to the service account key path. E.g.,
$ export PROJECT_ID='my_project_id'
$ export GOOGLE_APPLICATION_CREDENTIALS=./serviceacct-creds.json'
4. $ python3 ./create_transfer_config.py
"""
import os
from google.cloud import bigquery_datatransfer
from google.oauth2 import service_account
from google.protobuf.struct_pb2 import Struct
PROJECT = os.environ["PROJECT_ID"]
SA_KEY_PATH = os.environ["GOOGLE_APPLICATION_CREDENTIALS"]
credentials = (
service_account.Credentials.from_service_account_file(SA_KEY_PATH))
client = bigquery_datatransfer.DataTransferServiceClient(
credentials=credentials)
# Get full path to project
parent_base = client.project_path(PROJECT)
params = Struct()
params["query"] = "SELECT CURRENT_DATE() as date, RAND() as val"
transfer_config = {
"destination_dataset_id": "my_data_set",
"display_name": "scheduled_query_test",
"data_source_id": "scheduled_query",
"params": params,
}
parent = parent_base + "/locations/us"
response = client.create_transfer_config(parent, transfer_config)
print response
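
Following up on the first answer above: newer releases of the Data Transfer Service also let you attach a service account to the scheduled query itself, not just authenticate the API call with one. A hedged sketch under that assumption (the service_account_name keyword argument should be verified against your installed client library version, and the account name below is a placeholder); the bq CLI appears to expose a matching --service_account_name flag on bq mk --transfer_config:
# Create the transfer so the scheduled query runs as the given service
# account rather than as the calling user. The service account needs the
# BigQuery permissions required by the query itself.
response = client.create_transfer_config(
    parent,
    transfer_config,
    service_account_name="scheduler@my_project_id.iam.gserviceaccount.com",
)
print(response)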
As far as I know, unfortunately you can't use a service account to directly schedule queries yet. Maybe a Googler will correct me, but the BigQuery docs implicitly state this:
https://cloud.google.com/bigquery/docs/scheduling-queries#quotas
A scheduled query is executed with the creator's credentials and
project, as if you were executing the query yourself
If you need to use a service account (which is great practice BTW), then there are a few workarounds listed here. I've raised a FR here for posterity.
This question is quite old, and I came across this thread while searching for the same thing.
Yes, it is possible to use a service account to schedule BigQuery jobs.
While creating the scheduled query job, click on "Advanced options"; you will get the option to select a service account.
By default it uses the credentials of the requesting user.
(Screenshot: the BigQuery "create scheduled query" dialog.)

Script to map users and groups to roles of an application in WebSphere 8.5.5

I'm trying to automate mapping users and groups to an application in WAS. Below is the script I am trying; I want to verify that this is the way to do it, since I don't know much about WAS.
import sys

filename = ""
fileread = open(filename, 'r')
filelines = fileread.readlines()
for row in filelines:
    column = row.strip().split(';')
    user_name = column[0]
    print user_name
    pass_word = column[1]
    first = column[2]
    last = column[3]
    AdminTask.createUser(['-uid', user_name, '-password', pass_word, '-confirmPassword', pass_word, '-cn', first, '-sn', last])
    AdminTask.mapUsersToAdminRole(['-roleName', 'Administrator', '-userids', user_name])
    AdminConfig.save()
    print 'Userid creation completed for', user_name
AdminApp.install('myapp.ear', '[-MapRolesToUsers [["All Role" No Yes "" ""] ["Every Role" Yes No "" ""] [DenyAllRole No No user1 group1]]]')
agmBean = AdminControl.queryNames('type=AuthorizationGroupManager,process=dmgr,*')
AdminControl.invoke(agmBean, 'refreshAll')
fileread.close()
Judging by your previous question, I'm assuming you have the LDAP server set up. If you are mapping users from the LDAP server to administrator roles, then you don't need to create new users. Something like this command will map user1 from LDAP to the admin role:
AdminTask.mapUsersToAdminRole('[-accessids [user:defaultWIMFileBasedRealm/cn=user1,ou=users,dc=yourco,dc=com ] -userids [user1 ] -roleName administrator]')
The realm name can be found in wimconfig.xml under CELL_DIR/wim/config; defaultWIMFileBasedRealm is the default. I would suggest running the command manually, and writing the script once you have everything working.
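Tying this back to the file-driven script in the question: if the users already exist in LDAP, the loop only needs the mapping call, not createUser. A minimal sketch, assuming users.txt (a placeholder file name) contains one LDAP user ID per line and the realm uses the default name; adjust the cn=/ou=/dc= values to your LDAP tree:
# Map each LDAP user listed in users.txt to the WAS administrator role.
fileread = open('users.txt', 'r')
for row in fileread.readlines():
    user_name = row.strip()
    if not user_name:
        continue  # skip blank lines
    AdminTask.mapUsersToAdminRole('[-accessids [user:defaultWIMFileBasedRealm/cn=%s,ou=users,dc=yourco,dc=com ] -userids [%s ] -roleName administrator]' % (user_name, user_name))
    print 'Mapped', user_name, 'to administrator'
AdminConfig.save()
fileread.close()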

Adding a TFS server group to access levels via command line

I am creating a group of users within TFS 2013 and I want to add them to a non-default access level (e.g. the Full access group), but I noticed I am only able to do this through the web interface, by adding a TFS group under that level. I am wondering if there is a way to do this via the developer tool (command line), as everything I am doing is being done in a batch script.
Any input would be appreciated. Thanks!
Create 3 TFS server groups and add these groups to the different access levels (e.g. TFS_ACCESS_LEVEL_(NONE|STANDARD|FULL)). Then use the TFSSecurity command-line tool to add groups to these existing, mapped groups (tfssecurity /g+ TFS_ACCESS_LEVEL_NONE GroupYouWantToHaveThisAccessLevel). There is no other way to directly add people to the access levels, except perhaps through the Object Model using C#.
For the record, tfssecurity may require the URI, which can be obtained via the API. This is easy to do in PowerShell; here is how to create a TFS group:
# get-tfs is a helper from the full script linked below.
[psobject] $tfs = get-tfs -serverName $collection
# Look up the project URI that tfssecurity needs.
$projectUri = ($tfs.CSS.ListAllProjects() | where { $_.Name -eq $project }).Uri
# /gc creates a server-level group scoped to the given project.
& $TFSSecurity /gc $projectUri $groupName $groupDescription /collection:$collection
Full script at TfsSecurity wrapper.

Using Cacti for monitoring Microsoft SQL servers

Is anyone using Cacti to monitor SQL Server counters (disk queue length, I/O requests, etc.)?
If you are, how did you go about accomplishing this? Basically, I gather a number of performance counters on my SQL Servers. I need a way to create graphs and slice and dice the data I have gathered. If you know of any other graphing solutions, let me know.
Yes, done this a few times:
http://docs.cacti.net/usertemplate:host:microsoft:sqlserver
It works really well. You need access to create a login; this is the script you run, which is not invasive:
/* SQL 2005/2008 */
USE [master]
GO
CREATE LOGIN [cactistats] WITH PASSWORD=N'SomePassword', DEFAULT_DATABASE=[master], DEFAULT_LANGUAGE=[us_english], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
EXEC sys.sp_addsrvrolemember @loginame = N'cactistats', @rolename = N'processadmin'
GO
CREATE USER [cactistats] FOR LOGIN [cactistats] WITH DEFAULT_SCHEMA=[dbo]
GO
GRANT SELECT ON [sys].[dm_os_performance_counters] TO [cactistats]
GO
/* END */
Once it's run and you've added the scripts as per the installation documentation, you will be able to graph SQL metrics.
Mike
This answer is in addition to the correctly marked answer.
If you need to monitor a particular SQL Server instance, you need to edit the script file
/usr/share/cacti/site/scripts/ss_win_mssql.php
and change the line:
if (! $link = mssql_connect($host.':'.$port, $username, $password) )
to
$host = ($port == '1433' ? $host : $host.':'.$port);
if (! $link = mssql_connect($host, $username, $password) )
return;
and when creating the graphs, set the hostname and instance accordingly.