Is there a way to add additional configurable settings in OpsCenter 6.0.2 Lifecycle Manager config profiles?

I would really like to add the following settings to our spark-defaults.conf using OpsCenter 6.0.2 in order to avoid configuration drift. Is there a way to add these config items to the config profile template?
spark.cores.max 4
spark.driver.memory 2g
spark.executor.memory 4g
spark.python.worker.memory 2g

NOTE: As Mike Lococo has pointed out in the comments for this answer, the steps below may update the config profile values but will not result in those values being written to spark-defaults.conf.
The following is not a solution!
You can do this by updating the config profile via the LCM Config Profile API (https://docs.datastax.com/en/opscenter/6.0/api/docs/lcm_config_profile.html#lcm-config-profile).
First, identify the config profile that needs updating:
$ curl http://localhost:8888/api/v1/lcm/config_profiles
Get the href for the specific config profile that needs updating, request it, and save the response body to a file:
$ curl http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 > profile.json
Now, in the profile.json file you just saved, add or edit the key at json > spark-defaults-conf so that it includes the following keys:
"spark-defaults-conf": {
"spark-cores-max": 4,
"spark-python-worker-memory": "2g",
"spark-ssl-enabled": false,
"spark-drivers-memory": "2g",
"spark-executor-memory": "4g"
}
Save the updated profile.json. Finally, execute an HTTP PUT to the same config profile URL, using the edited file as the request data:
$ curl -X PUT http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 -d @profile.json
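For reference, the same edit can be scripted instead of hand-editing profile.json. A minimal sketch assuming jq is available (the profile-updated.json filename is just an example, the keys and profile ID are the ones from above, and the caveat in the note at the top still applies):
$ curl http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 > profile.json
$ jq '.json["spark-defaults-conf"] += {"spark-cores-max": 4, "spark-python-worker-memory": "2g", "spark-drivers-memory": "2g", "spark-executor-memory": "4g"}' profile.json > profile-updated.json
$ curl -X PUT http://localhost:8888/api/v1/lcm/config_profiles/026fe8e3-0bb8-49c1-9888-8187b1624375 -d @profile-updated.json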

Related

How to check content of a Noobaa bucket

I am able to check the status of a NooBaa bucket using the noobaa bucket status <bucket> command.
$ noobaa bucket status XYZ
INFO[0005] ✅ Exists: NooBaa "noobaa"
INFO[0005] ✅ Exists: Service "noobaa-mgmt"
INFO[0006] ✅ Exists: Secret "noobaa-operator"
INFO[0006] ✅ Exists: Secret "noobaa-admin"
INFO[0008] ✈️ RPC: bucket.read_bucket() Request: {Name:XYZ}
INFO[0010] ✅ RPC: bucket.read_bucket() Response OK: took 14.3ms
Bucket status:
Bucket : XYZ
OBC Namespace : xyz-namespace
OBC BucketClass : default-bucket-class
Type : REGULAR
Mode : OPTIMAL
ResiliencyStatus : OPTIMAL
QuotaStatus : QUOTA_NOT_SET
Num Objects : 1
Data Size : 3.000 B
Data Size Reduced : 5.000 B
Data Space Avail : 1.000 PB
But I am not able to check the content present inside a NooBaa bucket.
How can we check the content of a NooBaa bucket, using the NooBaa CLI or any other way?
Your question made me realize that the noobaa CLI should have a noobaa object list command, so I opened a new issue for this enhancement on the operator GitHub repo. Thanks :)
Until this is added, there are several ways to list objects:
Run noobaa ui - notice that it opens the browser quickly, but on the terminal it prints the credentials for you to use for login. You can probably find the buckets and then drill down to the objects in the UI on your own, and you can also check out some recorded videos that navigate the UI - for example this video.
Take the admin S3 credentials and endpoint from noobaa status and then use your favorite S3 client - I currently use aws-cli or rclone:
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint $NOOBAA_S3_ENDPOINT --no-verify-ssl s3'
and then:
s3 ls XYZ
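Since rclone is mentioned above but not shown, here is a rough sketch of the equivalent setup with rclone (the remote name noobaa is arbitrary, and the credential/endpoint variables are the same ones used in the alias above):
rclone config create noobaa s3 provider Other access_key_id $NOOBAA_ACCESS_KEY secret_access_key $NOOBAA_SECRET_KEY endpoint $NOOBAA_S3_ENDPOINT
rclone --no-check-certificate ls noobaa:XYZ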
Not many have noticed, but the NooBaa system CR contains a useful Readme text in its status, with commands to "Test S3 client", ready to copy-paste to set up your aws-cli, including kubectl port-forward to support secure networks and reading the credentials from secrets. Check it out with kubectl describe noobaa. This 40-second YouTube video shows this briefly. BTW, the Readme text is generated per system, but it does not contain actual secrets, only kubectl commands to read those secrets if you are permitted to.
$ kubectl describe noobaa
...
Phase: Ready
Readme:
Welcome to NooBaa!
-----------------
NooBaa Core Version: 5.3.0-9f579d9
NooBaa Operator Version: 2.1.0
Lets get started:
1. Connect to Management console:
Read your mgmt console login information (email & password) from secret: "noobaa-admin".
kubectl get secret noobaa-admin -n backup-service -o json | jq '.data|map_values(@base64d)'
Open the management console service - take External IP/DNS or Node Port or use port forwarding:
kubectl port-forward -n backup-service service/noobaa-mgmt 11443:443 &
open https://localhost:11443
2. Test S3 client:
kubectl port-forward -n backup-service service/s3 10443:443 &
NOOBAA_ACCESS_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d')
NOOBAA_SECRET_KEY=$(kubectl get secret noobaa-admin -n backup-service -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d')
alias s3='AWS_ACCESS_KEY_ID=$NOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=$NOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3'
s3 ls
...
The last option, which should have been mentioned first, but which unfortunately I just saw is broken in the current version v2.1.0 (opened a new issue), is to use the generic noobaa api command to call the object_api list_objects method like so:
noobaa api object list_objects '{ "bucket": "first.bucket" }'
I hope that helps, feel free to open github issues with suggestions/issues.
Thanks!
(NooBaa CTO)

How do we send a canvas image data as an attachment to a server on Pharo?

How do we send or upload a data file to a server in Pharo? I saw some example of sending a file from a directory on the machine.
It works fine.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: FileLocator home / 'path to the file';
    put
In my case I don't want to send/upload a file downloaded on a machine; instead I want to send/upload a file hosted somewhere, or data I retrieved over the network, and send it attached to another server.
How can we do that?
Based on your previous questions I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget it sends the password in plaintext; set it up in a way you can mount it. There are probably plenty of ways to do this, but you can try using curlftpfs. You need the fuse kernel module for that, so make sure you have it loaded. If it is not loaded you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(For more details you can see the Arch wiki and its curlftpfs page.)
Webdav
For WebDAV I would use the same approach, this time using davfs.
You would manually mount it via mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up: systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
The fstab way is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For WebDAV you can even store the credentials securely:
Create a secrets file to store credentials for a WebDAV service, using ~/.davfs2/secrets for a user, and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions. For root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For FTP, for example, your code would simply take the file from the mounted directory:
ZnClient new
    url: MyUrl;
    uploadEntityFrom: '/mnt/ftp/your_file_to_upload' asFileReference;
    put
Edit, based on the comments:
The issue is that the configuration of the ZnClient is in Pharo itself, and the JSON file is also generated there.
One quick and dirty solution would be to use the mounts mentioned above via a shell command.
With FTP, for example:
| commandOutput |
commandOutput := (PipeableOSProcess command: 'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other, more sensible approach is to use Pharo's FTP or WebDAV support via FileSystemNetwork.
To load FTP support only:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load WebDAV support only:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything including tests:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := wDir / 'Arch' / 'lastsync'.
"Close connection - do always!"
ftpConnection close.
Then your upload via FTP would look like this:
| ftpConnection wDir file |
"Open connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := wDir / '<your_file_path>'.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close connection - do always!"
ftpConnection close.
The WebDAV case would be similar.

Dockerized DCTM 7.3 and Dockerized DCTM REST 7.3 not able to retrieve global registry or its documents

My setup consists of
Documentum Content Server 7.3 (dctm-cs) running in a docker container (from EMC)
Documentum REST Services 7.3 (dctm-rest) running in a docker container (from EMC)
I am definitely able to get information from within dctm by running queries against it with iapi, for example:
API> ?,c,select user_name from dm_user enable (return_top 5)
user_name
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
docu
ubuntudb
dm_superusers
dm_superusers_dynamic
dm_browse_all
(5 rows affected)
I am also able to $ curl http://localhost:8080/dctm-rest/repositories.json from both the dctm-rest container as well as its host container and get the results:
{"id":"http://localhost:8080/dctm-rest/repositories","title":"Repositories","author":[{"name":"EMC Documentum"}],"updated":"2017-08-16T21:42:44.177+00:00","page":1,"items-per-page":1000,"total":1,"links":[{"rel":"self","href":"http://localhost:8080/dctm-rest/repositories.json"}],"entries":[{"id":"http://localhost:8080/dctm-rest/repositories/ubuntudb","title":"ubuntudb","summary":"ubuntudb","updated":"2017-08-16T21:42:44.178+00:00","published":"2017-08-16T21:42:44.178+00:00","links":[{"rel":"edit","href":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}],"content":{"type":"application/json","src":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}}]}
Attempting to $ curl http://localhost:8080/dctm-rest/repositories/ubuntudb.json, however, hangs indefinitely.
I have attempted to provide the default username and password via basic HTTP authentication, also with the same results.
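For completeness, the basic-auth attempt looked roughly like this (a sketch; dmadmin/password are just the defaults from the dfc.properties files below, and --max-time is only there so the hang shows up as a timeout instead of blocking forever):
$ curl -u dmadmin:password --max-time 30 http://localhost:8080/dctm-rest/repositories/ubuntudb.json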
The contents of the dfc.properties file in dctm-cs:
dfc.data.dir=/opt/dctm
dfc.tokenstorage.dir=/opt/dctm/apptoken
dfc.tokenstorage.enable=false
dfc.docbroker.host[0]=ubuntustateless
dfc.docbroker.port[0]=1489
dfc.crypto.repository=ubuntudb
dfc.session.secure_connect_default=try_secure_first
dfc.globalregistry.repository=ubuntudb
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=AAAAEL9wp8c6k3K2UTQJwTYO5kMnE3rDrHJVDL+LijAg+zLk
The contents of the dfc.properties file in dctm-rest:
dfc.docbroker.host[0]=172.18.0.1
dfc.docbroker.port[0]=1489
#Add the global registry repository name to the following key.
dfc.globalregistry.repository=ubuntudb
#Add the username of the global registry user to the following key.
dfc.globalregistry.username=dmadmin
#Add an encrypted password value for the following key.
dfc.globalregistry.password=password
dfc.exception.include_id=false
dfc.exception.include_decoration=false
I have attempted to change the value of dfc.globalregistry.username to be the same as in dctm-cs, to no avail; the request still hangs.
I have also attempted to use both encrypted and plaintext values for dfc.globalregistry.password, in both dctm-cs and dctm-rest, also with no luck.

How do I post data to an external endpoint from Apache Module?

I'm coding an Apache module. In it I am able to read the request parameters and build some parameters by processing the request. Now I want to post this data to an external endpoint. How do I do that?
Say I have data
char* data = "{clientid:2433211456}"; and I want to post it to a URL example.com/getPostedData in async mode, how do I do that?
NOTE: Currently I'm working with plain Apache libs and the APXS tool. If there are modules I can build on for this, please do suggest them.
I have solved this issue as follows:
I used curl to post the request.
Installed libcurl-devel.
In the handler method:
char *postData = "hello=hie";
CURL *curl;

curl = curl_easy_init();
ap_rprintf(r, "Initialized curl!<br>");
if (curl) {
    ap_rprintf(r, "Curl request posting started!<br>");
    curl_easy_setopt(curl, CURLOPT_URL, "http://google.com/search");
    /* CURLOPT_POSTFIELDS already implies a POST, so CURLOPT_POST is not needed */
    //curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, postData);
    /* curl_easy_perform returns a CURLcode; 0 (CURLE_OK) means success */
    ap_rprintf(r, "%d", curl_easy_perform(curl));
    curl_easy_cleanup(curl);
}
Compiled it as: apxs -i -a -c mod_example.c -lcurl
Done!!
For more reference on libcurl: http://curl.haxx.se/libcurl
Find my sample module code on GitHub here.

Only show repos in gitweb that user has access to via Gitolite

I have gitolite setup and working with SSH key based auth. I can control access to repos via the gitolite-admin.git repo and the conf file. All of this works great over SSH but I would like to use GitWeb as a quick way to view the repos.
GitWeb is working great now but shows all repositories via the web interface. So my goal here is to:
Authenticate users in apache2 via PAM. I already have the Ubuntu server authenticating against AD and all the users are available, so this should not be an issue.
Use the logged-in user name to check gitolite permissions.
Display the appropriate repos in the web interface.
Does anyone have a starting point for this? The Apache part shouldn't be difficult, and I'll set it to auth all of the /gitweb/ URL. I don't know how to pass that username around and authorize it against gitolite. Any ideas?
Thanks,
Nathan
Yes, it is possible, but you need to complete the gitweb config scripts in order to call gitolite.
The key is in the gitweb_config.perl: if that file exists, gitweb will include and call it.
See my gitweb/gitweb_config.perl file:
our $home_link_str = "ITSVC projects";
our $site_name = "ITSVC Gitweb";
use lib (".");
require "gitweb.conf.pl";
In gitweb/gitweb.conf.pl (a custom script), I define the official callback function called by gitweb, export_auth_hook; that function will call gitolite.
use Gitolite::Common;
use Gitolite::Conf::Load;
#$ENV{GL_USER} = $cgi->remote_user || "gitweb";
$export_auth_hook = sub {
    my $repo = shift;
    my $user = $ENV{GL_USER};
    # gitweb passes us the full repo path; so we strip the beginning
    # and the end, to get the repo name as it is specified in gitolite conf
    return unless $repo =~ s/^\Q$projectroot\E\/?(.+)\.git$/$1/;
    # check for (at least) "R" permission
    my $ret = &access( $repo, $user, 'R', 'any' );
    return ($ret !~ /DENIED/);
};
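To sanity-check what that access() call will return for a given repo and user, a quick sketch from the shell using gitolite's own access command (run as the gitolite hosting user; reponame and username are placeholders):
gitolite access -q reponame username R any && echo allowed || echo denied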
From the comments:
GL_USER is set because of the line:
$ENV{GL_USER} = $cgi->remote_user || "gitweb";
$cgi->remote_user will pick up the REMOTE_USER environment variable set by any Apache auth module which has completed the authentication (like in this Apache configuration file).
You can print it with a 'die' line.
"Could not find Gitolite/Rc.pm" means the INC variable used by perl doesn't contain $ENV{GL_LIBDIR}; (set to ~/gitolite/lib or <any_place_where_gitolite_was_installed>/lib).
That is why there is a line in the same gitweb/gitweb.conf.pl file which adds that to INC:
unshift #INC, $ENV{GL_LIBDIR};
use lib $ENV{GL_LIBDIR};
use Gitolite::Rc;
Edit from Nat45928: in my case I needed to insert my home path into all the '@H@' entries. That solved all of my issues right away.