How to bulk suspend users in G-Suite - google-gsuite

I have a few hundred accounts that need to be suspended due to inactivity. I can use "Download users" to get a CSV that includes "Last sign in" as a column, but the CSV template I get from "Bulk update users" does not.
My plan is to:
1. use "Download users" to get a CSV that contains the "Last sign in" value
2. in a text editor, extract the neglected accounts
3. format them to the "Bulk update users" specs
4. add "Suspended" to the "New Status [UPLOAD ONLY]" field
5. add "Suspended Accounts" to the "Org Unit Path [Required]" field
6. upload using "Bulk update users"
I think this will work (a rough script for the CSV filtering is sketched below), but I wonder if there's a better way to do this without granting any third-party access.
(Edited to add step 5, the organizational unit, which has to be created first so that the suspended accounts can be reviewed more easily.)
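For steps 2-5, the filtering and reformatting can be scripted locally, which keeps everything inside files you already exported (no third-party access to the domain). Below is a minimal sketch under a few assumptions: the exported file is named users.csv, the relevant headers are "Email Address [Required]" and "Last Sign In [READ ONLY]", and the timestamp format matches the pattern in the comment; check all of these against your actual export, since Google has changed the labels over time.

```python
import csv
from datetime import datetime, timedelta

# Assumed column headers -- verify against your own "Download users" export.
EMAIL_COL = "Email Address [Required]"
LAST_SIGN_IN_COL = "Last Sign In [READ ONLY]"

CUTOFF = datetime.now() - timedelta(days=180)  # "inactive" threshold, adjust as needed

with open("users.csv", newline="", encoding="utf-8") as src, \
     open("bulk_update.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=[
        EMAIL_COL,
        "Org Unit Path [Required]",
        "New Status [UPLOAD ONLY]",
    ])
    writer.writeheader()
    for row in reader:
        raw = (row.get(LAST_SIGN_IN_COL) or "").strip()
        if raw in ("", "Never logged in"):
            last = None                        # never signed in -> treat as inactive
        else:
            try:
                # Assumed timestamp format; adjust if your export differs.
                last = datetime.strptime(raw[:19], "%Y/%m/%d %H:%M:%S")
            except ValueError:
                continue                       # skip rows we can't parse rather than guess
        if last is None or last < CUTOFF:
            writer.writerow({
                EMAIL_COL: row[EMAIL_COL],
                "Org Unit Path [Required]": "/Suspended Accounts",
                "New Status [UPLOAD ONLY]": "Suspended",
            })
```

The output only contains the three columns you need to fill in, so you would still paste those values into the template that "Bulk update users" gives you, and the "/Suspended Accounts" path assumes that OU from step 5 already exists.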

Related

SSAS Tabular - Power Query Editor stuck at "Operation in progress" - "identifying schemas"

In Visual Studio, with an SSAS Tabular model open, when I make a change in the Power Query Editor window to a large partition (1 million+ rows) that sources its data from an Azure SQL Database, the edit and preview happen quickly in the Power Query Editor itself. However, when I click "Close & Update" or "Close & Update Without Processing", the message "Operation in progress" - "identifying schemas" appears for a very long time.
At the same time, Task Manager shows Visual Studio downloading at several Mbps the entire time, so I am assuming that Visual Studio is attempting to download the full contents of the table.
Is there a way to prevent this behavior? I was thinking that "Close & Update Without Processing" would prevent it, but it does not.
My current workaround (a scripted version is sketched below) is:
1. Rename the Azure SQL Database table.
2. Create a new empty Azure SQL Database table with the same name and fields as the original table.
3. Perform the "Close & Update", letting it use this empty table as a source so it completes instantly.
4. Delete the new empty Azure SQL Database table.
5. Rename the Azure SQL Database table back to what it was previously.
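For what it's worth, the table swap can be driven from a small script so it's less error-prone than doing the renames by hand. This is only a sketch of the same rename / empty copy / rename-back sequence, not anything official: pyodbc, the connection string, and the table names (dbo.FactSales, dbo.FactSales_orig) are placeholders you would adapt, and the "Close & Update" step in Visual Studio still happens manually in between.

```python
# Sketch of the rename / empty-copy / rename-back workaround, run from Python.
# Run swap_in_empty_table() before "Close & Update" in Visual Studio, then
# restore_original_table() afterwards. All names below are placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)
TABLE = "dbo.FactSales"          # hypothetical partition source table
BACKUP = "dbo.FactSales_orig"    # temporary name for the real table

def swap_in_empty_table():
    with pyodbc.connect(CONN_STR, autocommit=True) as cn:
        cur = cn.cursor()
        # 1. Move the real table out of the way.
        cur.execute(f"EXEC sp_rename '{TABLE}', 'FactSales_orig'")
        # 2. Create an empty table with the same columns under the original name.
        cur.execute(f"SELECT * INTO {TABLE} FROM {BACKUP} WHERE 1 = 0")

def restore_original_table():
    with pyodbc.connect(CONN_STR, autocommit=True) as cn:
        cur = cn.cursor()
        # 3. Drop the empty stand-in and put the real table back.
        cur.execute(f"DROP TABLE {TABLE}")
        cur.execute(f"EXEC sp_rename '{BACKUP}', 'FactSales'")

if __name__ == "__main__":
    swap_in_empty_table()
    input("Empty table in place. Do the Close & Update in Visual Studio, then press Enter...")
    restore_original_table()
```

Note that SELECT ... INTO copies the columns but not constraints or indexes, which is fine here since the empty table only needs to satisfy the schema check.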
The workaround that I have found is to use a parameter to limit the number of rows being used during development. An example of how to set this up is here:
https://blog.crossjoin.co.uk/2018/02/26/filtering-data-loaded-into-a-workspace-database-in-analysis-services-tabular-2017-and-azure-analysis-services/

"save query" in BigQuery overwrite my old saved query without previous notification

I am using Google Cloud BigQuery.
I just realized that when I save my query, then edit it or create a new query and click "Save query" again, it overwrites my old saved query without asking first.
It would be good to pop up a message asking whether I want to overwrite my old query.
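Until the console warns before overwriting, one workaround is to keep important queries outside the "Saved queries" panel altogether, for example as views created with the Python client, since creating a view that already exists fails loudly instead of silently overwriting. A minimal sketch, with placeholder project, dataset, and view names:

```python
# Sketch: store a query as a BigQuery view instead of a console "saved query".
# client.create_table() raises google.api_core.exceptions.Conflict if the view
# already exists (exists_ok defaults to False), so nothing is overwritten
# silently. Project/dataset/view names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

view = bigquery.Table("my-project.my_dataset.monthly_revenue_v1")
view.view_query = """
    SELECT invoice_month, SUM(amount) AS revenue
    FROM `my-project.my_dataset.invoices`
    GROUP BY invoice_month
"""

client.create_table(view)  # fails loudly if monthly_revenue_v1 already exists
```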

SSAS: How to generate a CSV of all the users having access to a cube?

I need to compare the users listed in a file with the users having access to a certain cube, to check which to add and which to remove so the two match. The problem is, there are about 1000 users, so I'm not going to be able to do it one by one. Is there a quick way to generate a CSV of all the users of the cube so I can work some Python magic and get the overlap / difference easily?
Connect to the Cube Server via SSMS.
Expand your cube
Expand "Roles"
Right Click on "ReadOnly" >> "Script Role as" >> "CREATE TO" >> "New Query Editor Window" or "File..."
Now you have an XML file containing all the users with access to your cube. (One note: if you or your server admin are working with security groups and you don't have the rights to look inside those groups, you'll need to reach out to the admin so that he/she can give you the list of members of each security group.)
If you are not so much into querying XML files, here is a pretty simple workaround to get your list:
Open the file in your favourite editor (Notepad++, etc.) and remove everything except the lines containing <Name>.
In Notepad++ that can easily be done by highlighting the lines with "<Name>" (press Ctrl+F, then go to the highlight/mark options and activate "Set Bookmark"),
then go to "Search" >> "Bookmark" >> "Remove all lines without Bookmarks".
Finally, search and replace "<Name>" and "</Name>" with "".
Now you have your list without the surrounding XML, which you can, for example, paste into Excel and compare against your file with a VLOOKUP, or better: insert both lists into SQL tables and compare them with SQL.
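Since the question already mentions Python, the scripted role file can also be parsed directly instead of cleaned up in Notepad++. A minimal sketch under these assumptions: the scripted role is saved as role.xmla, the member names sit in <Name> elements under <Member> elements (check your file, since namespaces and nesting can vary), and the reference list is a plain-text file with one user per line. All file names are placeholders.

```python
# Sketch: pull the member names out of the scripted role (role.xmla), write
# them to a CSV, and diff them against a plain-text list of expected users
# (expected_users.txt, one per line). File names are placeholders.
import csv
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the XML namespace so we can match on the local element name."""
    return tag.rsplit("}", 1)[-1]

tree = ET.parse("role.xmla")
cube_users = set()
for elem in tree.iter():
    if local(elem.tag) == "Member":
        for child in elem:
            if local(child.tag) == "Name" and child.text:
                cube_users.add(child.text.strip())

with open("expected_users.txt", encoding="utf-8") as f:
    expected = {line.strip() for line in f if line.strip()}

# Full list as CSV, plus the two differences you actually care about.
with open("cube_users.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["user"])
    for user in sorted(cube_users):
        writer.writerow([user])

print("To add (in file, not in cube):", sorted(expected - cube_users))
print("To remove (in cube, not in file):", sorted(cube_users - expected))
```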

Combine multiple stored procedures into one script

I have multiple stored procedures in my application database. I need an easy way for someone else to integrate all my stored procedures into his/her database. Is there a way to combine all my stored procedures into one script so that someone can run the single script to recreate all my stored procedures in his/her database?
Use the SQL Server "Generate Scripts" wizard.
Right-click on the database from which you want to generate the scripts.
Choose Tasks --> Generate Scripts.
Click Next on the "Introduction" window, and on the second screen select the "Specific database objects" option and tick the checkbox next to "Stored Procedures" (if you are only scripting the stored procedures).
On the next screen, give the path and file name where you want to save the script.
Click the Advanced button and change the following:
a) "Check for object existence" - True
b) "Script USE DATABASE" - False
c) "Script DROP and CREATE"
d) "Script object-level permissions" - True
Items c and d are optional.
Once you have all these set, click Next until you reach the final screen and hit Finish. You will get all the procedures in a single .sql file.
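If you'd rather not click through the wizard each time, a similar result can be approximated from code: sys.procedures plus OBJECT_DEFINITION() returns each procedure's CREATE text, which you can concatenate into one .sql file. This is only a rough sketch (the pyodbc connection string is a placeholder), and unlike the wizard it does not add the existence checks or object-level permissions for you.

```python
# Sketch: dump every stored procedure in the current database into one .sql
# file using pyodbc. The connection string is a placeholder. This writes plain
# CREATE PROCEDURE statements only -- no IF EXISTS / DROP logic or permissions
# like the Generate Scripts wizard can add.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyAppDb;Trusted_Connection=yes"
)

QUERY = """
SELECT SCHEMA_NAME(p.schema_id) AS schema_name,
       p.name,
       OBJECT_DEFINITION(p.object_id) AS definition
FROM sys.procedures AS p
ORDER BY schema_name, p.name
"""

with pyodbc.connect(CONN_STR) as cn:
    rows = cn.cursor().execute(QUERY).fetchall()

with open("all_procedures.sql", "w", encoding="utf-8") as out:
    for schema_name, name, definition in rows:
        if definition is None:          # encrypted procedures have no definition
            print(f"-- skipped encrypted procedure {schema_name}.{name}")
            continue
        out.write(f"-- {schema_name}.{name}\n")
        out.write(definition.rstrip() + "\nGO\n\n")

print(f"Wrote {len(rows)} procedures to all_procedures.sql")
```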

Why can't Pentaho Data Integration read a new field on a table?

I am trying to copy records from a few tables into a new one (report_table). After creating the transformation in Kettle, I needed to add a new field to report_table, but once I add the field, Kettle won't show it. When I try "Enter field mapping", it does not show up under "Target field". Why can't Kettle read the field?
There's nothing special about the setup. I just use an "Input Table" step with a query that selects from my source table, then an "Output Table" step, with a hop between the input and output steps. Then when I choose "Enter field mapping", Kettle can't read all the fields from the target table.
Any ideas?
Clear the database cache. PDI caches the database structure, and also the hop metadata.
Also, I've seen bugs in 5.0.x where it gets the structure of the metadata into its head and will not change it until you restart Spoon, so try that too! (Note this only happens occasionally in my experience, and I work with PDI all day every day.)