I have set up Apache Guacamole with AD authentication and PostgreSQL; everything seems to be configured correctly, but I can't see any button to create a connection. Does anyone have any idea how to troubleshoot this?
I initialized the database.
The logs show a successful connection.
I used a master DB account in the property files to rule out any permission issue.
The tables are in place, yet I still can't see any connections.
database tables:
Schema | Name | Type | Owner
--------+---------------------------------------+-------+--------
public | guacamole_connection | table | master
public | guacamole_connection_attribute | table | master
public | guacamole_connection_group | table | master
public | guacamole_connection_group_attribute | table | master
public | guacamole_connection_group_permission | table | master
public | guacamole_connection_history | table | master
public | guacamole_connection_parameter | table | master
public | guacamole_connection_permission | table | master
public | guacamole_entity | table | master
public | guacamole_sharing_profile | table | master
public | guacamole_sharing_profile_attribute | table | master
public | guacamole_sharing_profile_parameter | table | master
public | guacamole_sharing_profile_permission | table | master
public | guacamole_system_permission | table | master
public | guacamole_user | table | master
public | guacamole_user_attribute | table | master
public | guacamole_user_group | table | master
public | guacamole_user_group_attribute | table | master
public | guacamole_user_group_member | table | master
public | guacamole_user_group_permission | table | master
public | guacamole_user_history | table | master
public | guacamole_user_password_history | table | master
public | guacamole_user_permission | table | master
(23 rows)
There is no button available to create a new connection.
If you have imported all the SQL files that ship with Guacamole, an admin user is created for you by default with the following credentials:
username: guacadmin
password: guacadmin
After you log in, you will see an empty connections screen (as you showed above).
To create a new connection, click on the username at the top right (a submenu will appear), then click Settings.
You will see several tabs here. To create a connection, go to the Connections tab and click the 'New Connection' button.
I apologise in advance because I have no idea how to structure this question.
I have the following tables:
Sessions:
+----------+---------+
| login | host |
+----------+---------+
| breilly | node001 |
+----------+---------+
| pparker | node003 |
+----------+---------+
| jjameson | node004 |
+----------+---------+
| jjameson | node012 |
+----------+---------+
Userlist:
+----------+----------------+------------------+
| login | primary_server | secondary_server |
+----------+----------------+------------------+
| breilly | node001 | node010 |
+----------+----------------+------------------+
| pparker | node002 | node003 |
+----------+----------------+------------------+
| jjameson | node003 | node004 |
+----------+----------------+------------------+
What kind of SQL query should I perform so I can get a table like this?:
+----------+---------+------------+
| login | Host | Server |
+----------+---------+------------+
| jjameson | node004 | Secondary |
+----------+---------+------------+
| jjameson | node012 | Wrong Node |
+----------+---------+------------+
| pparker | node003 | Secondary |
+----------+---------+------------+
| breilly | node001 | Primary |
+----------+---------+------------+
Currently I'm just using Go with a bunch of structs/hashmaps to generate this.
I am planning to migrate the users/sessions to an in-memory SQLite database, but I can't seem to wrap my head around a query that produces this sort of table.
The Server column is based on whether the user is logged on to their primary machine, their secondary machine, or the wrong machine.
I've put this in SQL Fiddle as well
Use CASE logic with a LEFT JOIN:
select s.*,
       (case when s.host = ul.primary_server then 'Primary'
             when s.host = ul.secondary_server then 'Secondary'
             else 'Wrong Node'
        end) as server
from sessions s left join
     userlist ul
     on s.login = ul.login;
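Since you mention migrating to an in-memory SQLite database, here is a minimal sketch of the same CASE/LEFT JOIN query run through Python's sqlite3 module (the table and column names are taken from your example; adapt them to your real schema):

```python
import sqlite3

# In-memory database mirroring the Sessions and Userlist tables above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (login TEXT, host TEXT);
CREATE TABLE userlist (login TEXT, primary_server TEXT, secondary_server TEXT);
INSERT INTO sessions VALUES
  ('breilly', 'node001'), ('pparker', 'node003'),
  ('jjameson', 'node004'), ('jjameson', 'node012');
INSERT INTO userlist VALUES
  ('breilly', 'node001', 'node010'),
  ('pparker', 'node002', 'node003'),
  ('jjameson', 'node003', 'node004');
""")

# The CASE expression classifies each session's host against that
# user's primary and secondary servers.
rows = conn.execute("""
SELECT s.login, s.host,
       CASE WHEN s.host = ul.primary_server   THEN 'Primary'
            WHEN s.host = ul.secondary_server THEN 'Secondary'
            ELSE 'Wrong Node'
       END AS server
FROM sessions s
LEFT JOIN userlist ul ON s.login = ul.login
""").fetchall()

for login, host, server in rows:
    print(login, host, server)
```

The LEFT JOIN keeps sessions for logins missing from userlist; those would fall through to 'Wrong Node' since both comparisons against NULL fail.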
I was creating an audit trail setup: some_user can edit a table and an audit table logs the change, but some_user must not be able to edit the audit table itself.
I have the following table, items, which looks like this:
| date | id | s_px | c_px | fee |
+------------+-----+----------+----------+-----+
| 2015-01-01 | 001 | 5355.00 | 5355.00 | 2 |
| 2015-01-01 | 002 | 13240.00 | 13240.00 | 3 |
| 2015-01-01 | 003 | 5840.00 | 5840.00 | 1 |
| 2015-01-01 | 004 | 20.55 | 20.59 | 5 |
| 2015-01-01 | 005 | 64.42 | 64.42 | 6 |
I created an audit_tb to track any changes to the items table, with a trigger that calls a function audit_function() on any INSERT, UPDATE, or DELETE.
audit_function() inserts every change to the items table into audit_tb, following
https://wiki.postgresql.org/wiki/Audit_trigger
Everything works fine when I am a power user with full access to items and audit_tb. The problem is that power_user can also modify audit_tb.
So I created some_user, which can change items and can only SELECT from audit_tb.
The problem is that audit_function() then cannot INSERT, since some_user is limited to SELECT only:
ERROR: permission denied for relation audit_tb
You need to grant permission to the user:
GRANT ALL PRIVILEGES ON DATABASE mydb TO admin_user;
Create audit_function() as the superuser with the option SECURITY DEFINER. If you do that, the function will run with the privileges of the superuser (= the owner), not with the privileges of the user that triggered the function.
The title may not be that helpful, but what I am trying to do is this.
For simplicity's sake I have two tables, one called LOGS and another called LOG CONTROLS.
In LOGS I have a log event column that is automatically populated by imported information. In LOG CONTROLS I have a manually entered list of log events (to match the ones coming in); this table assigns them ID numbers and holds other details about each event.
What I need is a column in the LOGS table that looks at the log event, matches it against the LOG CONTROLS table, and assigns the matching ID in the LOGS table.
I have seen a few methods of changing a column based on information in other tables, but these all seem to be one-way checks, i.e. 'if ID = X, change to a value from the other table', whereas what I need is 'if the value matches X in the other table, change the ID field to Y from that table'.
Below is a mock up of the tables.
+----+-----------+----------+------------+
| ID | Date_Time | Event | Control ID|
+----+-----------+----------+------------+
| 1 | 0/0/0 | Shutdown | |
| 2 | 0/0/0 | Start up | |
| 3 | 0/0/0 | Error | |
| 4 | 0/0/0 | Info | |
| 5 | 0/0/0 | Shutdown | |
| 6 | 0/0/0 | Error | |
+----+-----------+----------+------------+
+-------------------+----------+--------+-------+
| Control ID | Event | Export | Flag |
+-------------------+----------+--------+-------+
| 1 | Shutdown | TRUE | TRUE |
| 2 | Start up | TRUE | FALSE |
| 3 | Error | TRUE | TRUE |
| 4 | Info | TRUE | FALSE |
+-------------------+----------+--------+-------+
So I need the Control ID in the first table to match the control ID from the second table depending on what the event was.
I hope this makes sense.
Any help or advice would be greatly appreciated.
From your description, it seems that a simple UPDATE statement is all you need:
update logs
set control_id = c.control_id
from log_controls as c
where c.event = logs.event;
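That UPDATE ... FROM form is PostgreSQL syntax. A minimal, runnable sketch of the same lookup-and-assign step via Python's sqlite3 module follows; it uses a correlated subquery, which is the portable SQLite spelling (table and column names are taken from the mock-up above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logs (id INTEGER, event TEXT, control_id INTEGER);
CREATE TABLE log_controls (control_id INTEGER, event TEXT);
INSERT INTO logs VALUES (1, 'Shutdown', NULL), (2, 'Start up', NULL), (3, 'Error', NULL);
INSERT INTO log_controls VALUES (1, 'Shutdown'), (2, 'Start up'), (3, 'Error'), (4, 'Info');
""")

# Correlated subquery: for each row in logs, look up the matching
# control_id by event name. Events with no match would be set to NULL.
conn.execute("""
UPDATE logs
SET control_id = (SELECT c.control_id
                  FROM log_controls AS c
                  WHERE c.event = logs.event)
""")

rows = conn.execute("SELECT id, event, control_id FROM logs ORDER BY id").fetchall()
for row in rows:
    print(row)
```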
I have QV report with table that looks like this:
+---------+--------+---------------+------+-------+
| HOST | OBJECT | SPECIFICATION | COPY | LAST |
+---------+--------+---------------+------+-------+
| host001 | obj01 | spec01 | c1 | 15:55 |
| host002 | obj02 | spec02 | c2 | 14:30 |
| host003 | - | - | - | - |
| host004 | - | - | - | - |
+---------+--------+---------------+------+-------+
Now I have another small table:
spec1
host1
host4
All I need is to connect these tables in the loading script in this way:
the first row is a specification and all the others are hosts. If there is a host with the name from the second row of the second table (host1) that has the specification from the first row, then I need to copy all the other values from that host's row (host1) into the rows of the other hosts from the second table (host4), e.g.:
+---------+--------+---------------+------+-------+
| HOST | OBJECT | SPECIFICATION | COPY | LAST |
+---------+--------+---------------+------+-------+
| host001 | obj01 | spec01 | c1 | 15:55 |
| host002 | obj02 | spec02 | c2 | 14:30 |
| host003 | - | - | - | - |
| host004 | obj01 | spec01 | c1 | 15:55 |
+---------+--------+---------------+------+-------+
I have several tables like the second one and I need to connect all of them. Of course, there can be multiple rows with the same host, the same specification, etc. in the first table. The "-" sign is a null() value, and the second table's layout can be changed.
I tried all the JOINs, and now I'm trying to iterate over the whole table and compare, but I'm new to QV and I'm missing some SQL features like UPDATE.
I appreciate all your help.
Here's a script; it's not perfect and there is probably a neater solution(!), but it works for your scenario.
I rearranged your "Copy Table" so that it has three columns:
HOST SPECIFICATION TARGET_HOST
You could then repeat rows for the additional hosts that you wish to copy to as follows:
HOST SPECIFICATION TARGET_HOST
host001 spec01 host004
host001 spec01 host003
The script (I included some dummy data so you can try it out):
Source_Data:
LOAD * INLINE [
HOST, OBJECT, SPECIFICATION, COPY, LAST
host001, obj01, spec01 , c1, 15:55
host002, obj02, spec02 , c2, 14:30
host003
host004
];
Copy_Table:
LOAD * INLINE [
HOST, SPECIFICATION, TARGET_HOST
host001, spec01, host004
];
Link_Table:
NOCONCATENATE
LOAD
HOST & SPECIFICATION as %key,
TARGET_HOST
RESIDENT Copy_Table;
DROP TABLE Copy_Table;
LEFT JOIN (Link_Table)
LOAD
HOST & SPECIFICATION as %key,
HOST, OBJECT, SPECIFICATION, COPY, LAST
;
LOAD
*
RESIDENT Source_Data;
Complete_Data:
NOCONCATENATE LOAD
TARGET_HOST as HOST,
OBJECT, SPECIFICATION, COPY, LAST
RESIDENT Link_Table;
CONCATENATE (Complete_Data)
LOAD
*
RESIDENT Source_Data
WHERE NOT Exists(TARGET_HOST,HOST & SPECIFICATION); // old condition: WHERE NOT Exists(TARGET_HOST,HOST);
DROP TABLES Source_Data, Link_Table;
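Outside the QlikView load script, the copy logic itself is easy to sanity-check. A small Python sketch using hypothetical in-memory dicts as the tables (names and values taken from the example above), so you can verify the expected result before wiring it into the script:

```python
# Source table keyed by host; empty dict models the null() rows.
source = {
    "host001": {"OBJECT": "obj01", "SPECIFICATION": "spec01", "COPY": "c1", "LAST": "15:55"},
    "host002": {"OBJECT": "obj02", "SPECIFICATION": "spec02", "COPY": "c2", "LAST": "14:30"},
    "host003": {},
    "host004": {},
}
# Rearranged copy table: (HOST, SPECIFICATION, TARGET_HOST), one row per target.
copy_table = [("host001", "spec01", "host004")]

for host, spec, target in copy_table:
    row = source.get(host)
    # Copy only when the source host actually carries that specification.
    if row and row.get("SPECIFICATION") == spec:
        source[target] = dict(row)

print(source["host004"])
```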
I would like to know how to do this.
For example:
I have c:/temp/.
Inside this temp folder I have various files and folders in a nested structure.
What would be the easiest way to gather all file names inside temp and its subdirectories and then insert them into a table?
I am planning for the table structure to be simple.
It will have:
Primary key
Path and filename
CreatedDate
ModifiedDate
DeleteDate
So the table would look something like this:
Key | PathFilename | Modified | Created | Delete |
1 | c:\temp\fil7.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
2 | c:\temp\fi5e.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
3 | c:\temp\1ile.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
4 | c:\temp\2ile.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
5 | c:\temp\3ile.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
6 | c:\temp\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
7 | c:\temp\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
8 | c:\temp\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
9 | c:\temp\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
10 | c:\temp\folde1\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
11 | c:\temp\folde2\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
12 | c:\temp\folde4\file.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
13 | c:\temp\folder\fil5.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
14 | c:\temp\folder\fil6.txt | 2013/02/01 | 2013/02/01 | 1900/01/01|
Can I do this with an SSIS job? Or is there any other solution that can accomplish this task?
Is there a tutorial on how to do this step by step?
Thank you.
PS: I have a FileSystemWatcher VB.NET program that watches for created and modified files,
but for the initial start, I would like to fill the table with the files that already exist. I don't know whether FileSystemWatcher can handle this initial task; can it?
I would create a variable, FolderSource, of type String and assign it your value of c:\temp.
While you could do all of this in a single Script Task (an object on the Control Flow), I am going to describe how to do it with a Data Flow Task, as that might be a better construct for learning how SSIS generally works. Drag a Data Flow Task onto the canvas and double-click it.
Inside the Data Flow Task, add a Script Component. Add a reference to the variable FolderSource as ReadOnly. In the Inputs and Outputs section, rename the output buffer to FS and add the columns below; the data types are 4-byte integer, string 255, and date (DT_DATE).
// Requires "using System.IO;" at the top of the Script Component.
public override void CreateNewOutputRows()
{
    string src = Variables.FolderSource;
    int key = 1;
    foreach (string currentFile in Directory.EnumerateFiles(src, "*.*", SearchOption.AllDirectories))
    {
        FileInfo fileInfo = new FileInfo(currentFile);
        FSBuffer.AddRow();
        FSBuffer.Key = key++;
        FSBuffer.PathFilename = currentFile;
        // UTC-flavored properties (e.g. CreationTimeUtc) are also available.
        FSBuffer.Created = fileInfo.CreationTime;
        FSBuffer.Modified = fileInfo.LastWriteTime;
        FSBuffer.Delete = new DateTime(1900, 1, 1);
    }
}
That'll get the data streaming down your data flow. If you need to do anything with the data, you would add various components now.
Once you've manipulated the rows of data, you'll need to land them somewhere. There is a host of destinations available, but you'll probably want the OLE DB Destination component. Connect the output of the Script Component, or any subsequent component(s) you used, to the destination. Double-click it to specify the database connection, the table name, and the mapping of columns, in that order.
You probably don't have an OLE DB Connection Manager defined, so click the Connection Manager button in the destination and create a new one. After creating the Connection Manager, select the table where the data should reside. Then, on the Columns tab, map the source columns (from the Script Component) to the destination table's columns.
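If SSIS isn't a hard requirement, the initial load can also be sketched in a few lines of a scripting language. Here is a minimal Python version that walks a directory tree and inserts one row per file; it targets an in-memory SQLite table purely for illustration (the real destination table and column names would differ), and it builds a small temp tree so it is self-contained:

```python
import os
import sqlite3
import tempfile
from datetime import datetime

def load_files(root, conn):
    """Walk root recursively and insert one row per file found."""
    conn.execute("""CREATE TABLE IF NOT EXISTS files (
        key INTEGER PRIMARY KEY,
        path_filename TEXT,
        created TEXT,
        modified TEXT,
        delete_date TEXT)""")
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            stat = os.stat(full)
            # Note: st_ctime is creation time on Windows, but the
            # inode change time on Unix-like systems.
            conn.execute(
                "INSERT INTO files (path_filename, created, modified, delete_date) "
                "VALUES (?, ?, ?, ?)",
                (full,
                 datetime.fromtimestamp(stat.st_ctime).isoformat(),
                 datetime.fromtimestamp(stat.st_mtime).isoformat(),
                 "1900-01-01"))
    conn.commit()

# Demo: build a tiny temp tree, then load it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "folder"))
for rel in ("file.txt", os.path.join("folder", "fil5.txt")):
    with open(os.path.join(root, rel), "w") as f:
        f.write("x")

conn = sqlite3.connect(":memory:")
load_files(root, conn)
count = conn.execute("SELECT COUNT(*) FROM files").fetchone()[0]
print(count, "files loaded")
```

This mirrors what the Script Component does: enumerate files recursively, read timestamps, and emit one row per file, with the delete date defaulted to 1900-01-01 as in the question's table.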