For the life of me, I cannot figure out where Pentaho stores user-created CDE Dashboard files in the file system. I am using the Community Edition, and I assumed that all the files would be stored in /biserver-ce/pentaho-solutions. When I sign into the Pentaho User Console and select Browse Files, the folders I see do not match what is in the pentaho-solutions directory. For example, there is a "Steel Wheels" folder in the Browse Files pane, but I cannot find it in the Pentaho directory. Similarly, if I create a folder and a CDE Dashboard within it, I cannot find it in the Pentaho directory. I have run find / -name "*.wcdf" and it does not find the CDE Dashboard I created.
I have tried a couple of tutorials where I manually create the CDF files (index.properties, index.xml, the .xcdf file and the .html file), but when I place them in a folder under the /biserver-ce/pentaho-solutions directory (and, of course, stop and start the Pentaho server), the dashboard does not show up in the Browse Files pane of the Pentaho User Console.
The files must be somewhere; likewise, if we manually create the files per the tutorials, there must be somewhere they can be placed for the PUC to pick them up. Can anyone please help? Sadly, I have already spent hours on this and I am not sure I will be able to figure it out without some help.
AFAIK, it depends on the Pentaho version. From Pentaho 5 through the current Pentaho 7, the Jackrabbit JCR repository is used to 'emulate' a file system, with the option of persisting the whole repository in different storage back ends. Most Pentaho server objects, such as CDF dashboards, data source definitions, etc., are stored in this repository. In a default Pentaho installation, JCR uses a database named 'jackrabbit' in your DBMS to persist its state, but the actual repository location can be changed in the configuration files under the pentaho-solutions/system/jackrabbit/ directory.
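If you want to confirm that the content really does live in that database rather than on disk, a quick read-only check (a sketch only, assuming MySQL and the default 'jackrabbit' database name mentioned above; adapt for your DBMS) is to list the tables Jackrabbit created there:

-- Dashboards are stored inside these tables as serialized node bundles,
-- not as .wcdf files on disk, which is why a filesystem search finds nothing.
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'jackrabbit';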
You cannot directly access the dashboard files outside the Pentaho server. If you want to access these files externally, you need to download them: you can download the dashboard.wcdf, dashboard.cda and dashboard.cdfde files from your Pentaho User Console. Likewise, if you want to send your dashboard to someone else, these files can be uploaded again through the Pentaho User Console.
I'm using Hitachi Vantara Pentaho BI Community Edition v9.1 on a reporting server.
The reporting server stopped working properly because the Generated Content folder was overflowing with auto-generated report files. There were so many files that the UI was unable to load the folder's contents as individual files, so as soon as the UI loaded with minimal functionality after several hours, I selected the complete folder and moved it to the trash.
Since then, I have been experiencing an error when trying to permanently delete the Generated Content folder from the trash:
The error message is:
"You do not have permission to delete this file. Contact your administrator for assistance."
The issue is that I already have administrator privileges.
I can't move the problematic folder back to its original location and delete the individual files first, because the reporting server stops working as soon as it completes that task, due to the huge number of files.
Any help is welcome.
I am attempting to grant access to Parquet files in a Gen2 Data Lake container. I have the Owner RBAC role on the container but would prefer to limit other users' access within the container.
My Query is very simple:
SELECT TOP 100 *
FROM OPENROWSET(
    BULK 'https://aztsworddataaipocacldl.dfs.core.windows.net/pocacl/Top/Sub/part-00006-c62926ba-c530-4ad8-87d1-cf38c67a2da3-c000.snappy.parquet',
    FORMAT = 'PARQUET'
) AS [result]
When I run this I have no problems connecting. I have attempted to add ACL rights onto the files (and of course the containing folders 'Top' and 'Sub').
I've given RWX on the 'Top' folder using Storage Explorer, including the default ACL, so that it cascades to the 'Sub' folder and the Parquet files as I add them.
When my colleague attempts to run the SQL script, they get the error message: Failed to execute query. Error: File 'https://aztsworddataaipocacldl.dfs.core.windows.net/pocacl/Top/Sub/part-00006-c62926ba-c530-4ad8-87d1-cf38c67a2da3-c000.snappy.parquet' cannot be opened because it does not exist or it is used by another process.
NB: similar results are also experienced in Spark, but with a 403 instead.
SQL on-demand provides a link to the following help file after the error; it suggests:
If your query fails with the error saying 'File cannot be opened because it does not exist or it is used by another process' and you're sure both file exist and it's not used by another process it means SQL on-demand can't access the file. This problem usually happens because your Azure Active Directory identity doesn't have rights to access the file. By default, SQL on-demand is trying to access the file using your Azure Active Directory identity. To resolve this issue, you need to have proper rights to access the file. Easiest way is to grant yourself 'Storage Blob Data Contributor' role on the storage account you're trying to query.
I don't wish to grant Storage Blob Data Contributor or Storage Blob Data Reader as this gives access to every file on the container and not just those I want end users to be able to query. We have found the same experience occurs for SSMS connecting to parquet external tables.
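For context, a minimal sketch of what such an external table over the same container might look like in a serverless SQL pool (all object names and the single column are hypothetical; without an explicit credential the table is queried under the caller's Azure Active Directory identity, so it runs into the same ACL checks):

-- Hypothetical objects pointing at the same container as the OPENROWSET query above.
CREATE EXTERNAL DATA SOURCE pocacl_src
WITH (LOCATION = 'https://aztsworddataaipocacldl.dfs.core.windows.net/pocacl');

CREATE EXTERNAL FILE FORMAT parquet_ff
WITH (FORMAT_TYPE = PARQUET);

-- The column list is a placeholder and must be replaced with the real Parquet schema.
CREATE EXTERNAL TABLE dbo.TopSubParquet (
    SomeColumn VARCHAR(100)
)
WITH (
    LOCATION = 'Top/Sub/',
    DATA_SOURCE = pocacl_src,
    FILE_FORMAT = parquet_ff
);

SELECT TOP 100 * FROM dbo.TopSubParquet;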
So then in parts:
Is this the correct pattern using ACL to grant access, or should I use another method?
Are there settings on the Storage Account or within my query/notebook that I should be enabling to support ACL?*
Has ACL been implemented on Synapse Workspace to date given that we're still in preview?
*I have resisted pasting my entire settings as I really have no idea what is relevant and what is entirely irrelevant to this issue, but of course I can supply them.
It would appear that the ACL feature was not working correctly in Preview for Azure Synapse Analytics.
I have now managed to get it to work. At present, I see that once Read|Execute is granted on a folder, it allows access to the files contained within that folder and its subfolders. Access is available even when no specific ACL entry is set on a file in a subfolder. This is not quite what I expected; however, it provides enough for me to proceed: granting access only to the Gold folder lets me separate the files I want to let users query from the working files that I want to keep hidden.
When you assign an ACL to a folder, it is not propagated recursively to the files already inside the folder. Only new files inherit from the folder's default ACL.
You can see this here
Go to Azure Storage Explorer, change the ACL permissions on the root folder, then right-click on your storage and click "Propagate Access Control Lists".
I have a Windows Forms application that requires users to log in to access the information. I have created a local compact database file in which the credentials are stored. I added the database file to my folder, but when I open my application and try to log in, it tells me that it cannot find the database file.
Should the file be stored in a different folder, or do I need to install an instance of SQL Server on the user's computer?
This is my first deployment, so I am not sure how to go about it. I have done some research on the subject, but what I found does not seem related to my issue. The help section of InstallShield was not clear either.
I am looking for some resources on how to accomplish this.
I figured out the issue: for this to work, all files, including the database file, need to be placed under the user profile folder.
I've got a DotNetNuke system (v 5.6) that's hosting several different portals, and I'd like to move one of them to another hosting provider. What's the easiest way to do this?
Every web site I find that claims to explain how to move a DotNetNuke site essentially says "Copy the entire database over to the new system." That's great if you've only got one portal in the database, but I've got a dozen of them. I only want to move one portal, not all of them.
Exporting the site to a .template is another popular suggestion. This exports the structure of the site (all the tab definitions, for example), but it doesn't include any of the actual HTML content. As such, that's essentially worthless.
There must be a reasonable way to do this short of trying to strip one individual portal's data out of every single DNN table. Right?
When you export a site template, you can include the content of the site, as well (for the modules that support portability, which includes the standard HTML module). This is how the default site template has all of its content. When you do this, there will be a .template.resources file that you'll need, as well as the .template file.
The other option is to do a full backup and restore, and then remove the other sites once you've restored. If you have significant content in a module that doesn't support portability, I think this will be your best bet.
FYI, I did find a solution from someone over on the DotNetNuke forums.
Create a 2nd version of that install, then delete all the other portals. Move the install with the one portal. We've done this several times with installs with lots of portals and it works just fine. Yeah, there's still some noise left in the db, but it's a quick and effective way of doing things.
Edit: note that this will give you an install with 1 portal. You can't detach a portal from one install and reattach it to an existing install (well, you can, but basically you have to export the portal as a template and that isn't 100%).
This is the approach I took, and sure enough, it works.
In a nutshell:
Mirror the files for the web site to another server.
Mirror the DNN database to another server.
Log in a Host on the new setup and delete all the portals but the one you want to migrate.
Delete any module definitions that are not in use by the remaining portal.
Open up your favorite SQL tool and delete any entries in the Users and UserProfile tables that no longer have a matching row in the UserPortals table (a sketch of that SQL is included at the end of this answer). DNN does not remove these by default, which is frustrating.
Hop into Windows Explorer and delete all of the Portal folders you no longer need (i.e. /Portal/1, /Portal/2, etc.).
Back up the database using Enterprise Manager to create a .bak file.
Make a .zip of the entire DNN installation folder.
You now have a .bak that contains the database and a .zip that contains the files. Send those off to the new hosting company, and you should be all set. Just make sure to update your web.config to set the connection string properly to point to the new database server at the new hosting company.
It's just that easy. ;)
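For the orphaned-user cleanup step above, here is a minimal sketch, assuming the default DNN table names (Users, UserProfile, UserPortals) with no object qualifier or prefix; back up the database first and check the names against your own schema:

-- Delete profile rows and user rows that no longer belong to any portal.
-- IsSuperUser is excluded so host accounts (which have no UserPortals row) are kept.
-- Other tables (e.g. UserRoles) may also reference these users; adjust as needed.
DELETE FROM UserProfile
WHERE UserID IN (
    SELECT u.UserID
    FROM Users u
    WHERE u.IsSuperUser = 0
      AND NOT EXISTS (SELECT 1 FROM UserPortals up WHERE up.UserID = u.UserID)
);

DELETE FROM Users
WHERE IsSuperUser = 0
  AND NOT EXISTS (SELECT 1 FROM UserPortals up WHERE up.UserID = Users.UserID);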
I'm working on an "upload documents" feature where different customers can upload required documents, and an employer should be able to view all of the documents uploaded by the customers. Currently, on my local system, I can upload the documents and the uploaded files are saved to the "inetpub" folder. But in order to provide the "upload documents" feature in a production environment, what should the path be? Where can these documents be saved?
Any suggestion is appreciated.
The files need to be accessible to users for download somehow. This is tricky, however, as it opens you up to security issues if someone uploads an executable file and then requests it.
What I normally do is keep the file information (name, type, etc.) in a database. Then, I name the file on disk with a consistent naming structure, such as UPLOAD_ASSET_123456, with no file extension. I also keep them out of the web root.
Then, to retrieve the file from the web end, have a script that accepts an ID and echoes the file contents.
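For what it's worth, a minimal sketch of the kind of metadata table described above (SQL Server syntax; the table and column names are just illustrative):

-- One row per uploaded file; the on-disk name is built from the id
-- (e.g. UPLOAD_ASSET_123456), stored outside the web root with no extension.
CREATE TABLE UploadAssets (
    AssetId      INT IDENTITY(1,1) PRIMARY KEY,  -- used to build UPLOAD_ASSET_<id>
    OriginalName NVARCHAR(255) NOT NULL,         -- file name as supplied by the uploader
    ContentType  NVARCHAR(100) NOT NULL,         -- MIME type to send when the file is served
    UploadedBy   INT           NOT NULL,         -- id of the customer who uploaded it
    UploadedAt   DATETIME      NOT NULL DEFAULT GETDATE()
);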