use dfs does not work in later versions of Drill on the Drill web page - SQL

When using the web page served by Drill at localhost:8047/query (the default), the following commands fail:
use dfs.mydfs;
and then:
show files;
Then I receive this error:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: SHOW FILES is supported in workspace type schema only. Schema [] is not a workspace schema. [Error Id: 872e6708-0aaa-480e-af32-9aaf6f84de2b on 172.28.128.1:31010]
If I enter the same commands in the terminal, however, they work correctly.
I've also found that this affects 1.6 and above, and that this behaviour is not seen in 1.5 and below.
This command works in both the web and command line/terminal versions:
show files in dfs.workspace;
I have configured multiple types of dfs, and I have tried both OS X and Windows 10; the issue is the same on both.
I tried looking through the Drill JIRA to see if this was registered as a bug, and I looked briefly through the release notes as well.
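A plausible explanation (an assumption on my part, not confirmed in the post): the web console submits each query in its own short-lived session, so the schema set by use dfs.mydfs does not carry over to the next submission, while a terminal session persists. Fully qualifying the workspace avoids depending on session state, e.g.:
-- submitted as a single query in the web console at localhost:8047/query
-- dfs.mydfs is the workspace from the question; somefile.json is a placeholder
SHOW FILES IN dfs.mydfs;
SELECT * FROM dfs.mydfs.`somefile.json` LIMIT 10;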

Related

Unable to get image details : Environment version Autosave_(date)T(time)Z_******** provided in request doesn't match environ

On an AzureML batch endpoint, I've recently been hitting the following error:
Unable to get image details : Environment version Autosave_(date)T(time)Z_******** provided in request doesn't match environ.
when I set up the batch endpoint with a YAML config:
environment: azureml:env-name:env-version
So AzureML creates and builds the environment with the version I specify in env-version, which is just a number (in my case, 3).
Then, for some weird reason, AzureML creates an extra environment version called Autosave_(date)T(time)Z_********, which is not built but is based on the one just created, and it becomes the latest version of that environment.
In summary, instead of looking for the version I specified (env-name:3), AzureML seems to look for env-name:Autosave_(date)T(time)Z_******** and then throws the error message mentioned above.
I found the problem: when creating an environment from a YAML specification file, one of my conda dependencies was cmake, which I needed to allow installation of another Python module. The Docker image is exactly the same as that of a previously created environment.
Removing the cmake dependency from the YAML file eliminated the issue, so the workaround is to install it using a Dockerfile instead.
The error message was very misleading to start with, but I got there in the end after understanding that AzureML reuses a cached image, based on a hash value computed from the environment definition, according to this.
For that reason, the automatically created Autosave Docker image references that same build, which only happens once, when the first job is sent.
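As a rough sketch of that workaround (the file names, base image, and module are illustrative assumptions, not from the question): drop cmake from the conda spec and install it in the Dockerfile the environment builds from.
# conda.yml - cmake removed from the dependency list
name: env-name
dependencies:
  - python=3.9
  - pip:
      - some-module        # hypothetical module that needed cmake to build
# Dockerfile - install cmake at the image layer instead
FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
RUN apt-get update && apt-get install -y cmake
Keeping build tools like cmake out of the conda specification is what eliminated the Autosave mismatch in this case.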

Connecting R to Snowflake through ODBC

I have been consistently receiving the same error while trying to create a connection between R and Snowflake using an ODBC driver. The error that I'm receiving says:
Error during wrapup: nanodbc/nanodbc.cpp:1021: 00000: [unixODBC][Driver Manager]Can't open lib 'Snowflake' : file not found
Error: no more error handlers available (recursive errors?); invoking 'abort' restart
The instructions given by Snowflake for connecting a driver are rather cut and dried, but provide little insight on what to do with errors. Can anyone point me in the right direction given this error?
This smells like a configuration issue. Your driver manager, unixODBC, isn't able to locate the Snowflake driver. (The Snowflake documentation says to use iODBC on macOS. Don't; it won't work with the odbc R package.)
Check that unixODBC is findable. Run odbcinst -j in a terminal. If it works, you will know that you have unixODBC properly installed, and it will give you the paths to your various configuration files.
On to checking configuration. This is the documentation for configuring on Linux using unixODBC. If you are using macOS the same general instructions apply, but the file extensions change from .so to .dylib. Since it's saying it can't find the file, I'm thinking that using full paths might resolve this for you. It's also possible that there's some issue with how you are specifying the driver.
Also, it looks like it's searching for a file named 'Snowflake'. I'm thinking you've got Driver=Snowflake somewhere in one of your config files. Best change that to Driver=<path>/<to>/<driver>/libSnowflake.dylib (or .so if you're on Linux). Do this in all the places where you have Driver=Snowflake.
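For illustration, a DSN entry in odbc.ini might end up looking like this (the driver path is an assumption based on a typical macOS install, and <account> is a placeholder; point it at wherever libSnowflake.dylib actually lives on your machine):
[snowflake]
Driver = /opt/snowflake/snowflakeodbc/lib/universal/libSnowflake.dylib
Server = <account>.snowflakecomputing.com
After editing, rerun odbcinst -j to confirm which configuration files unixODBC is actually reading.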

Docker build always fails with error hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1) Windows Containers

Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus on the source of the problem.
That said, these two lines alone (without anything more) reproduce the problem.
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1) (shown as Función incorrecta on a Spanish-locale system).
Windows 10 Pro 1909 (but it happened too in 1903)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 issue, some have found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
Disabling that driver let me run a docker build with Windows containers for the first time in almost a year.
In my case Box Sync was the one using that driver.
EDIT: #GustavoTM found that pCloud causes the same problem.
EDIT2: #VonC noticed that some people in the GitHub issue have solved it by deleting this other file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box; I only need to rename that file.
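A minimal sketch of that rename from an elevated PowerShell prompt (the .bak suffix is my own choice; any name works):
# run as Administrator; the filter driver won't load after the reboot
Rename-Item C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak
Restart-Computer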
This is still an issue (still open) with Win10.
It looks like uninstalling cloud storage providers that use file system filters (Dropbox, Box, etc.) is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
In my case the problem was similar, but the file cbfs6.sys had been left behind by an uninstalled application, Jungle Disk, somewhere in the folder c:\Program files\Jungle disk .... It's part of the Callback File System signed by EldoS Corporation.
The folder could only be renamed, not deleted directly. I could then delete it immediately after a PC restart, before running Docker; it could presumably also be deleted during a Docker service restart.

Installing Google Adwords Api Library (using docker)

Google's documentation on installing the library, found here: https://github.com/googleads/googleads-php-lib/blob/master/README.md#getting-started, instructs us to copy adsapi_php.ini, as constructed here: https://github.com/googleads/googleads-php-lib/blob/master/examples/AdWords/adsapi_php.ini, to your home directory.
I filled out the necessary variables in the .ini. I am using Docker, so I placed this file inside my container at /var/www/home/node/. When I run the command composer require googleads/googleads-php-lib, I get the following error in the command prompt:
Your requirements could not be resolved to an installable set of packages.
Problem 1
- Installation request for googleads/googleads-php-lib ^37.1 -> satisfiable by googleads/googleads-php-lib[37.1.0].
- googleads/googleads-php-lib 37.1.0 requires ext-soap * -> the requested PHP extension soap is missing from your system.
To enable extensions, verify that they are enabled in your .ini files:
- /usr/local/etc/php/php.ini
- /usr/local/etc/php/conf.d/adsapi_php.ini
- /usr/local/etc/php/conf.d/docker-php-ext-pdo_pgsql.ini
- /usr/local/etc/php/conf.d/docker-php-ext-sodium.ini
- /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
You can also run `php --ini` inside terminal to see which files are used by PHP in CLI mode.
Installation failed, reverting ./composer.json to its original content.
I assumed my adsapi_php.ini was simply in the wrong location, since it contains what I believe is necessary to avoid the above issue, but I have tried placing it in several different locations and I always get the same error.
Any help would be appreciated!
Just try editing php.ini inside Docker (docker exec -it {container} bash) and enable the soap extension there.
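Note that the adsapi_php.ini placement is a red herring: the composer error is about the PHP soap extension, not the Ads config. Since the /usr/local/etc/php/conf.d/docker-php-ext-*.ini paths in the error suggest an official PHP image, a more durable sketch (the base image tag is an assumption) is to compile the extension into the image:
FROM php:7.3-fpm
# the soap extension needs the libxml2 headers to compile
RUN apt-get update && apt-get install -y libxml2-dev \
    && docker-php-ext-install soap
Rebuild the image, then rerun composer require googleads/googleads-php-lib inside the new container.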

BigQuery error in query operation : Project id not found

I am getting a project not found error when trying to run queries with the bq command line tool or the BigQuery browser window.
I've registered the BigQuery API with the project. I've also setup billing.
For bq, I've setup the .bigqueryrc with the numeric project id.
When I try to query, the system response uses the friendly project id, so it seems BigQuery is aware enough to map numeric ids to friendly ids.
I've used the bq shell to verify that the prompt reflects the right project id.
I can run 'bq ls publicdata:samples' just fine, so I'm assuming authorization really does kick in to query the data.
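For reference, the relevant .bigqueryrc line is just the project setting (the numeric id below is a placeholder):
project_id = 1234567890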
What's missing or wrong here?
It looks like there is an issue recognizing projects created through AppEngine. This is a bug and we're actively working on a fix.
As a workaround, you can use a project created through https://code.google.com/apis/console instead.
In my project I didn't have App Engine enabled. For me it was solved by authenticating again through gcloud:
$ gcloud auth login
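If re-authenticating alone doesn't help, it may also be worth pinning the friendly project id explicitly (my-project-id below is a placeholder):
$ gcloud config set project my-project-id
$ bq --project_id=my-project-id ls
Passing the friendly string id rather than the numeric id sidesteps the numeric-to-friendly mapping mentioned in the question.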