Kubernetes 1.2.2: api-server fails: can't find mounted certs for TLS on etcd

I've been struggling to get api-server 1.2.2 to run with etcd secured with TLS.
I am upgrading from 1.1.2 to 1.2.2.
In 1.1.2 I was using the --etcd-config flag and had a file that looked like:
{
  "cluster": {
    "machines": [
      "https://XXX.XXX.XXX.XXX:2379",
      "https://XXX.XXX.XXX.XXY:2379",
      "https://XXX.XXX.XXX.XXZ:2379"
    ]
  },
  "config": {
    "certFile": "/etc/ssl/etcd/etcd-peer.cert.pem",
    "keyFile": "/etc/ssl/etcd/private/etcd-peer.key.pem",
    "caCertFiles": [
      "/etc/ssl/etcd/ca-chain.cert.pem"
    ],
    "consistency": "STRONG_CONSISTENCY"
  }
}
Now this is no longer supported, so I switched to using the flags:
--etcd-cafile="/etc/ssl/etcd/ca-chain.cert.pem"
--etcd-certfile="/etc/ssl/etcd/etcd-peer.cert.pem"
--etcd-keyfile="/etc/ssl/etcd/private/etcd-peer.key.pem"
--etcd-servers="https://XXX.XXX.XXX.XXX:2379, https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379"
Now I am getting this error:
F0421 00:54:40.133777 1 server.go:291] Invalid storage version or misconfigured etcd: open "/etc/ssl/etcd/etcd-peer<nodeIP>.cert.pem": no such file or directory
So it seems that it cannot find the cert file.
The file paths and names are the same as before, and they are mounted with hostPath exactly as in v1.1.2, so I don't understand why api-server cannot find them.
I have been trying to figure out what is going on with the file paths by simply switching the command in the pod from
- /hyperkube
- api-server
...
to
- /bin/sleep
- 60
but kubelet won't start this pod, for a reason I don't understand.
Does it have to do with the yaml file name or something?
I don't understand why kubelet won't run the pod with this command.
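(Aside: one plausible cause, since the manifest isn't shown here, is YAML typing. Every entry in a pod spec's command array must be a string, but an unquoted 60 is parsed as an integer, so kubelet rejects the pod. A minimal sketch of the quoting fix, assuming that layout:)
command:
- /bin/sleep
- "60"    # quoted so YAML treats it as a string, not an integer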
Any help with this would be greatly appreciated.
Thanks
UPDATE
I was able to get into the running container after replacing the command with /hyperkube scheduler.
I can cat the files that apiserver is complaining about, so I don't understand why they're not found.

Well, the culprit was as simple as the quotation marks:
--etcd-cafile="/etc/ssl/etcd/ca-chain.cert.pem"
--etcd-certfile="/etc/ssl/etcd/etcd-peer.cert.pem"
--etcd-keyfile="/etc/ssl/etcd/private/etcd-peer.key.pem"
--etcd-servers="https://XXX.XXX.XXX.XXX:2379, https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379"
is WRONG
but this works:
--etcd-cafile=/etc/ssl/etcd/ca-chain.cert.pem
--etcd-certfile=/etc/ssl/etcd/etcd-peer.cert.pem
--etcd-keyfile=/etc/ssl/etcd/private/etcd-peer.key.pem
--etcd-servers=https://XXX.XXX.XXX.XXX:2379,https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379
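The reason, presumably, is that the flags are passed straight from the pod manifest's command list, so no shell ever strips the quotes and they become a literal part of the file path (note the quotes inside the path in the error message above). A minimal sketch of the relevant args, assuming the manifest layout from the question:
command:
- /hyperkube
- apiserver
- --etcd-cafile=/etc/ssl/etcd/ca-chain.cert.pem          # no quotes: there is no shell to strip them
- --etcd-certfile=/etc/ssl/etcd/etcd-peer.cert.pem
- --etcd-keyfile=/etc/ssl/etcd/private/etcd-peer.key.pem
- --etcd-servers=https://XXX.XXX.XXX.XXX:2379,https://XXX.XXX.XXX.XXY:2379,https://XXX.XXX.XXX.XXZ:2379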

Related

Parsoid: Unexpected Token error and failing to initialize

mwApis:
  - # This is the only required parameter,
    # the URL of your MediaWiki API endpoint.
    uri: 'http://spgenerations.com/wiki/api.php'
On my Linux box, I can curl this URL and receive the API data.
Regardless of whether I use the apt-get installation or the developer installation (npm install), both instances give me this error:
{"name":"parsoid","hostname":"play.projecttidal.com.KVM","pid":12636,"level":30,"levelPath":"info/service-runner","msg":"master(12636) initializing 2 workers","time":"2019-03-12T03:55:47.504Z","v":0}
{"name":"parsoid","hostname":"play.projecttidal.com.KVM","pid":12645,"level":60,"moduleName":"lib/index.js","levelPath":"fatal/service-runner/worker","msg":"Unexpected token ...","time":"2019-03-12T03:55:47.917Z","v":0}
{"name":"parsoid","hostname":"play.projecttidal.com.KVM","pid":12636,"level":40,"message":"first worker died during startup, continue startup","worker_pid":12645,"exit_code":1,"startup_attempt":1,"levelPath":"warn/service-runner/master","msg":"first worker died during startup, continue startup","time":"2019-03-12T03:55:48.925Z","v":0}
For context, the hostname here is incorrect and the domain has been removed.
This is my parsoid config:
// Parsoid configuration
$wgVirtualRestConfig['modules']['parsoid'] = array(
    'url' => 'server.spgenerations.com',
    'forwardCookies' => true
);
I have tried everything under the hidden voodoo sun to get this thing to work and I'm beyond frustrated. 4 hours spent tinkering with URL links to no avail, so please, if you know anything relating to this error, lend a hand.
Check what Node.js version you are running with:
nodejs --version
If it is 4.x: that's too old for Parsoid. I had the same situation (Debian 9 still ships such an old Node.js version in its repositories). After upgrading to 10.x it ran fine for me.
I used the following guide (see "Install using a PPA") to update to a newer Node.js release: https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-debian-9
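For reference, the PPA route from that guide boils down to the NodeSource setup script (a sketch; the 10.x URL is an assumption matching the version mentioned above):
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs --version   # should now report v10.x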

JanusGraph Remote Graph NoSuchFieldError: V3_0

I followed this example:
https://github.com/JanusGraph/janusgraph/tree/master/janusgraph-examples/example-remotegraph
I would like to debug this project. I configured it (HBase + Solr) and ran the JanusGraph server with the command:
$JANUSGRAPH_HOME/bin/gremlin-server.sh $JANUSGRAPH_HOME/conf/gremlin-server/gremlin-server.yaml
I passed this argument to IDEA via Run Configuration > Program Arguments
[Project Home]/conf/jgex-remote.properties
My jgex-remote.properties file is:
gremlin.remote.remoteConnectionClass=org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
# cluster file has the remote server configuration
gremlin.remote.driver.clusterFile=[Project Home]/conf/remote-objects.yaml
# source name is the global graph traversal source defined on the server
gremlin.remote.driver.sourceName=g
and my remote-objects.yaml file includes:
hosts: [127.0.0.1]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0,
  config: {
    ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry]
  }
}
It tries to run this command:
cluster = Cluster.open(conf.getString("gremlin.remote.driver.clusterFile"));
And throws this exception:
Exception in thread "main" java.lang.NoSuchFieldError: V3_0
    at org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0.<init>(GryoMessageSerializerV3d0.java:41)
    at org.apache.tinkerpop.gremlin.driver.ser.Serializers.simpleInstance(Serializers.java:77)
    at org.apache.tinkerpop.gremlin.driver.Cluster$Builder.<init>(Cluster.java:472)
    at org.apache.tinkerpop.gremlin.driver.Cluster$Builder.<init>(Cluster.java:469)
    at org.apache.tinkerpop.gremlin.driver.Cluster.getBuilderFromSettings(Cluster.java:167)
    at org.apache.tinkerpop.gremlin.driver.Cluster.build(Cluster.java:159)
    at org.apache.tinkerpop.gremlin.driver.Cluster.open(Cluster.java:233)
    at com.ets.dataplatform.init.RemoteGraphApp.openGraph(RemoteGraphApp.java:72)
    at com.ets.dataplatform.init.GraphApp.runApp(GraphApp.java:290)
    at com.ets.dataplatform.init.RemoteGraphApp.main(RemoteGraphApp.java:195)
This error is not meaningful to me.
Thanks in advance.
I would try to align your versions. I assume that you are using JanusGraph 0.2.0. If you look at the pom.xml for that version, you'll see that it is bound to TinkerPop 3.2.6:
https://github.com/JanusGraph/janusgraph/blob/v0.2.0/pom.xml#L68
Change to that version in your application and see if the connection works. Taking that approach should not only fix your problem but also ensure that you don't run into other incompatibilities. That is not to say that you can't configure later versions of the TinkerPop driver to work with a 3.2.6 server, but it requires a bit more configuration, and you have to be aware of minor changes that might affect how certain operations behave.
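A minimal sketch of the corresponding dependency pin, assuming a Maven build like the example project's:
<dependency>
  <groupId>org.apache.tinkerpop</groupId>
  <artifactId>gremlin-driver</artifactId>
  <version>3.2.6</version> <!-- matches the TinkerPop version JanusGraph 0.2.0 is built against -->
</dependency>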

Erlang can't connect to any HTTPS url

The problem arose when I was trying to build rebar3: it couldn't fetch its dependencies from Amazon S3. Debugging the problem, I found out that the Erlang runtime couldn't connect to any HTTPS site, although curl and wget work completely fine. When I set httpc:set_options([{verbose,trace}]), I get the following output:
{failed_connect,
    [{to_address,{"s3.amazonaws.com",443}},
     {inet,[inet],
      {eoptions,
       {undef,
        [{ssl,connect,
          ["s3.amazonaws.com",443,
           [binary,
            {active,false},
            {ssl_imp,new},
            inet],
           20000],
          []},
         {http_transport,connect,4,
          [{file,"http_transport.erl"},{line,135}]},
         {httpc_handler,connect,4,
          [{file,"httpc_handler.erl"},{line,891}]},
         {httpc_handler,connect_and_send_first_request,3,
          [{file,"httpc_handler.erl"},{line,905}]},
         {httpc_handler,init,1,
          [{file,"httpc_handler.erl"},{line,242}]},
         {proc_lib,init_p_do_apply,3,
          [{file,"proc_lib.erl"},{line,239}]}]}}}]}
Where can I look to find out what the problem is?
You are missing the ssl Erlang application. Some Linux distributions split Erlang into several subpackages. You should make sure that you have the package erlang-ssl installed on your system.
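The undef on ssl:connect in the trace means the ssl module can't be loaded at all. A quick check, assuming a Debian/Ubuntu system (the package name differs on other distributions):
$ sudo apt-get install erlang-ssl
$ erl
1> ssl:start().
ok
Once ssl:start(). returns ok, httpc should be able to open HTTPS connections again.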

Behat 2.4 upgrade issues, or Unrecognized options "extensions" under "behat" - error when attempting to activate sahi extension via behat.yml

Fairly new to this Behat stuff, and I've run into a roadblock I can't seem to get around. I've been staring at the docs, googling like it's my job, and doing my best to refrain from tossing my computer off the fire escape.
I'm working with a fairly complex project, and I'm not the one who set it up. So I'm a little lost in some areas.
Currently, I'm trying to use the Sahi driver, because Selenium isn't cutting it for some dynamic forms I need to test. I can run the tests fine with the default Selenium driver, but the tests fail because it doesn't adequately trigger JavaScript events on form input. Specifically, it'll work with the workarounds covered in that link, but only if I have the browser in focus. Which means it fails when the tests are run in Sauce or via Jenkins with Xvfb.
I'm explaining all this only because this is my larger issue, which I'm attempting to address by using the Sahi driver. Which brings me to:
[Symfony\Component\Config\Definition\Exception\InvalidConfigurationException]
Unrecognized options "extensions" under "behat"
That's what I get when I try to activate the Sahi driver for a particular profile in my behat.yml the way the documentation says to.
Here's the default profile and the profile I'm currently working with in my behat.yml (slightly modified for public consumption):
default:
  paths:
    features: 'features'
    bootstrap: '%behat.paths.features%/bootstrap'
sahi:
  extensions:
    Behat\MinkExtension\Extension:
      sahi: ~
  context:
    class: 'FeatureContext'
    parameters:
      environment: 'staging'
      mink: 'sahi'
Fwiw, the tests are on a VM, which I ssh -X into, then run the test using
$ behat --tags @test_name_tag --profile=sahi
When I'm using the default Selenium driver and the @javascript tag, the browser pops up and the tests run and pass (assuming I keep the browser in focus, of course).
I installed the additional drivers using composer:
{
    "require": {
        "behat/behat": "2.4.*#stable",
        "behat/mink": "1.4#stable",
        "behat/mink-extension": "*",
        "behat/mink-selenium2-driver": "*",
        "behat/mink-sahi-driver": "*"
    }
}
I've added use Behat\Mink\Driver\SahiDriver; to my MinkContext.php, EnvironmentContext.php and FeatureContext.php, though I'm guessing that's probably either overkill or not necessary. It doesn't seem to be making a difference at this point, though. I get the same error with or without it.
I also added a sahi.php which lives in features/bootstrap/mink:
<?php
return array(
    'default_session' => 'sahi',
    'sahi' => array(
        'capabilities' => array(
            'browserName' => 'firefox',
            'browserVersion' => 7,
        ),
    ),
);
I thought maybe adding a directory in features/bootstrap called extensions might help for some reason. I even stuck a file in there called sahi.php. That didn't help much.
I think that covers everything. Thanks in advance for any help, and if this is covered elsewhere, please direct me to it, because I've spent countless hours looking and haven't found anything that helps me.
Update:
I uninstalled the old versions of behat, mink and gherkin, and installed 2.4, et al as per this https://lestbddphp.wordpress.com/2012/08/31/behatcomposer/
I've been making my way through "Migrating from Behat 2.3 to 2.4" in the docs. (Sorry, SO won't let me post any more links, but it's in the official Behat docs.)
My composer.json:
{
    "require": {
        "behat/behat": "2.4.*#stable",
        "behat/mink": "1.4#stable",
        "behat/mink-goutte-driver": "*",
        "behat/symfony2-extension": "*",
        "symfony/class-loader": "2.1.*",
        "symfony/form": "2.1.*",
        "symfony/validator": "2.1.*",
        "behat/mink-selenium-driver": "*",
        "behat/mink-selenium2-driver": "*",
        "behat/mink-extension": "*",
        "behat/mink-sahi-driver": "*"
    },
    "minimum-stability": "dev",
    "config": {
        "bin-dir": "bin/"
    }
}
I moved my behat.yml file to the root of the project, as directed. I updated my default profile to:
default:
  paths:
    features: 'features'
    bootstrap: '%behat.paths.features%/bootstrap'
  extensions:
    Behat\Symfony2Extension\Extension:
      mink_driver: true
      kernel:
        env: test
        debug: true
    Behat\MinkExtension\Extension:
      default_session: symfony2
      sahi: ~
though I'm not entirely sure that's what I need. Just going by the example given in the docs.
I updated my vendor/autoload.php by replacing the require_once with require:
<?php
// autoload.php generated by Composer
require __DIR__ . '/composer' . '/autoload_real.php';
return ComposerAutoloaderInit::getLoader();
but I'm a little confused by this, because that file is different from the example code in the docs. If I were to add the line in the docs here, instead of what was already there, then it would just be loading itself. (I tried. It barfed.) Am I completely dense, or is the wording here confusing/misleading? Did I do this correctly?
As I mentioned before, I have 3 context files in features/bootstrap:
FeatureContext.php
EnvironmentContext.php
MinkContext.php
When running the tests via cli, I pass it a --profile, and then it uses the appropriate profile in behat.yml. In almost all of the profiles, FeatureContext is used.
context:
class: 'FeatureContext'
FeatureContext then gets EnvironmentContext and MinkContext, from what I can tell. So, theoretically, everything should be working there.
Only it's not.
$ bin/behat --profile=sahi
[ReflectionException]
Class AppKernel does not exist
Before I added all the Symfony stuff, I was getting this:
Warning: require(Behat\Symfony2Extension\Extension): failed to open stream: No such file or directory in /path/to/project/vendor/behat/behat/src/Behat/Behat/Extension/ExtensionManager.php on line 112
Fatal error: require(): Failed opening required 'Behat\Symfony2Extension\Extension' (include_path='/usr/share/pear:/usr/share/php:/usr/share/git-core/templates/hooks:.') in /path/to/project/vendor/behat/behat/src/Behat/Behat/Extension/ExtensionManager.php on line 112
Which is why I added the Symfony stuff via composer.
Also possibly of note: when I forgot to pass it a --profile, before installing the Symfony stuff via composer, I got this:
Notice: Undefined index: environment in /home/lbaron/development/BeHat-Functional/features/bootstrap/FeatureContext.php on line 43
Warning: include(/path/to/project/features/bootstrap/environment/.php): failed to open stream: No such file or directory in /path/to/project/features/bootstrap/FeatureContext.php on line 44
Warning: include(): Failed opening '/path/to/project/features/bootstrap/environment/.php' for inclusion (include_path='/usr/share/pear:/usr/share/php:/usr/share/git-core/templates/hooks:.') in /path/to/project/features/bootstrap/FeatureContext.php on line 44
Catchable fatal error: Argument 1 passed to EnvironmentContext::__construct() must be an array, boolean given, called in /path/to/project/features/bootstrap/FeatureContext.php on line 44 and defined in /path/to/project/features/bootstrap/EnvironmentContext.php on line 27
Which I guess is to be expected.
So I'm at a loss now. Ideas?
I'm going to keep banging on it to see if I can figure it out, but any ideas/input would be greatly appreciated.
Update again:
Removing the extensions section from the yml gives me this:
Catchable fatal error: Argument 2 passed to Symfony\Component\BrowserKit\Client::__construct() must be an instance of Symfony\Component\BrowserKit\History, array given, called in /usr/share/pear/mink/src/Behat/Mink/Behat/Context/MinkContext.php on line 163 and defined in /home/lbaron/development/BeHat-Functional/vendor/symfony/browser-kit/Symfony/Component/BrowserKit/Client.php on line 52
Current state of behat.yml:
default:
  paths:
    features: 'features'
    bootstrap: '%behat.paths.features%/bootstrap'
  formatter:
    parameters:
      language: 'en'
  extensions:
    Behat\MinkExtension\Extension:
      sahi: ~
      goutte: ~
You are running a version of Behat which is older than 2.4 (the current version). I can tell because the command you use is "behat" instead of "bin/behat". Older versions had a different architecture and did not use extensions. The documentation on the behat.org website is all for the new 2.4 version and, as far as I know, no longer has the documentation for older versions available. You should upgrade your Behat version to 2.4; there is a guide on how to do this here.
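Once you are on 2.4 and running bin/behat, a minimal behat.yml along these lines should be enough to activate the extension (a sketch, assuming behat/mink-extension and behat/mink-sahi-driver are installed via Composer):
default:
  extensions:
    Behat\MinkExtension\Extension:
      default_session: sahi
      sahi: ~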

Hadoop configurations seem not to be read

Every time I try to start my MapReduce application (in standalone Hadoop), it tries to put stuff in the tmp directory, which it can't:
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-username\mapred\staging\username-1524148556\.staging to 0700
It tries to use an invalid path (slashes should be the other way around for Cygwin).
I set hadoop.tmp.dir in core-site.xml (in the conf folder of Hadoop), but it seems that the config file is never read (if I put syntax errors in the file, it makes no difference). I added:
--config /home/username/hadoop-1.0.1/conf
to the command, but it made no difference. I also tried:
export HADOOP_CONF_DIR=/home/username/hadoop-1.0.1/conf
but that does not seem to have an effect either.
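For reference, the hadoop.tmp.dir override in core-site.xml looks like this (the value shown is illustrative; the point is that even this file appears to be ignored):
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/username/hadoop-tmp</value>
  </property>
</configuration>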
Any pointers on why the configs would not be read, or what else I am failing to see here?
Thanks!
It's not that the slashes are inverted; it's that /tmp is a Cygwin path, which actually maps to /cygwin/tmp, or c:\cygwin\tmp. Since Hadoop is Java and doesn't know about Cygwin mappings, it takes /tmp to mean c:\tmp.
There's an awful lot of stuff to patch if you want to get 1.0.1 running on Cygwin.
See: http://en.wikisource.org/wiki/User:Fkorning/Code/Hadoop-on-Cygwin
I found the following link useful; it seems that the problem persists in newer versions of Hadoop. I'm using version 1.0.4 and I'm still facing this problem.
http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/25837
UPDATED: in Mahout 0.7, and for those who use the "Mahout in Action" book example, you should change the example code as follows:
import java.io.File;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.common.HadoopUtil;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;

// Create the local output directory if it does not exist yet.
File outFile = new File("output");
if (!outFile.exists()) {
    outFile.mkdir();
}
// Delete any previous run's output; 'conf' is the Hadoop Configuration from earlier in the example.
Path output = new Path("output");
HadoopUtil.delete(conf, output);
KMeansDriver.run(conf, new Path("testdata/points"), new Path("testdata/clusters"),
    output, new EuclideanDistanceMeasure(), 0.001, 10,
    true, 0.1, true);