How to access H2O Flow when using Google Colab

Does anyone know how to access H2O Flow when using Google Colab?
My code is as follows:
!pip install h2o
import h2o
# Bind to all interfaces so Colab's port proxy can reach the server
h2o.init(bind_to_localhost=False, log_dir="./")
# Ask Colab to proxy port 54321 and print the resulting public URL
from google.colab.output import eval_js
print(eval_js("google.colab.kernel.proxyPort(54321)"))
This code shows the following output:
Checking whether there is an H2O instance running at http://localhost:54321 ..... not found.
Attempting to start a local H2O server...
Java Version: openjdk version "11.0.10" 2021-01-19; OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.18.04); OpenJDK 64-Bit Server VM (build 11.0.10+9-Ubuntu-0ubuntu1.18.04, mixed mode, sharing)
Starting server from /usr/local/lib/python3.7/dist-packages/h2o/backend/bin/h2o.jar
Ice root: /tmp/tmp5mullu7m
JVM stdout: /tmp/tmp5mullu7m/h2o_unknownUser_started_from_python.out
JVM stderr: /tmp/tmp5mullu7m/h2o_unknownUser_started_from_python.err
Server is running at http://127.0.0.1:54321
Connecting to H2O server at http://127.0.0.1:54321 ... successful.
H2O_cluster_uptime: 02 secs
H2O_cluster_timezone: Etc/UTC
H2O_data_parsing_timezone: UTC
H2O_cluster_version: 3.32.1.1
H2O_cluster_version_age: 3 days
H2O_cluster_name: H2O_from_python_unknownUser_0ttq4b
H2O_cluster_total_nodes: 1
H2O_cluster_free_memory: 3.180 Gb
H2O_cluster_total_cores: 2
H2O_cluster_allowed_cores: 2
H2O_cluster_status: accepting new members, healthy
H2O_connection_url: http://127.0.0.1:54321
H2O_connection_proxy: {"http": null, "https": null}
H2O_internal_security: False
H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version: 3.7.10 final
https://0258qgrdz6tx-496ff2e9c6d22116-54321-colab.googleusercontent.com/
Clicking https://0258qgrdz6tx-496ff2e9c6d22116-54321-colab.googleusercontent.com/ returns an HTTP 500 error with a "Not Implemented" message instead of the H2O Flow (web UI) page.
It seems that the message is returned by the Persist class.

You can use localtunnel to expose the port that H2O runs on:
Install localtunnel:
!npm install -g localtunnel
Start localtunnel:
!lt --port 54321
Then you can navigate to the URL it returns and access the H2O Flow web UI.
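Putting this answer together with the code from the question, a minimal Colab cell might look like this (a sketch: backgrounding lt with nohup and the lt.log path are assumptions added so the cell doesn't block, not part of the original answer):
!pip install h2o
!npm install -g localtunnel
import h2o
h2o.init(bind_to_localhost=False, log_dir="./")
# Run localtunnel in the background; its public URL ends up in lt.log
!nohup lt --port 54321 > lt.log 2>&1 &
!sleep 3 && cat lt.log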

Related

Does upgrading OpenJDK to 8u292 break my AOSP build system?

Software environment:
Ubuntu 20.04 LTS server;
Android AOSP 8.0;
OpenJDK 8;
It worked very well until yesterday, when I upgraded my OpenJDK from 8u282 to 8u292. Now the failing build log says:
Ensuring Jack server is installed and started
FAILED: setup-jack-server
/bin/bash -c "(prebuilts/sdk/tools/jack-admin install-server prebuilts/sdk/tools/jack-launcher.jar prebuilts/sdk/tools/jack-server-4.11.ALPHA.jar 2>&1 || (exit 0) ) && (JACK_SERVER_VM_ARGUMENTS=\"-Dfile.encoding=UTF-8 -XX:+TieredCompilation\" prebuilts/sdk/tools/jack-admin start-server 2>&1 ||
exit 0 ) && (prebuilts/sdk/tools/jack-admin update server prebuilts/sdk/tools/jack-server-4.11.ALPHA.jar 4.11.ALPHA 2>&1 || exit 0 ) && (prebuilts/sdk/tools/jack-admin update jack prebuilts/sdk/tools/jacks/jack-4.32.CANDIDATE.jar 4.32.CANDIDATE || exit 47 )"
Jack server already installed in "~/.jack-server"
Launching Jack server java -XX:MaxJavaStackTraceDepth=-1 -Djava.io.tmpdir=/tmp -Dfile.encoding=UTF-8 -XX:+TieredCompilation -cp ~/.jack-server/launcher.jar com.android.jack.launcher.ServerLauncher
Jack server failed to (re)start, try 'jack-diagnose' or see Jack server log
SSL error when connecting to the Jack server. Try 'jack-diagnose'
SSL error when connecting to the Jack server. Try 'jack-diagnose'
ninja: build stopped: subcommand failed.
10:11:50 ninja failed with: exit status 1
I checked the log in ~/.jack-server/log/xxxx-0-0.log. It contains nothing about the error.
I used the curl command to connect to the server, and it says:
$ curl https://127.0.0.1:8076/jack
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:8076
I modified the script prebuilts/sdk/tools/jack-admin to print $CURL_CODE; the same as my shell curl command, it reports error code 35.
This URL discusses a similar problem:
https://forums.gentoo.org/viewtopic-t-1060536-start-0.html
But I am not sure.
Here is a link to the source of the script that produces the above error:
https://android-opengrok.bangnimang.net/android-8.1.0_r81/xref/prebuilts/sdk/tools/jack-admin?r=692a2a62#89
I had the same issue, and it was fixed by removing "TLSv1, TLSv1.1" from the jdk.tls.disabledAlgorithms configuration in the file /etc/java-8-openjdk/security/java.security.
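For reference, the change looks roughly like this (the exact algorithm list varies between JDK builds, so edit the existing line rather than copying this sketch verbatim):
# /etc/java-8-openjdk/security/java.security, 8u292 default (abbreviated):
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, ...
# After removing TLSv1 and TLSv1.1, keeping the rest of the list intact:
jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, ...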
I think that there is a good chance that it is this:
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8202343
Basically, they have turned off (by default) support for TLS 1.0 and 1.1, starting in 8u291. These versions of TLS are old, insecure, and deprecated; see https://en.wikipedia.org/wiki/Transport_Layer_Security
This is mentioned in the 8u291 release notes.
My advice would be to find out why your build system is not using TLS 1.2 or later. Then upgrade / fix that.
You can test if this is the problem by running curl with the --tlsv1.2 option.
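For example, against the Jack server port from the question:
curl --tlsv1.2 https://127.0.0.1:8076/jack
If that succeeds while plain curl fails with error 35, the disabled TLS 1.0/1.1 protocols are the likely culprit.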
removing "TLSv1, TLSv1.1" in jdk.tls.disabledAlgorithms configuration in file /etc/java-8-openjdk/security/java.security.
It work for me.
Ubuntu updated the JDK to 8u292 in the background, so this is most likely related to the JDK.
First, some posts suggest changing the Jack port; I changed the Jack port, but it did not work.
Second, I tried updating to Ubuntu 16.04.2 and Ubuntu 16.04.7, but the error "SSL error when connecting to the Jack server. Try 'jack-diagnose'" still occurs.
Thanks a lot, @Guillaume P.

"Unavaliable io exception" when connecting to remote Bazel master on bazel-buildfarm

I want to set up a small proof-of-concept (POC) remote execution setup with 1x master (192.168.60.99) and 1x worker (192.168.60.98) using bazel-buildfarm. Both are CentOS 7 machines provisioned with Vagrant. When connecting from an Ubuntu workstation (a third machine) on the network, the following error occurs:
$ bazel build --verbose_failures //projects/myproj:app
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=229
INFO: Reading rc options for 'build' from /home/user/tests/ecommerce/.bazelrc:
'build' options: --strategy=TypeScriptCompile=worker --strategy=AngularTemplateCompile=worker --symlink_prefix=dist/ --define=compile=legacy --incompatible_strict_action_env --experimental_allow_incremental_repository_updates --distdir=third_party/_distdir
INFO: Reading rc options for 'build' from /home/user/.bazelrc:
'build' options: --spawn_strategy=remote --genrule_strategy=remote --strategy=Javac=remote --strategy=Closure=remote --remote_executor=192.168.60.99:8980
INFO: Writing tracer profile to '/home/user/.cache/bazel/_bazel_user/24700f1ad3e201a00a1c26bd59dc6502/command.profile.gz'
INFO: Invocation ID: 569b59ca-edcb-4922-92a0-b6f0b5ca2819
ERROR: Failed to query remote execution capabilities: UNAVAILABLE: io exception
The network connection is working, and I can even connect to Bazel using telnet:
telnet 192.168.60.99 8980
Trying 192.168.60.99...
Connected to 192.168.60.99.
Escape character is '^]'.
.bazelrc file of the third Ubuntu machine:
$ cat ~/.bazelrc
build --spawn_strategy=remote --genrule_strategy=remote --strategy=Javac=remote --strategy=Closure=remote --remote_executor=192.168.60.99:8980
Buildfarm setup
Both machines have a clone of the buildfarm Git repo. The example config files were used; only on the server did I replace localhost with 192.168.60.99 (the master server IP).
I know that bazel run is not recommended, but in the absence of a better alternative that works, my idea is to get the documented way working first (Bazel itself doesn't mention any alternative). Since not even bazel run works, I think something is wrong with my installation.
All machines use version 1.1.0, which is the latest stable one at the time of writing. It's definitely an issue with bazel-buildfarm, since a local build works fine on the Ubuntu machine.
Master server
bazel run //src/main/java/build/buildfarm:buildfarm-server $(pwd)/examples/server.config.example
Worker
bazel run //src/main/java/build/buildfarm:buildfarm-operationqueue-worker $(pwd)/examples/worker.config.example --distdir ~/distdir/
The distdir is a workaround for our company proxy, which manipulates files in man-in-the-middle fashion. Since Bazel doesn't allow this, I downloaded the affected file for its JDK manually:
[vagrant@localhost bazel-buildfarm]$ l ~/distdir/
total 188M
-rw-rw-r--. 1 vagrant vagrant 188M Jan 17 2019 zulu11.2.3-jdk11.0.1-linux_x64.tar.gz
If Bazel >= 1.0 is used, you need to specify the grpc protocol in .bazelrc like this:
--remote_executor=grpc://192.168.60.99:8980
Without the protocol, the UNAVAILABLE: io exception occurs. There is currently no documentation about this issue.
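For example, the client's ~/.bazelrc from the question becomes:
build --spawn_strategy=remote --genrule_strategy=remote --strategy=Javac=remote --strategy=Closure=remote --remote_executor=grpc://192.168.60.99:8980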

Not able to connect to Spark cluster via sparklyr package when my custom package method is invoked via OpenCpu

I have created an R package that makes use of sparklyr's capabilities within a dummy hello function. My package does a very simple thing: it connects to a Spark cluster, prints the Spark version, and disconnects. The package cleans and builds successfully and executes successfully from R and RStudio.
# Connect to Spark cluster
spark_conn <- sparklyr::spark_connect(master = "spark://elenipc.home:7077", spark_home = '/home/eleni/spark-2.2.0-bin-hadoop2.7/')
# Print the version of Spark
sv<- sparklyr::spark_version(spark_conn)
print(sv)
# Disconnect from Spark
sparklyr::spark_disconnect(spark_conn)
It is very important for me to be able to execute the hello function from the OpenCpu REST API. (I have used the OpenCpu API to execute many other custom packages.)
When invoking the OpenCpu API like this:
curl http://localhost/ocpu/user/rstudio/library/myFirstBigDataPackage/R/hello/print -X POST
I get the following response:
Failed while connecting to sparklyr to port (8880) for sessionid (89615): Gateway in port (8880) did not respond.
Path: /home/eleni/spark-2.2.0-bin-hadoop2.7/bin/spark-submit
Parameters: --class, sparklyr.Shell, '/home/rstudio/R/x86_64-pc-linux-gnu-library/3.4/sparklyr/java/sparklyr-2.2-2.11.jar', 8880, 89615
Log: /tmp/ocpu-temp/file26b165c92166_spark.log
---- Output Log ----
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
---- Error Log ----
In call:
force(code)
Of course, allocating more memory to both Java and the Spark executor does not resolve the issue. Permission issues are also ruled out, as I already configured the /etc/apparmor.d/opencpu.d/custom file to grant OpenCpu rwx privileges on Spark. It seems to be a connectivity issue that I do not know how to tackle. During method invocation via the OpenCpu API, the Spark logs do not even print anything.
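One way to narrow this down (a sketch; it assumes OpenCpu executes R sessions as the www-data system user, which may differ on your install) is to run the same spark-submit shown in the error message as that user, to see whether the JVM can start at all under its resource limits:
sudo -u www-data /home/eleni/spark-2.2.0-bin-hadoop2.7/bin/spark-submit --version
If this reproduces "Could not allocate metaspace", the problem is the resource limits of the OpenCpu execution environment rather than Spark connectivity.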
For your information, my environment configuration is as follows:
java version "1.8.0_65"
R version 3.4.1
RStudio version 1.0.153
spark-2.2.0-bin-hadoop2.7
opencpu 1.5 (compatible with my Ubuntu 14.04.3 LTS)
Thank you very much for your support and time!

Elasticsearch & Netflix Edda - NoNodeAvailableException: No node available

I am trying to get Netflix open source solution Edda to work with Elasticsearch. I know I've installed Edda correctly because I can get it working with MongoDB as a backend successfully. I'd prefer to use Elasticsearch so I can get the benefits of Kibana rather than write my own frontend. So I'm running Edda and Elasticsearch on the same server in AWS at the moment (just trying to get it working). Elasticsearch is operational:
{
  "name" : "Arsenic",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
And to show it's listening:
netstat -tulpn | grep java
tcp 0 0 ::ffff:<myip>:9300 :::* LISTEN 2270/java
tcp 0 0 ::ffff:<myip>:9200 :::* LISTEN 2270/java
I updated my Java version from 1.7 to 1.8, as I believe the Java version for Elasticsearch and the one running on the server have to match. I can't see a reason why 1.8 would be causing an issue:
java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)
Here's my edda properties file:
cat /home/ec2-user/edda/src/main/resources/edda.properties | grep elasticsearch
edda.datastore.current.class=com.netflix.edda.elasticsearch.ElasticSearchDatastore
edda.elector.class=com.netflix.edda.elasticsearch.ElasticSearchElector
edda.elasticsearch.cluster=elasticsearch
edda.elasticsearch.address=<myip>:9300
edda.elasticsearch.shards=5
edda.elasticsearch.replicas=0
# http://www.elasticsearch.org/guide/reference/api/index_/
edda.elasticsearch.writeConsistency=quorum
edda.elasticsearch.replicationType=async
edda.elasticsearch.scanBatchSize=1000
edda.elasticsearch.scanCursorDuration=60000
edda.elasticsearch.bulkBatchSize=0
And in my elasticsearch.yml file:
network.host: <myip>
I haven't specified a cluster name, so it assumes the default 'elasticsearch'.
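As a basic reachability check (a sketch using the ports from the netstat output above; note that edda.elasticsearch.address points at 9300, the Java transport port, while 9200 is the HTTP port):
curl http://<myip>:9200/
A JSON banner like the one shown earlier only proves the HTTP side is up; the transport client on 9300 can still fail, for example on a client/server version mismatch.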
So when I run Edda to poll AWS and populate Elasticsearch with the data it finds, I receive this error:
[Collection aws.hostedZones] init: caught org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at com.netflix.edda.Collection$$anonfun$init$1.apply$mcV$sp(Collection.scala:471)
at com.netflix.edda.Utils$$anon$1.act(Utils.scala:169)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at scala.actors.ReactorTask.compute(ReactorTask.scala:63)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Clearly it can't connect to the Elasticsearch cluster, yet the cluster name is correct, and as far as I can tell it's listening on the correct port and IP address; I don't think there's an issue with the Java version.
I'm probably missing something very simple.
Thanks in advance for all your assistance.
Regards,
Neilos
I've figured it out: the Java client used in Edda is set to use Elasticsearch version 0.90.0, which is specified in build.gradle; if you install that version of Elasticsearch, it works. Obviously, that's a very old version of Elasticsearch that you are unlikely to want to use. If you change the version number in this file, compilation fails due to broken paths (missing assemblies). I'm weighing up whether it's worth trying to resolve these assembly issues to get it working with the latest version of Elasticsearch, or choosing to use MongoDB, which works without any code changes but only provides REST API functionality. At least the problem is resolved.
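A quick way to confirm the pinned client version in an Edda checkout (a sketch; the exact dependency line may differ between Edda revisions):
grep -n elasticsearch build.gradle
Expect a dependency pinned to 0.90.0, per the answer above.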

Hudson fails to use Unix user/group for authentication

I'm trying to use the Unix user/group database as the security realm of Hudson. The Linux server uses NIS for user management, and my account can log in to the Hudson server via SSH.
The Hudson server is run by the user 'hudson', which is also a member of the group 'shadow', so Hudson can read /etc/shadow. I tested the configuration using the 'Test' button, and Hudson tells me it works.
But I can't use my Unix account and password to log in to the Hudson server.
I found the Java exception below in the Hudson log:
Jan 12, 2011 8:23:42 AM hudson.security.AuthenticationProcessingFilter2 onUnsuccessfulAuthentication
INFO: Login attempt failed
org.acegisecurity.BadCredentialsException: pam_authenticate failed : Authentication failure; nested exception is org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure
at hudson.security.PAMSecurityRealm$PAMAuthenticationProvider.authenticate(PAMSecurityRealm.java:100)
at org.acegisecurity.providers.ProviderManager.doAuthentication(ProviderManager.java:195)
at org.acegisecurity.AbstractAuthenticationManager.authenticate(AbstractAuthenticationManager.java:45)
at org.acegisecurity.ui.webapp.AuthenticationProcessingFilter.attemptAuthentication(AuthenticationProcessingFilter.java:71)
at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:252)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.ui.basicauth.BasicProcessingFilter.doFilter(BasicProcessingFilter.java:173)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249)
at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:66)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:76)
at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:164)
at winstone.FilterConfiguration.execute(FilterConfiguration.java:195)
at winstone.RequestDispatcher.doFilter(RequestDispatcher.java:368)
at winstone.RequestDispatcher.forward(RequestDispatcher.java:333)
at winstone.RequestHandlerThread.processRequest(RequestHandlerThread.java:244)
at winstone.RequestHandlerThread.run(RequestHandlerThread.java:150)
at java.lang.Thread.run(Thread.java:595)
Caused by: org.jvnet.libpam.PAMException: pam_authenticate failed : Authentication failure
at org.jvnet.libpam.PAM.check(PAM.java:105)
at org.jvnet.libpam.PAM.authenticate(PAM.java:123)
at hudson.security.PAMSecurityRealm$PAMAuthenticationProvider.authenticate(PAMSecurityRealm.java:90)
... 18 more
Update on Jan. 17:
The host is RHEL 4.5. I created the user and the group 'shadow', then added hudson to the group 'shadow'.
-bash-3.00$ cat /etc/redhat-release
Red Hat Enterprise Linux WS release 4 (Nahant Update 5)
-bash-3.00$ ll /etc/shadow
-r--r----- 1 root shadow 1114 Jan 4 11:37 /etc/shadow
-bash-3.00$ cat /etc/group |grep shadow
shadow:x:44:hudson
I also tried to set up Hudson on another RHEL 4.8 host. This time I ran Hudson as root:
kzhu0@pek-wb-rhws4_32:~$ ps -ef|grep hudson
root 18764 29161 0 Jan14 pts/5 00:00:33 /usr/bin/java -Dcom.sun.akuma.Daemon=daemonized -Djava.awt.headless=true -DHUDSON_HOME=/var/lib/hudson -jar /usr/lib/hudson/hudson.war --logfile=/var/log/hudson/hudson.log --daemon --httpPort=8080 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
kzhu0 22404 18833 0 10:52 pts/2 00:00:00 grep hudson
kzhu0@pek-wb-rhws4_32:~$ cat /etc/redhat-release
Red Hat Enterprise Linux WS release 4 (Nahant Update 8)
But I still have no luck getting Unix user/password/group authentication to work. I can't find any PAM error message in /var/log/messages or /var/log/secure. It looks like Hudson throws the exception before actually using PAM for authentication.
I found the solution after debugging the code of libpam4j, which Hudson uses for the PAM security realm:
The service name must be 'sshd' in my case, because I want to use NIS for authentication. RHEL 4.x uses PAM 0.77, which strictly depends on the service name specified by Hudson. However, my Ubuntu 10.04 machine, which uses PAM 1.1.1, accepts any meaningless service name.
The user who runs Hudson must have permission to read the PAM service file, /etc/pam.d/sshd in my case.
In my case, on Ubuntu 10.04, I had to use ssh instead of sshd for the Service Name.
I struggled with this problem for many hours. In the end, this is what worked for me:
1. Add the 'hudson' user to the root and shadow groups.
2. Install sshd (it was missing in /etc/pam.d).
3. Set the PAM service to login.
Then I could log in to Hudson with a Unix account and execute builds as a Unix user. I think point 1 is the one that fixed the issue.
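Putting the checks from these answers together (a quick sanity-check sketch; the paths come from this thread, and the service name may be sshd, ssh, or login depending on your distro):
ls -l /etc/pam.d/sshd   # the PAM service file the security realm points at must exist
id hudson               # 'shadow' should appear among hudson's groups
ls -l /etc/shadow       # must be readable by the shadow group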
