I am running two instances of a Spring Boot application on OpenShift, and both are joined to a Hazelcast cluster. However, I continuously see the error message below in the logs.
{"timestamp":"2021-06-01T20:29:48.775+10:00","app":"my-protected-application","logLevel":"INFO","thread":"hz.peaceful_pike.priority-generic-operation.thread-0","eventSource":"com.hazelcast.internal.cluster.impl.operations.SplitBrainMergeValidationOp","message":"[10.1.28.31]:5701 [spring-session-cluster] [4.2] Removing null, since it thinks it's already split from this cluster and looking to merge."}
{"timestamp":"2021-06-01T20:29:48.776+10:00","app":"my-protected-application","logLevel":"ERROR","thread":"hz.peaceful_pike.priority-generic-operation.thread-0","eventSource":"com.hazelcast.internal.cluster.impl.operations.SplitBrainMergeValidationOp","message":"[10.1.28.31]:5701 [spring-session-cluster] [4.2] Target is this node! -> [10.1.28.31]:5701","stack_trace":"<#d3566be0> j.l.IllegalArgumentException: Target is this node! -> [10.1.28.31]:5701\n\tat c.h.s.i.o.i.OutboundResponseHandler.checkTarget(OutboundResponseHandler.java:226)\n\tat c.h.s.i.o.i.OutboundResponseHandler.sendNormalResponse(OutboundResponseHandler.java:125)\n\tat c.h.s.i.o.i.OutboundResponseHandler.sendResponse(OutboundResponseHandler.java:88)\n\tat c.h.s.i.o.Operation.sendResponse(Operation.java:475)\n\tat c.h.s.i.o.i.OperationRunnerImpl.call(OperationRunnerImpl.java:282)\n\tat c.h.s.i.o.i.OperationRunnerImpl.run(OperationRunnerImpl.java:248)\n\tat c.h.s.i.o.i.OperationRunnerImpl.run(OperationRunnerImpl.java:469)\n\tat c.h.s.i.o.i.OperationThread.process(OperationThread.java:197)\n\tat c.h.s.i.o.i.OperationThread.process(OperationThread.java:137)\n\tat c.h.s.i.o.i.OperationThread.executeRun(OperationThread.java:123)\n\tat c.h.i.u.e.HazelcastManagedThread.run(HazelcastManagedThread.java:102)\n"}
{"timestamp":"2021-06-01T20:29:48.776+10:00","app":"my-protected-application","logLevel":"WARN","thread":"hz.peaceful_pike.priority-generic-operation.thread-0","eventSource":"com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl","message":"[10.1.28.31]:5701 [spring-session-cluster] [4.2] While sending op error... op: com.hazelcast.internal.cluster.impl.operations.SplitBrainMergeValidationOp{serviceName='hz:core:clusterService', identityHash=127643227, partitionId=-1, replicaIndex=0, callId=1046044, invocationTime=1622543389369 (2021-06-01 20:29:49.369), waitTimeout=-1, callTimeout=60000, tenantControl=com.hazelcast.spi.impl.tenantcontrol.NoopTenantControl#0}, error: java.lang.IllegalArgumentException: Target is this node! -> [10.1.28.31]:5701","stack_trace":"<#dbcc9949> j.l.IllegalArgumentException: Target is this node! -> [10.1.28.31]:5701, response: ErrorResponse{callId=1046044, urgent=true, cause=java.lang.IllegalArgumentException: Target is this node! -> [10.1.28.31]:5701}\n\tat c.h.s.i.o.i.OutboundResponseHandler.send(OutboundResponseHandler.java:113)\n\tat c.h.s.i.o.i.OutboundResponseHandler.sendResponse(OutboundResponseHandler.java:96)\n\tat c.h.s.i.o.Operation.sendResponse(Operation.java:475)\n\tat c.h.s.i.o.i.OperationRunnerImpl.sendResponseAfterOperationError(OperationRunnerImpl.java:425)\n\tat c.h.s.i.o.i.OperationRunnerImpl.handleOperationError(OperationRunnerImpl.java:419)\n\tat c.h.s.i.o.i.OperationRunnerImpl.run(OperationRunnerImpl.java:253)\n\tat c.h.s.i.o.i.OperationRunnerImpl.run(OperationRunnerImpl.java:469)\n\tat c.h.s.i.o.i.OperationThread.process(OperationThread.java:197)\n\tat c.h.s.i.o.i.OperationThread.process(OperationThread.java:137)\n\tat c.h.s.i.o.i.OperationThread.executeRun(OperationThread.java:123)\n\tat c.h.i.u.e.HazelcastManagedThread.run(HazelcastManagedThread.java:102)\n"}
{"timestamp":"2021-06-01T20:29:49.779+10:00","app":"my-protected-application","logLevel":"INFO","thread":"hz.peaceful_pike.InvocationMonitorThread","eventSource":"com.hazelcast.spi.impl.operationservice.impl.InvocationMonitor","message":"[10.1.28.31]:5701 [spring-session-cluster] [4.2] Invocations:2 timeouts:1 backup-timeouts:0"}
What does this log mean, and what is its significance?
Hazelcast for OpenShift
Check that you use a StatefulSet (not a Deployment). With DNS lookup discovery, using a Deployment may cause Hazelcast to start in split-brain mode. You may also want to increase the value of the service-dns-timeout parameter.
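For reference, here is a minimal sketch of Kubernetes DNS lookup discovery configured programmatically with a longer timeout. The cluster name matches the logs above, but the headless service name and the timeout value are assumptions you would adapt to your OpenShift project:

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HazelcastKubernetesConfig {
    public static void main(String[] args) {
        Config config = new Config();
        config.setClusterName("spring-session-cluster");

        JoinConfig join = config.getNetworkConfig().getJoin();
        // Multicast does not work across pods, so turn it off explicitly.
        join.getMulticastConfig().setEnabled(false);
        // DNS lookup discovery against the headless service that fronts the StatefulSet.
        join.getKubernetesConfig().setEnabled(true)
                // Hypothetical service name; point this at your StatefulSet's headless service.
                .setProperty("service-dns", "my-protected-application-hazelcast.my-namespace.svc.cluster.local")
                // Give DNS resolution more time before a member gives up and forms its own cluster.
                .setProperty("service-dns-timeout", "10");

        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
    }
}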
I'm launching 3 Elasticsearch nodes using the Elastic operator, and I tried to set up automated snapshots for these instances.
I followed this doc
I minified the JSON of the service account key, created a file called gcs.client.default.credentials_file with no file extension, and added this file to a Kubernetes secret.
I then added the secureSettings.secretName field to the spec of the Elasticsearch cluster and set it to the secret name, which was gcs-credentials.
But I get this error in the logs:
{"#timestamp":"2022-12-26T18:45:40.037Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elasticsearch-cluster-es-node-1","elasticsearch.cluster.name":"elasticsearch-cluster","error.type":"java.lang.IllegalStateException","error.message":"failed to load plugin class [org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin]","error.stack_trace":"java.lang.IllegalStateException: failed to load plugin class [org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin]\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:607)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadBundle(PluginsService.java:482)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadBundles(PluginsService.java:290)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:159)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.lambda$getPluginsServiceCtor$14(PluginsService.java:634)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:406)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.node.Node.<init>(Node.java:316)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\nCaused by: java.lang.reflect.InvocationTargetException\n\tat java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:79)\n\tat java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)\n\tat java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:484)\n\tat org.elasticsearch.server#8.5.0/org.elasticsearch.plugins.PluginsService.loadPlugin(PluginsService.java:600)\n\t... 9 more\nCaused by: java.lang.IllegalArgumentException: failed to load GCS client credentials from [gcs.client.default.credentials_file]\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.loadCredential(GoogleCloudStorageClientSettings.java:265)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.getClientSettings(GoogleCloudStorageClientSettings.java:221)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.load(GoogleCloudStorageClientSettings.java:209)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin.reload(GoogleCloudStoragePlugin.java:88)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStoragePlugin.<init>(GoogleCloudStoragePlugin.java:36)\n\tat java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:67)\n\t... 
12 more\nCaused by: java.io.IOException: Invalid PKCS#8 data.\n\tat com.google.auth.oauth2.ServiceAccountCredentials.privateKeyFromPkcs8(ServiceAccountCredentials.java:496)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromPkcs8(ServiceAccountCredentials.java:474)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromJson(ServiceAccountCredentials.java:212)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromStream(ServiceAccountCredentials.java:548)\n\tat com.google.auth.oauth2.ServiceAccountCredentials.fromStream(ServiceAccountCredentials.java:520)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.lambda$loadCredential$13(GoogleCloudStorageClientSettings.java:257)\n\tat java.base/java.security.AccessController.doPrivileged(AccessController.java:569)\n\tat org.elasticsearch.repositories.gcs.SocketAccess.doPrivilegedIOException(SocketAccess.java:33)\n\tat org.elasticsearch.repositories.gcs.GoogleCloudStorageClientSettings.loadCredential(GoogleCloudStorageClientSettings.java:256)\n\t... 17 more\n"}
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/elasticsearch-cluster.log
Try adding the following lines to your configuration (on each Elasticsearch node):
elasticsearch01:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
  ...
  ulimits:
    memlock:
      soft: -1
      hard: -1
Also check this link on Elasticsearch for more detailed information.
I have uploaded a saved configuration file from an Elastic Beanstalk application in one region to another Elastic Beanstalk application in a different region.
While loading that config I got an error:
Stack named 'awseb-e-sme7w3eym3-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBLoadBalancer]
Creating load balancer failed Reason: Property Listeners cannot be empty
Any idea about this issue?
See the config file:
AWSConfigurationTemplateVersion: 1.1.0.0
EnvironmentConfigurationMetadata:
  DateCreated: '1580272974000'
  DateModified: '1580273310143'
  Description: xxxxxxxxxxxxxxxxxxxxx
EnvironmentTier:
  Name: WebServer
  Type: Standard
OptionSettings:
  AWSEBAutoScalingGroup.aws:autoscaling:updatepolicy:rollingupdate:
    MaxBatchSize: '1'
    MinInstancesInService: '1'
    RollingUpdateEnabled: true
    RollingUpdateType: Health
  AWSEBAutoScalingLaunchConfiguration.aws:autoscaling:launchconfiguration:
    EC2KeyName: xxxxxxxxxxxxxxxxxxx
  AWSEBCloudwatchAlarmHigh.aws:autoscaling:trigger:
    UpperThreshold: '60'
  AWSEBCloudwatchAlarmLow.aws:autoscaling:trigger:
    BreachDuration: '2'
    LowerThreshold: '25'
    MeasureName: CPUUtilization
    Period: '1'
    Statistic: Maximum
    Unit: Percent
  AWSEBLoadBalancerSecurityGroup.aws:ec2:vpc:
    VPCId: vpc-xxxxxxxxxxxxxxxx
  AWSEBV2LoadBalancerListener.aws:elbv2:listener:default:
    ListenerEnabled: false
  AWSEBV2LoadBalancerListener443.aws:elbv2:listener:443:
    SSLCertificateArns: arn:aws:acm:us-east-2:xxxxxxxxxxx:certificate/xxxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxxxx
  AWSEBV2LoadBalancerTargetGroup.aws:elasticbeanstalk:environment:process:default:
    HealthCheckPath: /rest/account/ping
    MatcherHTTPCode: '200'
    Port: '80'
    Protocol: HTTP
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    SecurityGroups:
    - sg-xxxxxxxxxxxxx
  aws:ec2:instances:
    InstanceTypes: t2.small
  aws:ec2:vpc:
    ELBSubnets: subnet-xxxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxx
    Subnets: subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx,subnet-xxxxxxxxxxxxxxxxx
  aws:elasticbeanstalk:application:environment:
    JDBC_CONNECTION_STRING: jdbc:mysql://xxxxxxxxxxxxxxxxxxxxxxxxxxxx?user=xxxxxxxx&password=xxxxxxxxxxx&rewriteBatchedStatements=true&characterEncoding=UTF-8
    aws.accessKeyId: xxxxxxxxxxxxxxxxxx
    aws.secretKey: xxxxxxxxxxxxxxxxxxxx
    com.aws.secretManger.secret.name: xxxxxxxxxxxxxxx
    com.aws.secretManger.secret.region: us-east-2
    com.decsond.loggly.token: xxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx#xxxxx
    com.decsond.metakey: xxxxxxxxxxxxxxxxx/XXX==
    com.decsond.mode: debug
    com.decsond.server.db.environment: aws
    com.decsond.server.dpBinaryColumn: xxxxxxxxxxxx
    com.decsond.server.environment: xxxxxxxxxx
    com.decsond.server.type: pms
  aws:elasticbeanstalk:container:tomcat:jvmoptions:
    JVM Options: -XX:+CMSClassUnloadingEnabled -Dmvel.disable.jit=true -Ddrools.permgenThreshold=0
    Xms: 512m
    Xmx: 1024m
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application
    ServiceRole: arn:aws:iam::xxxxxxxxxxxxxx:role/aws-elasticbeanstalk-service-role
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:managedactions:
    ManagedActionsEnabled: true
    PreferredStartTime: SAT:03:01
  aws:elasticbeanstalk:managedactions:platformupdate:
    InstanceRefreshEnabled: true
    UpdateLevel: minor
  aws:elasticbeanstalk:xray:
    XRayEnabled: true
  aws:elbv2:listener:443:
    DefaultProcess: default
    ListenerEnabled: true
    Protocol: HTTPS
    Rules: ''
    SSLPolicy: ELBSecurityPolicy-2016-08
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:us-east-2::platform/Tomcat 8.5 with Java 8 running on 64bit Amazon Linux/3.3.1
Any idea about the issue?
The most likely reason is that you are referencing objects that exist only in the region the config was saved from.
Is this the first EB application/environment in the new region?
If it is, it's worth first creating a test application and environment using the features you want; that will give EB a chance to create all the region-specific, behind-the-scenes resources it relies on.
Let's say my Java chaincode (running on Fabric 1.4.4) wants to throw an exception to show that the new asset to be created already exists. I am throwing a RuntimeException with the problem or error (in this case, "Contract LL00001 already registered"), which is logged in the peer node executing the transaction:
2019-11-29 20:15:37.807 UTC [peer.chaincode.nid1-blockchain-hapeer1-mrrc-0.1.4] func2 -> INFO 16a8ec Contract LL00001 already registered
2019-11-29 20:15:37.807 UTC [peer.chaincode.nid1-blockchain-hapeer1-mrrc-0.1.4] func2 -> INFO 16a8ed java.lang.RuntimeException: Contract LL00001 already registered
But then, after the stack trace, I see that the peer node returns it as a 500 error without including my error description or any reference to the Java exception (this makes sense, as that error is language agnostic):
2019-11-29 20:15:37.807 UTC [peer.chaincode.nid1-blockchain-hapeer1-mrrc-0.1.4] func2 -> INFO 16a8ff 20:15:37:804 SEVERE org.hyperledger.fabric.shim.impl.ChaincodeInnvocationTask call [1f56a053] Invoke failed with error code 500. Sending ERROR
This is logged in my client Java application (which uses the fabric-java-sdk):
org.hyperledger.fabric.sdk.exception.InvalidArgumentException: Proposal response is invalid.
at org.hyperledger.fabric.sdk.ProposalResponse.getChaincodeActionResponsePayload(ProposalResponse.java:272)
at ...
So I just know that there was a problem in the chaincode, but I can't tell what the problem is. How can I get the error type and description so I can show the problem to the user? Right now I have to go to the peer node and check the logs there to see what the problem is.
Note: I am implementing the new org.hyperledger.fabric.contract.ContractInterface in my chaincode class.
Update: the peer node logs the exception (org.hyperledger.fabric.shim.ChaincodeException) and seems to correctly return the error message ("The document was not found") in the 500 response, as shown in the log below, but this message does not reach the Java SDK:
2019-12-23 22:11:09.178 UTC [peer.chaincode.nid1-blockchain-hapeer1-mrrc-0.9.7] func2 -> INFO 5aa7 22:11:09:176 SEVERE org.hyperledger.fabric.shim.impl.ChaincodeInnvocationTask call [12cc4ad0] Invoke failed with error code 500. Sending ERROR
2019-12-23 22:11:09.179 UTC [peer.chaincode.nid1-blockchain-hapeer1-mrrc-0.9.7] func2 -> INFO 5aa8 22:11:09:177 FINE org.hyperledger.fabric.shim.impl.ChaincodeSupportClient$2 accept > sendToPeer 12cc4ad09a1feb7fc1246ac04bf69509204ca74368be2c7e4bbf0a503e90417f
2019-12-23 22:11:09.181 UTC [endorser] callChaincode -> INFO 5aa9 [mrrc][12cc4ad0] Exit chaincode: name:"mrrc" (36ms)
2019-12-23 22:11:09.181 UTC [endorser] SimulateProposal -> ERRO 5aaa [mrrc][12cc4ad0] failed to invoke chaincode name:"mrrc" , error: transaction returned with failure: The document was not found
Edit: It seems to be an error in the Java SDK. I have created an issue in Fabric's JIRA:
https://jira.hyperledger.org/browse/FABJ-508
To throw this error back to your Fabric Java SDK client, one way would be to make your chaincode class extend the ChaincodeBase class (imported from org.hyperledger.fabric.shim) and then use its newErrorResponse method in each of your chaincode methods to return your custom errors, passing the error string as its first (or only) parameter. You can have a look at this example from the fabric-samples repo:
https://github.com/hyperledger/fabric-samples/blob/master/chaincode/abstore/java/src/main/java/org/hyperledger/fabric-samples/ABstore.java#L28
To see the overloaded implementations of the newErrorResponse method, and the other parameters you can pass to it, see:
https://jar-download.com/artifacts/org.hyperledger.fabric-chaincode-java/fabric-chaincode-shim/1.4.0/source-code/org/hyperledger/fabric/shim/ChaincodeBase.java
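A minimal sketch of that approach follows; the parameter handling and state checks are illustrative only, not taken from the question's chaincode:

import java.nio.charset.StandardCharsets;

import org.hyperledger.fabric.shim.Chaincode.Response;
import org.hyperledger.fabric.shim.ChaincodeBase;
import org.hyperledger.fabric.shim.ChaincodeStub;

public class ContractChaincode extends ChaincodeBase {

    @Override
    public Response init(ChaincodeStub stub) {
        return newSuccessResponse();
    }

    @Override
    public Response invoke(ChaincodeStub stub) {
        String contractId = stub.getParameters().get(0);
        byte[] existing = stub.getState(contractId);
        if (existing != null && existing.length > 0) {
            // The message travels back in the proposal response instead of a bare 500.
            return newErrorResponse("Contract " + contractId + " already registered");
        }
        stub.putState(contractId, stub.getParameters().get(1).getBytes(StandardCharsets.UTF_8));
        return newSuccessResponse();
    }

    public static void main(String[] args) {
        new ContractChaincode().start(args);
    }
}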
UPDATE: If you are instead using the newer ContractInterface (as suggested by icordoba) for your chaincode implementation, then you should throw an instance of the ChaincodeException class (import org.hyperledger.fabric.shim.ChaincodeException) to achieve the same. You can have a look at a sample chaincode here.
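Under that newer contract API, a minimal sketch would look like this; the contract name, transaction method, and key handling are made up for illustration:

import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Default;
import org.hyperledger.fabric.contract.annotation.Transaction;
import org.hyperledger.fabric.shim.ChaincodeException;

@Contract(name = "ContractRegistry")
@Default
public class ContractRegistry implements ContractInterface {

    @Transaction
    public void registerContract(Context ctx, String contractId) {
        byte[] existing = ctx.getStub().getState(contractId);
        if (existing != null && existing.length > 0) {
            // The exception message is what the endorser reports for the failed proposal.
            throw new ChaincodeException("Contract " + contractId + " already registered");
        }
        ctx.getStub().putState(contractId, new byte[] {1});
    }
}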
When attempting to connect to a topic from a Java Jetty microservice, I'm getting this Kafka internal version mismatch error:
stream-thread [App-94d44dcd-f1d4-49a6-9dd3-8d4eee06f82a-StreamThread-1] Encountered the following error during processing:
java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
at org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.<init>(SubscriptionInfo.java:67)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Any ideas on what could cause such an exception?
I have come across this error myself, and it is most likely because you have used a non-unique APPLICATION_ID_CONFIG and/or CLIENT_ID_CONFIG:
// Give the Streams application a unique name. The name must be unique in the Kafka cluster
// against which the application is run.
streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, "my-client");
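For context, here is a self-contained sketch with both IDs set; the bootstrap servers and topic names are placeholders, not values from the question:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class MyStreamsApp {
    public static void main(String[] args) {
        Properties streamsConfiguration = new Properties();
        streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The application ID must be unique in the Kafka cluster against which the application runs.
        streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        // The client ID should not collide with another running instance.
        streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, "my-client");
        streamsConfiguration.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        streamsConfiguration.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Trivial passthrough topology, just to make the example runnable.
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), streamsConfiguration);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}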
Here is the relevant part of my application properties:
spring.cloud.stream.rabbit.bindings.studentInput.consumer.exchange-type=direct
spring.cloud.stream.rabbit.bindings.studentInput.consumer.delayed-exchange=true
But it appears that in the RabbitMQ admin page, my queue does not have x-delayed-type: direct in its Args. I am referring to this Spring Cloud Stream documentation: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/
What am I doing wrong? Thanks in advance :D
I just tested it and it worked fine.
Did you enable the plugin? If not, you should see this in the log...
2018-07-09 08:52:04.173 ERROR 156 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method(reply-code=503, reply-text=COMMAND_INVALID - unknown exchange type 'x-delayed-message', class-id=40, method-id=10)
See the plugin documentation.
Another possibility is that the exchange already existed. Exchange configuration is immutable; you will see a message like this...
2018-07-09 09:04:43.202 ERROR 3309 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'type' for exchange 'so51244078' in vhost '/': received ''x-delayed-message'' but current is 'direct', class-id=40, method-id=10)
In this case you have to delete the exchange first.
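If you would rather do that cleanup from code than from the management UI, here is a sketch using Spring AMQP's RabbitAdmin; the host, credentials, and exchange name (taken from the log line above) are assumptions to adapt:

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

public class DeleteStaleExchange {
    public static void main(String[] args) {
        // Assumes a local broker with the default guest credentials.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        RabbitAdmin admin = new RabbitAdmin(connectionFactory);
        // Delete the previously declared exchange so it can be re-declared as x-delayed-message.
        admin.deleteExchange("so51244078");
        connectionFactory.destroy();
    }
}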
By the way, you will need a routing key too; by default the queue will be bound with the topic exchange wildcard #.