I'm trying to use the Stanford Segmenter from the NLTK tokenize package. However, I run into issues just trying to run the basic test. Running the following:
# -*- coding: utf-8 -*-
from nltk.tokenize.stanford_segmenter import StanfordSegmenter
seg = StanfordSegmenter()
seg.default_config('zh')
sent = u'这是斯坦福中文分词器测试'
print(seg.segment(sent))
Running this results in an error.
I got as far as adding...
import os
javapath = "C:/Users/User/Folder/stanford-segmenter-2017-06-09/*"
os.environ['CLASSPATH'] = javapath
...to the front of my code, but that didn't seem to help.
How do I get the segmenter to run properly?
Note: This solution would only work for:
NLTK v3.2.5 (v3.2.6 would have an even simpler interface)
Stanford CoreNLP (version >= 2016-10-31)
First, make sure Java 8 is properly installed; once Stanford CoreNLP works on the command line, the Stanford CoreNLP API in NLTK v3.2.5 works as follows.
Note: You have to start the CoreNLP server in the terminal BEFORE using the new CoreNLP API in NLTK.
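A quick way to confirm the server is actually up before calling it from NLTK is to query it with curl (a minimal check; it assumes the default port 9000 used in the English example below, so substitute 9001-9005 for the other languages):
curl http://localhost:9000
If the server is running you get its built-in web page back; if not, curl reports that the connection was refused.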
English
In terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-preload tokenize,ssplit,pos,lemma,parse,depparse \
-status_port 9000 -port 9000 -timeout 15000
In Python:
>>> from nltk.tag.stanford import CoreNLPPOSTagger, CoreNLPNERTagger
>>> stpos, stner = CoreNLPPOSTagger(), CoreNLPNERTagger()
>>> stpos.tag('What is the airspeed of an unladen swallow ?'.split())
[(u'What', u'WP'), (u'is', u'VBZ'), (u'the', u'DT'), (u'airspeed', u'NN'), (u'of', u'IN'), (u'an', u'DT'), (u'unladen', u'JJ'), (u'swallow', u'VB'), (u'?', u'.')]
>>> stner.tag('Rami Eid is studying at Stony Brook University in NY'.split())
[(u'Rami', u'PERSON'), (u'Eid', u'PERSON'), (u'is', u'O'), (u'studying', u'O'), (u'at', u'O'), (u'Stony', u'ORGANIZATION'), (u'Brook', u'ORGANIZATION'), (u'University', u'ORGANIZATION'), (u'in', u'O'), (u'NY', u'O')]
Chinese
In terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
wget http://nlp.stanford.edu/software/stanford-chinese-corenlp-2016-10-31-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-chinese.properties
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-chinese.properties \
-preload tokenize,ssplit,pos,lemma,ner,parse \
-status_port 9001 -port 9001 -timeout 15000
In Python:
>>> from nltk.tag.stanford import CoreNLPPOSTagger, CoreNLPNERTagger
>>> from nltk.tokenize.stanford import CoreNLPTokenizer
>>> stpos, stner = CoreNLPPOSTagger('http://localhost:9001'), CoreNLPNERTagger('http://localhost:9001')
>>> sttok = CoreNLPTokenizer('http://localhost:9001')
>>> sttok.tokenize(u'我家没有电脑。')
['我家', '没有', '电脑', '。']
# Without segmentation (input to `raw_string_parse()` is a list of single-char strings)
>>> stpos.tag(u'我家没有电脑。')
[('我', 'PN'), ('家', 'NN'), ('没', 'AD'), ('有', 'VV'), ('电', 'NN'), ('脑', 'NN'), ('。', 'PU')]
# With segmentation
>>> stpos.tag(sttok.tokenize(u'我家没有电脑。'))
[('我家', 'NN'), ('没有', 'VE'), ('电脑', 'NN'), ('。', 'PU')]
# Without segmentation (input to `raw_string_parse()` is a list of single-char strings)
>>> stner.tag(u'奥巴马与迈克尔·杰克逊一起去杂货店购物。')
[('奥', 'GPE'), ('巴', 'GPE'), ('马', 'GPE'), ('与', 'O'), ('迈', 'O'), ('克', 'PERSON'), ('尔', 'PERSON'), ('·', 'O'), ('杰', 'O'), ('克', 'O'), ('逊', 'O'), ('一', 'NUMBER'), ('起', 'O'), ('去', 'O'), ('杂', 'O'), ('货', 'O'), ('店', 'O'), ('购', 'O'), ('物', 'O'), ('。', 'O')]
# With segmentation
>>> stner.tag(sttok.tokenize(u'奥巴马与迈克尔·杰克逊一起去杂货店购物。'))
[('奥巴马', 'PERSON'), ('与', 'O'), ('迈克尔·杰克逊', 'PERSON'), ('一起', 'O'), ('去', 'O'), ('杂货店', 'O'), ('购物', 'O'), ('。', 'O')]
German
In terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
wget http://nlp.stanford.edu/software/stanford-german-corenlp-2016-10-31-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-german.properties
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-german.properties \
-preload tokenize,ssplit,pos,ner,parse \
-status_port 9002 -port 9002 -timeout 15000
In Python:
>>> from nltk.tag.stanford import CoreNLPPOSTagger, CoreNLPNERTagger
>>> stpos, stner = CoreNLPPOSTagger('http://localhost:9002'), CoreNLPNERTagger('http://localhost:9002')
>>> stpos.tag('Ich bin schwanger'.split())
[('Ich', 'PPER'), ('bin', 'VAFIN'), ('schwanger', 'ADJD')]
>>> stner.tag('Donald Trump besuchte Angela Merkel in Berlin.'.split())
[('Donald', 'I-PER'), ('Trump', 'I-PER'), ('besuchte', 'O'), ('Angela', 'I-PER'), ('Merkel', 'I-PER'), ('in', 'O'), ('Berlin', 'I-LOC'), ('.', 'O')]
Spanish
In terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
wget http://nlp.stanford.edu/software/stanford-spanish-corenlp-2016-10-31-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-spanish.properties
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-spanish.properties \
-preload tokenize,ssplit,pos,ner,parse \
-status_port 9003 -port 9003 -timeout 15000
In Python:
>>> from nltk.tag.stanford import CoreNLPPOSTagger, CoreNLPNERTagger
>>> stpos, stner = CoreNLPPOSTagger('http://localhost:9003'), CoreNLPNERTagger('http://localhost:9003')
>>> stner.tag(u'Barack Obama salió con Michael Jackson .'.split())
[(u'Barack', u'PERS'), (u'Obama', u'PERS'), (u'sali\xf3', u'O'), (u'con', u'O'), (u'Michael', u'PERS'), (u'Jackson', u'PERS'), (u'.', u'O')]
>>> stpos.tag(u'Barack Obama salió con Michael Jackson .'.split())
[(u'Barack', u'np00000'), (u'Obama', u'np00000'), (u'sali\xf3', u'vmis000'), (u'con', u'sp000'), (u'Michael', u'np00000'), (u'Jackson', u'np00000'), (u'.', u'fp')]
French
In terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
wget http://nlp.stanford.edu/software/stanford-french-corenlp-2016-10-31-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-french.properties
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-french.properties \
-preload tokenize,ssplit,pos,parse \
-status_port 9004 -port 9004 -timeout 15000
In Python:
>>> from nltk.tag.stanford import CoreNLPPOSTagger
>>> stpos = CoreNLPPOSTagger('http://localhost:9004')
>>> stpos.tag('Je suis enceinte'.split())
[(u'Je', u'CLS'), (u'suis', u'V'), (u'enceinte', u'NC')]
Arabic
In terminal:
wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
unzip stanford-corenlp-full-2016-10-31.zip && cd stanford-corenlp-full-2016-10-31
wget http://nlp.stanford.edu/software/stanford-arabic-corenlp-2016-10-31-models.jar
wget https://raw.githubusercontent.com/stanfordnlp/CoreNLP/master/src/edu/stanford/nlp/pipeline/StanfordCoreNLP-arabic.properties
java -Xmx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
-serverProperties StanfordCoreNLP-arabic.properties \
-preload tokenize,ssplit,pos,parse \
-status_port 9005 -port 9005 -timeout 15000
In Python:
>>> from nltk.tag.stanford import CoreNLPPOSTagger
>>> from nltk.tokenize.stanford import CoreNLPTokenizer
>>> sttok = CoreNLPTokenizer('http://localhost:9005')
>>> stpos = CoreNLPPOSTagger('http://localhost:9005')
>>> text = u'انا حامل'
>>> stpos.tag(sttok.tokenize(text))
[('انا', 'DET'), ('حامل', 'NC')]
I have a jar application that has several functions, one of which is to convert from HTML to XML. When I try to run a simple command such as:
java -jar lt4el-cmd.jar send -l en "l2:https://en.wikipedia.org/wiki/Personal_computer"
I get the following errors:
ERROR [Thread-1]: html2base/html2base-wrapper.sh: Too late for "-C" option at html2base/html2xml.pl line 1.
/tmp/lpc.30872.html: failed
cat: /tmp/lpc.30872.xml: No such file or directory
(LpcControl.java:229)
ERROR [Thread-1]: ana2ont/ana2ont.sh ${lang}: -:1: parser error : Document is empty
-:1: parser error : Start tag expected, '<' not found
Tokenization/tagging failed
^
-:1: parser error : Document is empty
unable to parse -
-:1: parser error : Document is empty
unable to parse -
(LpcControl.java:229)
ERROR [Thread-1]: Error in conversion: Error running conversion script (ana2ont/ana2ont.sh ${lang}): 6 (AppInterface.java:159)
This is the html2base-wrapper.sh script which seems to be where the first error occurs.
#!/bin/bash
if [ "$1" == "check" ]; then
. common.sh
check_binary perl || exit 1
check_perl_module HTML::TreeBuilder || exit 1
check_perl_module XML::LibXML || exit 1
check_binary tidy || exit 1
check_binary xmllint || exit 1
check_binary xsltproc || exit 1
exit
fi
cat >"$TMPDIR/lpc.$$.html"
html2base/html2base.sh -d html2base/LT4ELBase.dtd -x html2base/LT4ELBase.xslt -t "$TMPDIR/lpc.$$.html" >&2
cat "$TMPDIR/lpc.$$.xml";
rm -f "$TMPDIR"/lpc.$$.{ht,x}ml
And the html2base.sh script:
#!/bin/bash
#
# Sample script for automated HTML -> XML conversion
#
# Miroslav Spousta <spousta@ufal.mff.cuni.cz>
# $Id: html2base.sh 462 2008-03-17 08:37:14Z qiq $
basedir=`dirname $0`;
# constants
HTML2XML_BIN=${basedir}/html2xml.pl
ICONV_BIN=iconv
TIDY_BIN=tidy
XMLLINT_BIN=xmllint
XSLTPROC_BIN=xsltproc
DTDPARSE_BIN=dtdparse
TMPDIR=/tmp
# default values
VERBOSE=0
ENCODING=
TIDY=0
VALIDATE=0
DTD=${basedir}/LT4ELBase.dtd
XSLT=${basedir}/LT4ELBase.xslt
usage()
{
echo "usage: html2base.sh [options] file(s)"
echo "HTML -> XML conversion script."
echo
echo " -e, --encoding=charset Convert input files from encoding to UTF-8 (none)"
echo " -d, --dtd=file DTD to be used for conversion and validation ($DTD)"
echo " -x, --xslt=file XSLT to be applied after conversion ($XSLT)"
echo " -t, --tidy Run HTMLTidy on input HTML files"
echo " -a, --validate Validate output XML files"
echo " -v, --verbose Be verbose"
echo " -h, --help Print this usage"
exit 1;
}
OPTIONS=`getopt -o e:d:x:tahv -l encoding:,dtd:,xslt:,tidy,validate,verbose,help -n 'convert.sh' -- "$@"`
if [ $? != 0 ]; then
usage;
fi
eval set -- "$OPTIONS"
while true ; do
case "$1" in
-e | --encoding) ENCODING=$2; shift 2 ;;
-d | --dtd) DTD=$2; shift 2 ;;
-x | --xslt) XSLT=$2; shift 2 ;;
-t | --tidy) TIDY=1; shift 1;;
-a | --validate) VALIDATE=1; shift 1;;
-v | --verbose) VERBOSE=1; shift 1 ;;
-h | --help) usage; shift 1 ;;
--) shift ; break ;;
*) echo "Internal error!" ; echo $1; exit 1 ;;
esac
done
if [ $# -eq 0 ]; then
usage;
fi
DTD_XML=`echo "$DTD"|sed -e 's/\.dtd/.xml/'`
if [ "$VERBOSE" -eq 1 ]; then
VERBOSE=--verbose
else
VERBOSE=
fi
# create $DTD_XML if necessary
if [ ! -f "$DTD_XML" ]; then
if ! $DTDPARSE_BIN $DTD -o $DTD_XML 2>/dev/null; then
echo "cannot run dtdparse, cannot create $DTD_XML";
exit 1;
fi;
fi
# process file by file
total=0
nok=0
while [ -n "$1" ]; do
file=$1;
if [ -n "$VERBOSE" ]; then
echo "Processing $file..."
fi
f="$file";
result=0;
if [ -n "$ENCODING" ]; then
$ICONV_BIN -f "$ENCODING" -t utf-8 "$f" -o "$file.xtmp"
result=$?
error="encoding error"
f=$file.xtmp
fi
if [ "$result" -eq 0 ]; then
if [ "$TIDY" = '1' ]; then
$TIDY_BIN --force-output 1 -q -utf8 >"$file.xtmp2" "$f" 2>/dev/null
f=$file.xtmp2
fi
out=`echo $file|sed -e 's/\.x\?html\?$/.xml/'`
if [ "$out" = "$file" ]; then
out="$out.xml"
fi
$HTML2XML_BIN --simplify-ws $VERBOSE $DTD_XML -o "$out" "$f"
result=$?
error="failed"
fi
if [ "$result" -eq 0 ]; then
$XSLTPROC_BIN --path `dirname $DTD` $XSLT "$out" |$XMLLINT_BIN --noblanks --format -o "$out.tmp1" -
result=$?
error="failed"
mv "$out.tmp1" "$out"
if [ "$result" -eq 0 -a "$VALIDATE" = '1' ]; then
tmp=`dirname $file`/$DTD
delete=0
if [ ! -f $tmp ]; then
cp $DTD $tmp
delete=1
fi
$XMLLINT_BIN --path `dirname $DTD` --valid --noout "$out"
result=$?
error="validation error"
if [ "$delete" -eq 1 ]; then
rm -f $tmp
fi
fi
fi
if [ "$result" -eq 0 ]; then
if [ -n "$VERBOSE" ]; then
echo "OK"
fi
else
echo "$file: $error "
nok=`expr $nok + 1`
fi
total=`expr $total + 1`
rm -f $file.xtmp $file.xtmp2
shift;
done
if [ -n "$VERBOSE" ]; then
echo
echo "Total: $total, failed: $nok"
fi
And the beginning part of the html2xml.pl file:
#!/usr/bin/perl -W -C
# Simple HTML to XML (subset of XHTML) conversion tool. Should always produce a
# valid XML file according to the output DTD file specified.
#
# Miroslav Spousta <spousta@ufal.mff.cuni.cz>
# $Id: html2xml.pl 461 2008-03-09 09:49:42Z qiq $
use HTML::TreeBuilder;
use HTML::Element;
use HTML::Entities;
use XML::LibXML;
use Getopt::Long;
use Data::Dumper;
use strict;
I can't seem to figure out where the problem is. And what exactly does ERROR [Thread-1] mean?
Thanks
The error comes from having -C on the shebang (#!) line of a Perl script without also passing -C to perl on the command line. This type of error happens when someone does
perl html2base/html2xml.pl ...
instead of
html2base/html2xml.pl ...
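If you cannot easily change how the script is invoked (for example, the perl call is buried inside the jar's wrapper), a minimal workaround is to pass the same switch explicitly so it matches the shebang:
perl -C html2base/html2xml.pl ...
Alternatively, removing -C from the #! line avoids the check altogether, but only do that if the script does not depend on the UTF-8 I/O layers that -C turns on.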
The error was from the html2xml.pl script, as other users rightly mentioned. I'm running an Ubuntu 16.04.2 system, which comes with Perl 5.22 by default. As this post mentions, using the -C option (as of Perl 5.10.1) on the #! line requires you to also specify it on the command line at execution time, which I wasn't sure how to do because I was running a jar file. Instead, I installed perlbrew, used it to get an earlier version of Perl, and modified my Perl script to:
#!/usr/bin/path/to/perlbrew/perl -W -C
# Simple HTML to XML (subset of XHTML) conversion tool. Should always produce a
# valid XML file according to the output DTD file specified.
#
# Miroslav Spousta <spousta@ufal.mff.cuni.cz>
# $Id: html2xml.pl 461 2008-03-09 09:49:42Z qiq $
This might also come in handy in setting up shell scripts when using perlbrew.
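For reference, a rough sketch of that setup inside a wrapper shell script could look like this (the install location and Perl version are assumptions; adjust them to whatever perlbrew reports on your machine):
# source perlbrew's shell integration, then activate an older Perl for this shell
export PERLBREW_ROOT="$HOME/perl5/perlbrew"
source "$PERLBREW_ROOT/etc/bashrc"
perlbrew use perl-5.8.9
perl -v   # should now report the perlbrew-managed version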
Thanks to everyone for their efforts and contributions.
This is a weird problem that has been confusing me for days. I want to get a class's fully qualified name by parsing the Java code file in shell. We can get the package name from a line like:
package com.android.mail.ui;
and get the class name from the code file path, using the shell command 'basename'.
Below is my shell script:
#!/bin/bash
get_package_name(){
    java_file=$1
    if [ ! -f "$java_file" ]; then
        echo "Sorry, the java file does not exist: $1, please check"
        exit 1
    fi
    class_base_name=`basename "$java_file" .java`
    echo "class_base_name:$class_base_name"
    # take the word after "package", then strip the trailing ';' and spaces
    package_name=`grep $java_file -e "^package" | awk -F " " '{print $2}' | tr ';' ' ' | sed 's/ //g'`
    echo "package_name get result:$?"
    echo "package_name:$package_name"
    # method 1: concatenate the variables directly
    classpath_name=$package_name.$class_base_name
    echo "method 1 classpath_name:$classpath_name"
    # method 2: use sed replacement to concatenate indirectly
    classpath_name2=`echo "aa.bb" | sed "s/aa/$package_name/" | sed "s/bb/$class_base_name/"`
    echo "method 2 classpath_name2:$classpath_name2"
}
The problem is: for some code files the result is OK, like:
"class_base_name:MailTransport package_name get result:0
package_name:com.android.email.mail.transport method 1
classpath_name:com.android.email.mail.transport.MailTransport method 2
classpath_name2:com.android.email.mail.transport.MailTransport"
for others it's output is : "class_base_name:EmailApplication
package_name get result:0 package_name:com.android.email
.EmailApplicationh_name:com.android.email
.EmailApplicationh_name2:com.android.email"
The result is totally messed up and wrong. I suspect it relates to the content of the code files; could that really explain the result?
This happens because some of your files use Windows style CRLF (\r\n) line terminators.
Here's an example where it works, a normal Unix style LF (\n) terminated file:
$ file WorkingFile.java
WorkingFile.java: ASCII text
$ cat -v WorkingFile.java
package foo.bar.baz;
$ get_package_name WorkingFile.java
class_base_name:WorkingFile
package_name get result:0
package_name:foo.bar.baz
method 1 classpath_name:foo.bar.baz.WorkingFile
Here's an example where it fails, with CRLF line terminators:
$ file FailingFile.java
FailingFile.java: ASCII text, with CRLF line terminators
$ cat -v FailingFile.java
package foo.bar.baz;^M <--- note hidden control char revealed by -v
$ get_package_name FailingFile.java
class_base_name:FailingFile
package_name get result:0
package_name:foo.bar.baz
.FailingFilesspath_name:foo.bar.baz
To fix it, you can delete the extra carriage returns using tr -d '\r'. I switched from legacy backticks to modern $() to avoid problems with backslashes:
package_name=$(grep $java_file -e "^package" | awk -F " " '{print $2}' | tr ';' ' ' | sed 's/ //g' | tr -d '\r')
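Alternatively, you can normalize the offending files once instead of stripping the carriage returns in every pipeline; either of these converts the failing example above to Unix line endings in place (dos2unix may need to be installed separately):
dos2unix FailingFile.java
sed -i 's/\r$//' FailingFile.java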
For more information, see this relevant post.
Using Java SchemaCrawler, why is it scanning every table in my database? Shouldn't it just be scanning the database I specified on the command line: -database=openfire?
:: schemacrawler batch launcher
@echo off
C:\JDK\bin\java.exe -classpath jtds-1.2.4.jar;schemacrawler-8.8.jar;schemacrawler-sqlserver-8.8.jar schemacrawler.tools.sqlserver.Main -user=sa -password=password -database=openfire -port=1433 -host=localhost -table_types=TABLE -command=schema -schemas=.*\.dbo.* -infolevel=standard -loglevel=FINE
OK, I figured it out. The -database flag only gives the SQL driver a connection path; it doesn't affect the filter for the database name. In other words:
-database=openfire DOES NOT EQUAL -schemas=openfire.dbo.*
So, the answer is:
:: schemacrawler batch launcher
@echo off
C:\JDK\bin\java.exe -classpath jtds-1.2.4.jar;schemacrawler-8.8.jar;schemacrawler-sqlserver-8.8.jar schemacrawler.tools.sqlserver.Main -user=sa -password=password -database=openfire -schemas=openfire.dbo.* -port=1433 -host=localhost -table_types=TABLE -command=schema -infolevel=standard -loglevel=FINE
My finished batch script:
@ECHO OFF
SETLOCAL ENABLEDELAYEDEXPANSION
:: first get timestamp of this script
SETLOCAL
FOR /F "skip=1 tokens=2-4 delims=(-)" %%a IN ('"echo.|date"') DO (
FOR /F "tokens=1-3 delims=/.- " %%A IN ("%DATE:* =%") DO (
SET %%a=%%A&SET %%b=%%B&SET %%c=%%C))
SET /A "yy=10000%yy% %%10000,mm=100%mm% %% 100,dd=100%dd% %% 100"
FOR /F "tokens=1-4 delims=:. " %%A IN ("%time: =0%") DO @SET UNIQUE=%yy%%mm%%dd%-%%A%%B
SET TITLE=Schema Crawler
TITLE=%TITLE%
:: supports DBNAME as argument
IF NOT "%1"=="" (
SET DBNAME=%1
C:\JDK\bin\java.exe -classpath jtds-1.2.4.jar;schemacrawler-8.8.jar;schemacrawler-sqlserver-8.8.jar schemacrawler.tools.sqlserver.Main -user=sa -password=password -database=!DBNAME! -schemas=!DBNAME!.dbo.* -port=1433 -host=localhost -table_types=TABLE -command=schema -procedures= -infolevel=lint -loglevel=OFF > !DBNAME!_schema_!UNIQUE!.txt
GOTO :END
)
:: run minimized
::IF NOT DEFINED PIL (
:: SET PIL=1
:: START /MIN "" %~0 %1
:: EXIT /B
::)
:: script start
ECHO Working...
OSQL.exe -E -Slocalhost -h-1 -Q"SET NOCOUNT ON;SELECT LTRIM(RTRIM(name)) FROM sysdatabases WHERE name NOT IN ('master','tempdb','model','msdb');" >dblist.txt
FOR /F "tokens=* delims= " %%I IN (dblist.txt) DO (
IF NOT "%%I"==" " (
SET DBNAME=%%I
SET DBNAME=!DBNAME: =!
ECHO !DBNAME!
C:\JDK\bin\java.exe -classpath jtds-1.2.4.jar;schemacrawler-8.8.jar;schemacrawler-sqlserver-8.8.jar schemacrawler.tools.sqlserver.Main -user=sa -password=password -database=!DBNAME! -schemas=!DBNAME!.dbo.* -port=1433 -host=localhost -table_types=TABLE -command=schema -procedures= -infolevel=lint -loglevel=OFF > !DBNAME!_schema_!UNIQUE!.txt
)
)
DEL /Q dblist.txt
GOTO :EXIT
:END
ECHO Finished processing %1 . Closing in 20 seconds...
ECHO.
FOR /l %%a in (20,-1,1) do (TITLE %TITLE% -- closing in %%as&ping -n 2 -w 1 127.0.0.1>NUL)
:EXIT
EXIT