
Greek word recognition while reading a URL with Python


I am new to Python. I have written a simple script that does the following:

  • Ask the user for a URL

  • Read the URL (urlopen(url).read())

  • Tokenize the result of the above call

I write the tokenized results into two files: one holds the Latin-character words (English, Spanish, etc.), the other holds everything else (Greek, etc.).

The problem is that when I open a Greek URL I do get the Greek text out of it, but it comes back as a sequence of characters rather than words (unlike the Latin case, which behaves as expected).

I would like to get a list of words ( μαρια γιωργος παιδι ) (3 items), but what I get is ('μ', 'α', 'ρ', 'ι', 'α', ...), with as many items as there are letters.

What should I do? (The encoding is UTF-8.)
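
To make the expected behaviour concrete, here is a minimal standalone example of what I am after (the sentence is just a sample, not taken from a real page):

# -*- coding: utf-8 -*-
import nltk

sample = u"μαρια γιωργος παιδι"      # a decoded (unicode) Greek sentence
tokens = nltk.word_tokenize(sample)  # expected: 3 items, one per word
print len(tokens)                    # 3
for t in tokens:
    print t.encode('utf-8')          # encode back to UTF-8 for the terminal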

The code follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

#Importing useful libraries 
#NOTE: Nltk should be installed first!!!
import nltk
import urllib #could also be urllib
import re
import lxml.html.clean
import unicodedata
from urllib import urlopen

http = "http://"
www = "www."
#pattern = r'[^\a-z0-9]'

#Demand url from the user
url=str(raw_input("Please, give a url and then press ENTER: \n"))


#Construct a valid url syntax
if (url.startswith("http://"))==False:
    if(url.startswith("www"))==False:
        msg=str(raw_input("Does it need 'www'? Y/N \n"))
        if (msg=='Y') | (msg=='y'):
            url=http+www+url
        elif (msg=='N') | (msg=='n'):
            url=http+url
        else:
            print "You should type 'y' or 'n'"
    else:
        url=http+url

latin_file = open("Latin_words.txt", "w")
greek_file = open("Other_chars.txt", "w")
latin_file.write(url + '\n')
latin_file.write("The latin words of the above url are the following:" + '\n')
greek_file.write("Οι ελληνικές λέξεις καθώς και απροσδιόριστοι χαρακτήρες")

#Reading the given url

raw=urllib.urlopen(url).read()

#Retrieve the html body from the url. Clean it from html special characters
pure = nltk.clean_html(raw)
text = pure

#Retrieve the words (tokens) of the html body in a list
tokens = nltk.word_tokenize(text)

counter=0
greeks=0
for i in tokens:
    if re.search('[^a-zA-Z]', i):
        #greeks+=1
        greek_file.write(i)
    else:
        if len(i)>=4:
            print i
            counter+=1
            latin_file.write(i + '\n')
        else:
            del i


#Print the number of words that I shall take as a result
print "The number of latin tokens is: %d" %counter

latin_file.write("The number of latin tokens is: %d and the number of other characters is: %d" %(counter, greeks))
latin_file.close()
greek_file.close()

I have checked it in many ways and, as far as I can tell, the program only recognizes Greek characters; it fails to recognize Greek words, that is, the spaces with which we separate words!

If I type a Greek sentence with spaces in the terminal, it is displayed correctly. The problem appears when I read something (e.g. the body text of an HTML page).

Also, about text_file.write(i) for a Greek i: if I write text_file.write(i + '\n'), the result is unrecognized characters, i.e. I lose my encoding!
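
To illustrate the writing step in isolation, a minimal sketch with an explicit encoding via codecs.open (codecs is not used in my script above; this is just to show what I mean):

# -*- coding: utf-8 -*-
import codecs

# Opening the output file with an explicit UTF-8 encoding lets a unicode
# string be written directly, newline included.
greek_file = codecs.open("Other_chars.txt", "w", encoding="utf-8")
greek_file.write(u"μαρια" + u"\n")
greek_file.close()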

Any ideas about the above?

3 Answers

  • 0

    Here is a simplified version of your code, using the excellent requests library to fetch the URL, a with statement to close the files automatically, and io to help with UTF-8.

    import io
    import nltk
    import requests
    import string
    
    url = raw_input("Please, give a url and then press ENTER: \n")
    if not url.startswith('http://'):
       url = 'http://'+url
    page_text = requests.get(url).text
    tokens = nltk.word_tokenize(page_text)
    
    latin_words = [w for w in tokens if w.isalpha()]
    greek_words = [w for w in tokens if w not in latin_words]
    
    print 'The number of latin tokens is {0}'.format(len(latin_words))
    
    with io.open('latin_words.txt','w',encoding='utf8') as latin_file, \
         io.open('greek_words.txt','w',encoding='utf8') as greek_file:
    
        greek_file.writelines(greek_words)
        latin_file.writelines(latin_words)
    
        latin_file.write(u'The number of latin words is {0} and the number of others {1}\n'.format(len(latin_words), len(greek_words)))
    

    I simplified the URL-checking part; an invalid URL simply cannot be read this way.
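
    If you want to keep a little URL normalization without the interactive prompt, a rough sketch with urlparse (my own addition, not part of the simplified code above) could look like this:

    from urlparse import urlparse

    def normalize(url):
        # Prepend a scheme only when the user left it out; anything beyond
        # that (typos, unreachable hosts) is left for requests to report.
        if not urlparse(url).scheme:
            url = 'http://' + url
        return url

    print normalize('www.example.com')     # http://www.example.com
    print normalize('http://example.com')  # unchanged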

  • 0

    Python's re module is notorious for its weak Unicode support. For serious Unicode work, consider the alternative regex module, which fully supports Unicode scripts and properties. Example:

    text = u"""
    Some latin words, for example: cat niño määh fuß
    Οι ελληνικές λέξεις καθώς και απροσδιόριστοι χαρακτήρες
    """
    
    import regex
    
    latin_words = regex.findall(ur'\p{Latin}+', text)
    greek_words = regex.findall(ur'\p{Greek}+', text)
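
    For example, printing what the two findall calls return (a usage sketch added here, not part of the original answer; the output is encoded for a Python 2 terminal):

    print u' | '.join(latin_words).encode('utf-8')   # the Latin-script words, including niño, määh, fuß
    print u' | '.join(greek_words).encode('utf-8')   # the Greek words from the sample text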
    
  • 0

    Here I think you are looking for substrings rather than whole strings with if re.search('[^a-zA-Z]', i); you can get the words from the list by looping over the token list.
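
    A rough sketch of how I read this suggestion, applying the same check from the question token by token (the sample token list is my own assumption, standing in for the nltk.word_tokenize output):

    # -*- coding: utf-8 -*-
    import re

    tokens = [u"cat", u"house", u"μαρια", u"παιδι"]   # stand-in for the tokenized page text

    latin, other = [], []
    for token in tokens:                    # loop over the token list, word by word
        if re.search(u'[^a-zA-Z]', token):  # token contains a character outside a-z/A-Z
            other.append(token)
        else:
            latin.append(token)

    print u' '.join(latin)                  # cat house
    print u' '.join(other).encode('utf-8')  # μαρια παιδι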
