
How do I find the duplicates in a list and create another list with them?


How can I find the duplicates in a Python list and create another list of the duplicates? The list contains only integers.

25 Answers

  • 68

    Here is a quick generator that uses a dict to store each element as a key with a boolean value for checking if the duplicate has already been yielded.

    For lists where all elements are hashable types:

    def gen_dupes(array):
        unique = {}
        for value in array:
            if value in unique and unique[value]:
                unique[value] = False
                yield value
            elif value not in unique:
                # only mark values not seen before, so a third occurrence
                # does not re-arm the flag and get yielded a second time
                unique[value] = True
    
    array = [1, 2, 2, 3, 4, 1, 5, 2, 6, 6]
    print(list(gen_dupes(array)))
    # => [2, 1, 6]
    

    For lists that may contain lists:

    def gen_dupes(array):
        unique = {}
        for value in array:
            is_list = False
            if type(value) is list:
                value = tuple(value)
                is_list = True

            if value in unique and unique[value]:
                unique[value] = False
                if is_list:
                    value = list(value)

                yield value
            elif value not in unique:
                # only mark values not seen before, so a third occurrence
                # does not re-arm the flag and get yielded a second time
                unique[value] = True
    
    array = [1, 2, 2, [1, 2], 3, 4, [1, 2], 5, 2, 6, 6]
    print(list(gen_dupes(array)))
    # => [2, [1, 2], 6]
    
  • 0

    We can use itertools.groupby to find all the items that have duplicates:

    from itertools import groupby
    
    myList  = [2, 4, 6, 8, 4, 6, 12]
    # when the list is sorted, groupby groups by consecutive elements which are similar
    for x, y in groupby(sorted(myList)):
        #  list(y) returns all the occurrences of item x
        if len(list(y)) > 1:
            print(x)
    

    The output will be:

    4
    6
    
  • 2

    collections.Counter is new in Python 2.7:

    Python 2.5.4 (r254:67916, May 31 2010, 15:03:39) 
    [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = [1,2,3,2,1,5,6,5,5,5]
    >>> import collections
    >>> print [x for x, y in collections.Counter(a).items() if y > 1]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'module' object has no attribute 'Counter'
    >>>
    

    In earlier versions you can use a conventional dict instead:

    a = [1,2,3,2,1,5,6,5,5,5]
    d = {}
    for elem in a:
        if elem in d:
            d[elem] += 1
        else:
            d[elem] = 1
    
    print [x for x, y in d.items() if y > 1]
    
  • 0
    >>> l = [1,2,3,4,4,5,5,6,1]
    >>> set([x for x in l if l.count(x) > 1])
    set([1, 4, 5])
    
  • 25

    You don't need the count, just whether or not the item was seen before. Adapted that answer to this problem:

    def list_duplicates(seq):
      seen = set()
      seen_add = seen.add
      # adds all elements it doesn't know yet to seen and all other to seen_twice
      seen_twice = set( x for x in seq if x in seen or seen_add(x) )
      # turn the set into a list (as requested)
      return list( seen_twice )
    
    a = [1,2,3,2,1,5,6,5,5,5]
    list_duplicates(a) # yields [1, 2, 5]
    

    Just in case speed matters, here are some timings:

    # file: test.py
    import collections
    
    def thg435(l):
        return [x for x, y in collections.Counter(l).items() if y > 1]
    
    def moooeeeep(l):
        seen = set()
        seen_add = seen.add
        # adds all elements it doesn't know yet to seen and all other to seen_twice
        seen_twice = set( x for x in l if x in seen or seen_add(x) )
        # turn the set into a list (as requested)
        return list( seen_twice )
    
    def RiteshKumar(l):
        return list(set([x for x in l if l.count(x) > 1]))
    
    def JohnLaRooy(L):
        seen = set()
        seen2 = set()
        seen_add = seen.add
        seen2_add = seen2.add
        for item in L:
            if item in seen:
                seen2_add(item)
            else:
                seen_add(item)
        return list(seen2)
    
    l = [1,2,3,2,1,5,6,5,5,5]*100
    

    Here are the results: (well done @JohnLaRooy!)

    $ python -mtimeit -s 'import test' 'test.JohnLaRooy(test.l)'
    10000 loops, best of 3: 74.6 usec per loop
    $ python -mtimeit -s 'import test' 'test.moooeeeep(test.l)'
    10000 loops, best of 3: 91.3 usec per loop
    $ python -mtimeit -s 'import test' 'test.thg435(test.l)'
    1000 loops, best of 3: 266 usec per loop
    $ python -mtimeit -s 'import test' 'test.RiteshKumar(test.l)'
    100 loops, best of 3: 8.35 msec per loop
    

    Interestingly, besides the timings themselves, the ranking also changes slightly when PyPy is used. Most interestingly, the Counter-based approach benefits hugely from PyPy's optimizations, whereas the method-caching approach I suggested seems to have almost no effect.

    $ pypy -mtimeit -s 'import test' 'test.JohnLaRooy(test.l)'
    100000 loops, best of 3: 17.8 usec per loop
    $ pypy -mtimeit -s 'import test' 'test.thg435(test.l)'
    10000 loops, best of 3: 23 usec per loop
    $ pypy -mtimeit -s 'import test' 'test.moooeeeep(test.l)'
    10000 loops, best of 3: 39.3 usec per loop
    

    Apparently this effect is related to the "duplicatedness" of the input data. I set l = [random.randrange(1000000) for i in xrange(10000)] and got these results:

    $ pypy -mtimeit -s 'import test' 'test.moooeeeep(test.l)'
    1000 loops, best of 3: 495 usec per loop
    $ pypy -mtimeit -s 'import test' 'test.JohnLaRooy(test.l)'
    1000 loops, best of 3: 499 usec per loop
    $ pypy -mtimeit -s 'import test' 'test.thg435(test.l)'
    1000 loops, best of 3: 1.68 msec per loop
    
  • 254

    Using pandas:

    >>> import pandas as pd
    >>> a = [1, 2, 1, 3, 3, 3, 0]
    >>> pd.Series(a)[pd.Series(a).duplicated()].values
    array([1, 3, 3])
    
  • 1
    def removeduplicates(a):
      seen = set()
    
      for i in a:
        if i not in seen:
          seen.add(i)
      return seen 
    
    print(removeduplicates([1,1,2,2]))
    
  • 4

    The third example of the accepted answer gives an erroneous answer and does not attempt to give duplicates. Here is the correct version:

    number_lst = [1, 1, 2, 3, 5, ...]
    
    seen_set = set()
    duplicate_set = set(x for x in number_lst if x in seen_set or seen_set.add(x))
    unique_set = seen_set - duplicate_set
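    A concrete run of the same three sets, with a hypothetical input list filled in (the original elides the list contents):

```python
number_lst = [1, 1, 2, 3, 5, 8, 5]  # hypothetical example input

seen_set = set()
# set.add() returns None, so first occurrences fall through to add();
# repeat occurrences satisfy `x in seen_set` and get collected
duplicate_set = set(x for x in number_lst if x in seen_set or seen_set.add(x))
unique_set = seen_set - duplicate_set
print(duplicate_set)  # values occurring more than once
print(unique_set)     # values occurring exactly once
```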
    
  • 3

    I would do this with pandas, because I use pandas a lot

    import pandas as pd
    a = [1,2,3,3,3,4,5,6,6,7]
    vc = pd.Series(a).value_counts()
    vc[vc > 1].index.tolist()
    

    [3,6]
    

    Probably not very efficient, but it sure is less code than a lot of the other answers, so I thought I would contribute

  • 5

    A bit late, but maybe helpful for some. For a largish list, I found this worked for me.

    l=[1,2,3,5,4,1,3,1]
    s=set(l)
    d=[]
    for x in l:
        if x in s:
            s.remove(x)
        else:
            d.append(x)
    d
    [1,3,1]
    

    Shows just and all the duplicates, and preserves order.

  • 6

    There are a lot of answers up here, but I think this is a relatively very readable and easy to understand approach:

    def get_duplicates(sorted_list):
        duplicates = []
        last = sorted_list[0]
        for x in sorted_list[1:]:
            if x == last:
                duplicates.append(x)
            last = x
        return set(duplicates)
    

    Notes:

    • If you wish to preserve the duplication count, get rid of the cast to 'set' at the bottom to get the full list

    • If you prefer to use generators, replace duplicates.append(x) with yield x, and drop the return statement at the bottom (you can cast to set later)
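    The generator variant described in the second note might look like this (a sketch, assuming the input is sorted as above):

```python
def gen_duplicates(sorted_list):
    # yields a value each time it equals its predecessor;
    # wrap in set() to collapse repeat yields
    last = sorted_list[0]
    for x in sorted_list[1:]:
        if x == last:
            yield x
        last = x

print(set(gen_duplicates([1, 2, 2, 3, 3, 3, 4])))
```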

  • 1

    Use the sort() function. Duplicates can then be identified by looping over the sorted list and checking l1[i] == l1[i+1].
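    A minimal sketch of the approach described above (sort first, then compare neighbours):

```python
l1 = [1, 2, 3, 2, 1, 5, 6, 5, 5, 5]
l1.sort()  # duplicates become adjacent after sorting
duplicates = set()
for i in range(len(l1) - 1):
    if l1[i] == l1[i + 1]:
        duplicates.add(l1[i])
print(duplicates)
```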

  • 7

    Without converting to list, probably the simplest way would be as below. This may be useful during an interview when they ask not to use sets.

    a=[1,2,3,3,3]
    dup=[]
    for each in a:
      if each not in dup:
        dup.append(each)
    print(dup)
    

    Or, to get 2 separate lists of unique values and duplicate values:

    a=[1,2,3,3,3]
    uniques=[]
    dups=[]
    
    for each in a:
      if each not in uniques:
        uniques.append(each)
      else:
        dups.append(each)
    print("Unique values are below:")
    print(uniques)
    print("Duplicate values are below:")
    print(dups)
    
  • 1

    When using toolz:

    >>> from toolz import frequencies, valfilter

    >>> a = [1,2,2,3,4,5,4]
    >>> list(valfilter(lambda count: count > 1, frequencies(a)).keys())
    [2, 4]
    
  • 5

    This is the way I had to do it because I challenged myself not to use other methods:

    def dupList(oldlist):
        if type(oldlist)==type((2,2)):
            oldlist=[x for x in oldlist]
        newList=[]
        newList=newList+oldlist
        oldlist=oldlist
        forbidden=[]
        checkPoint=0
        for i in range(len(oldlist)):
            #print 'start i', i
            if i in forbidden:
                continue
            else:
                for j in range(len(oldlist)):
                    #print 'start j', j
                    if j in forbidden:
                        continue
                    else:
                        #print 'after Else'
                        if i!=j: 
                            #print 'i,j', i,j
                            #print oldlist
                            #print newList
                            if oldlist[j]==oldlist[i]:
                                #print 'oldlist[i],oldlist[j]', oldlist[i],oldlist[j]
                                forbidden.append(j)
                                #print 'forbidden', forbidden
                                del newList[j-checkPoint]
                                #print newList
                                checkPoint=checkPoint+1
        return newList
    

    So for your sample:

    >>>a = [1,2,3,3,3,4,5,6,6,7]
    >>>dupList(a)
    [1, 2, 3, 4, 5, 6, 7]
    
  • 16

    I am entering this discussion quite late. Even so, I would like to deal with this problem with one liners. Because that's the charm of Python. If we just want to get the duplicates into a separate list (or any collection), I would suggest doing as below. Say we have a duplicated list which we can call 'target'

    target=[1,2,3,4,4,4,3,5,6,8,4,3]
    

    Now if we want to get the duplicates, we can use the one liner as below:

    duplicates=dict(set((x,target.count(x)) for x in filter(lambda rec : target.count(rec)>1,target)))
    

    This code will put the duplicated records as keys and their counts as values into the dictionary 'duplicates'. The 'duplicates' dictionary will look as below:

    {3: 3, 4: 4} # meaning 3 is repeated 3 times and 4 is repeated 4 times
    

    If you just want all the records with duplicates alone in a list, it is again a much shorter piece of code:

    duplicates=filter(lambda rec : target.count(rec)>1,target)
    

    Output will be:

    [3, 4, 4, 4, 3, 4, 3]
    

    This works perfectly in Python 2.7.x versions
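    On Python 3, filter returns a lazy iterator and dict/filter behave slightly differently, so the one-liners above need to be materialized explicitly. A sketch of the same idea (same example data as above):

```python
target = [1, 2, 3, 4, 4, 4, 3, 5, 6, 8, 4, 3]

# counts of the duplicated values, as in the dictionary version above
duplicates = {x: target.count(x) for x in target if target.count(x) > 1}
print(duplicates)      # {3: 3, 4: 4}

# all records of the duplicated values, as in the filter version above
dup_records = list(filter(lambda rec: target.count(rec) > 1, target))
print(dup_records)     # [3, 4, 4, 4, 3, 4, 3]
```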

  • 1

    To remove duplicates use set(a). To print duplicates, something like:

    a = [1,2,3,2,1,5,6,5,5,5]
    
    import collections
    print [item for item, count in collections.Counter(a).items() if count > 1]
    
    ## [1, 2, 5]
    

    Note that Counter is not particularly efficient (timings) and probably overkill here; set would perform better. This code computes a list of unique elements in source order:

    seen = set()
    uniq = []
    for x in a:
        if x not in seen:
            uniq.append(x)
            seen.add(x)
    

    Or, more concisely:

    seen = set()
    uniq = [x for x in a if x not in seen and not seen.add(x)]
    

    I don't recommend the latter style, because it is not obvious what not seen.add(x) is doing (the set add() method always returns None, hence the need for not).
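    The mechanics are easy to verify: set.add works purely by side effect and always returns None, so `not seen.add(x)` is always True and only serves to record x. A minimal check:

```python
seen = set()
result = seen.add(1)   # mutates seen, returns nothing useful
print(result)          # None
print(not result)      # True -- so the comprehension never filters on it
print(seen)            # {1}
```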

    To compute the list of duplicated elements without libraries:

    seen = {}
    dupes = []
    
    for x in a:
        if x not in seen:
            seen[x] = 1
        else:
            if seen[x] == 1:
                dupes.append(x)
            seen[x] += 1
    

    If list elements are not hashable, you cannot use sets/dicts and have to resort to a quadratic-time solution (compare each with each), for example:

    a = [ [1], [2], [3], [1], [5], [3] ]
    
    no_dupes = [x for n, x in enumerate(a) if x not in a[:n]]
    print no_dupes # [[1], [2], [3], [5]]
    
    dupes = [x for n, x in enumerate(a) if x in a[:n]]
    print dupes # [[1], [3]]
    
  • 0

    I came across this question whilst looking into something related - and wonder why no-one offered a generator based solution? Solving this problem would be:

    >>> print list(getDupes_9([1,2,3,2,1,5,6,5,5,5]))
    [1, 2, 5]
    

    I was concerned with scalability, so tested several approaches, including naive items that work well on small lists, but scale horribly as lists get larger (note - using timeit would be better, but this is illustrative).

    I included @moooeeeep for comparison (it is impressively fast: fastest if the input list is completely random) and an itertools approach that is again faster for mostly sorted lists... Now includes the pandas approach from @firelynx - slow, but not horribly so, and simple. Note - the sort/tee/zip approach is consistently fastest on my machine for large mostly ordered lists, moooeeeep is fastest for shuffled lists, but your mileage may vary.

    Advantages

    • Very fast to test for 'any' duplicates, using the same code

    Assumptions

    • Duplicates should be reported once only

    • Duplicate order does not need to be preserved

    • Duplicates may be anywhere in the list


    Fastest solution, 1m entries:

    def getDupes(c):
            '''sort/tee/izip'''
            a, b = itertools.tee(sorted(c))
            next(b, None)
            r = None
            for k, g in itertools.izip(a, b):
                if k != g: continue
                if k != r:
                    yield k
                    r = k
    

    Approaches tested

    import itertools
    import time
    import random
    
    def getDupes_1(c):
        '''naive'''
        for i in xrange(0, len(c)):
            if c[i] in c[:i]:
                yield c[i]
    
    def getDupes_2(c):
        '''set len change'''
        s = set()
        for i in c:
            l = len(s)
            s.add(i)
            if len(s) == l:
                yield i
    
    def getDupes_3(c):
        '''in dict'''
        d = {}
        for i in c:
            if i in d:
                if d[i]:
                    yield i
                    d[i] = False
            else:
                d[i] = True
    
    def getDupes_4(c):
        '''in set'''
        s,r = set(),set()
        for i in c:
            if i not in s:
                s.add(i)
            elif i not in r:
                r.add(i)
                yield i
    
    def getDupes_5(c):
        '''sort/adjacent'''
        c = sorted(c)
        r = None
        for i in xrange(1, len(c)):
            if c[i] == c[i - 1]:
                if c[i] != r:
                    yield c[i]
                    r = c[i]
    
    def getDupes_6(c):
        '''sort/groupby'''
        def multiple(x):
            try:
                x.next()
                x.next()
                return True
            except:
                return False
        for k, g in itertools.ifilter(lambda x: multiple(x[1]), itertools.groupby(sorted(c))):
            yield k
    
    def getDupes_7(c):
        '''sort/zip'''
        c = sorted(c)
        r = None
        for k, g in zip(c[:-1],c[1:]):
            if k == g:
                if k != r:
                    yield k
                    r = k
    
    def getDupes_8(c):
        '''sort/izip'''
        c = sorted(c)
        r = None
        for k, g in itertools.izip(c[:-1],c[1:]):
            if k == g:
                if k != r:
                    yield k
                    r = k
    
    def getDupes_9(c):
        '''sort/tee/izip'''
        a, b = itertools.tee(sorted(c))
        next(b, None)
        r = None
        for k, g in itertools.izip(a, b):
            if k != g: continue
            if k != r:
                yield k
                r = k
    
    def getDupes_a(l):
        '''moooeeeep'''
        seen = set()
        seen_add = seen.add
        # adds all elements it doesn't know yet to seen and all other to seen_twice
        for x in l:
            if x in seen or seen_add(x):
                yield x
    
    def getDupes_b(x):
        '''iter*/sorted'''
        x = sorted(x)
        def _matches():
            for k,g in itertools.izip(x[:-1],x[1:]):
                if k == g:
                    yield k
        for k, n in itertools.groupby(_matches()):
            yield k
    
    def getDupes_c(a):
        '''pandas'''
        import pandas as pd
        vc = pd.Series(a).value_counts()
        i = vc[vc > 1].index
        for _ in i:
            yield _
    
    def hasDupes(fn,c):
        try:
            if fn(c).next(): return True    # Found a dupe
        except StopIteration:
            pass
        return False
    
    def getDupes(fn,c):
        return list(fn(c))
    
    STABLE = True
    if STABLE:
        print 'Finding FIRST then ALL duplicates, single dupe of "nth" placed element in 1m element array'
    else:
        print 'Finding FIRST then ALL duplicates, single dupe of "n" included in randomised 1m element array'
    for location in (50,250000,500000,750000,999999):
        for test in (getDupes_2, getDupes_3, getDupes_4, getDupes_5, getDupes_6,
                     getDupes_8, getDupes_9, getDupes_a, getDupes_b, getDupes_c):
            print 'Test %-15s:%10d - '%(test.__doc__ or test.__name__,location),
            deltas = []
            for FIRST in (True,False):
                for i in xrange(0, 5):
                    c = range(0,1000000)
                    if STABLE:
                        c[0] = location
                    else:
                        c.append(location)
                        random.shuffle(c)
                    start = time.time()
                    if FIRST:
                        print '.' if location == test(c).next() else '!',
                    else:
                        print '.' if [location] == list(test(c)) else '!',
                    deltas.append(time.time()-start)
                print ' -- %0.3f  '%(sum(deltas)/len(deltas)),
            print
        print
    

    The results for the 'all dupes' tests were consistent, finding "first" duplicate then "all" duplicates in this array:

    Finding FIRST then ALL duplicates, single dupe of "nth" placed element in 1m element array
    Test set len change :    500000 -  . . . . .  -- 0.264   . . . . .  -- 0.402  
    Test in dict        :    500000 -  . . . . .  -- 0.163   . . . . .  -- 0.250  
    Test in set         :    500000 -  . . . . .  -- 0.163   . . . . .  -- 0.249  
    Test sort/adjacent  :    500000 -  . . . . .  -- 0.159   . . . . .  -- 0.229  
    Test sort/groupby   :    500000 -  . . . . .  -- 0.860   . . . . .  -- 1.286  
    Test sort/izip      :    500000 -  . . . . .  -- 0.165   . . . . .  -- 0.229  
    Test sort/tee/izip  :    500000 -  . . . . .  -- 0.145   . . . . .  -- 0.206  *
    Test moooeeeep      :    500000 -  . . . . .  -- 0.149   . . . . .  -- 0.232  
    Test iter*/sorted   :    500000 -  . . . . .  -- 0.160   . . . . .  -- 0.221  
    Test pandas         :    500000 -  . . . . .  -- 0.493   . . . . .  -- 0.499
    

    When the lists are shuffled first, the price of the sort becomes apparent - the efficiency drops noticeably and the @moooeeeep approach dominates, with the set & dict approaches being similar but lesser performers:

    Finding FIRST then ALL duplicates, single dupe of "n" included in randomised 1m element array
    Test set len change :    500000 -  . . . . .  -- 0.321   . . . . .  -- 0.473  
    Test in dict        :    500000 -  . . . . .  -- 0.285   . . . . .  -- 0.360  
    Test in set         :    500000 -  . . . . .  -- 0.309   . . . . .  -- 0.365  
    Test sort/adjacent  :    500000 -  . . . . .  -- 0.756   . . . . .  -- 0.823  
    Test sort/groupby   :    500000 -  . . . . .  -- 1.459   . . . . .  -- 1.896  
    Test sort/izip      :    500000 -  . . . . .  -- 0.786   . . . . .  -- 0.845  
    Test sort/tee/izip  :    500000 -  . . . . .  -- 0.743   . . . . .  -- 0.804  
    Test moooeeeep      :    500000 -  . . . . .  -- 0.234   . . . . .  -- 0.311  *
    Test iter*/sorted   :    500000 -  . . . . .  -- 0.776   . . . . .  -- 0.840  
    Test pandas         :    500000 -  . . . . .  -- 0.539   . . . . .  -- 0.540
    
  • -3

    One line solution:

    set([i for i in list if sum([1 for a in list if a == i]) > 1])
    
  • 1

    Here is a neat and concise solution -

    for x in set(li):
        li.remove(x)
    
    li = list(set(li))
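    A self-contained run of the snippet above, with an example input for li:

```python
li = [1, 2, 3, 2, 1, 5, 6, 5, 5, 5]  # example input

for x in set(li):
    li.remove(x)        # drop the first occurrence of each distinct value

li = list(set(li))      # whatever remains occurred more than once
print(li)
```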
    
  • 1

    You can use iteration_utilities.duplicates:

    >>> from iteration_utilities import duplicates
    
    >>> list(duplicates([1,1,2,1,2,3,4,2]))
    [1, 1, 2, 2]
    

    or if you only want one of each duplicate, this can be combined with iteration_utilities.unique_everseen:

    >>> from iteration_utilities import unique_everseen
    
    >>> list(unique_everseen(duplicates([1,1,2,1,2,3,4,2])))
    [1, 2]
    

    It can also handle unhashable elements (however at the cost of performance):

    >>> list(duplicates([[1], [2], [1], [3], [1]]))
    [[1], [1]]
    
    >>> list(unique_everseen(duplicates([[1], [2], [1], [3], [1]])))
    [[1]]
    

    That's something only a few of the other approaches here can handle.

    Benchmarks

    I did a quick benchmark containing most (but not all) of the approaches mentioned here.

    The first benchmark only includes a small range of list-lengths because some approaches have O(n**2) behavior.

    In the graphs the y-axis represents the time, so a lower value means better. It is also plotted log-log so the wide range of values can be visualized better:

    Removing the O(n**2) approaches, I did another benchmark with longer lists (up to half a million elements):

    As you can see, the iteration_utilities.duplicates approach is faster than any of the other approaches, and even chaining unique_everseen(duplicates(...)) was faster than or as fast as the other approaches.

    One additional interesting thing to note here is that the pandas approaches are very slow for small lists but can easily compete for longer lists.

    However, as these benchmarks show, most of the approaches perform roughly equally, so it doesn't matter much which one is used (except for the 3 that have O(n**2) runtime).

    from iteration_utilities import duplicates, unique_everseen
    from collections import Counter
    import pandas as pd
    import itertools
    
    def georg_counter(it):
        return [item for item, count in Counter(it).items() if count > 1]
    
    def georg_set(it):
        seen = set()
        uniq = []
        for x in it:
            if x not in seen:
                uniq.append(x)
                seen.add(x)
    
    def georg_set2(it):
        seen = set()
        return [x for x in it if x not in seen and not seen.add(x)]   
    
    def georg_set3(it):
        seen = {}
        dupes = []
    
        for x in it:
            if x not in seen:
                seen[x] = 1
            else:
                if seen[x] == 1:
                    dupes.append(x)
                seen[x] += 1
    
    def RiteshKumar_count(l):
        return set([x for x in l if l.count(x) > 1])
    
    def moooeeeep(seq):
        seen = set()
        seen_add = seen.add
        # adds all elements it doesn't know yet to seen and all other to seen_twice
        seen_twice = set( x for x in seq if x in seen or seen_add(x) )
        # turn the set into a list (as requested)
        return list( seen_twice )
    
    def F1Rumors_implementation(c):
        a, b = itertools.tee(sorted(c))
        next(b, None)
        r = None
        for k, g in zip(a, b):
            if k != g: continue
            if k != r:
                yield k
                r = k
    
    def F1Rumors(c):
        return list(F1Rumors_implementation(c))
    
    def Edward(a):
        d = {}
        for elem in a:
            if elem in d:
                d[elem] += 1
            else:
                d[elem] = 1
        return [x for x, y in d.items() if y > 1]
    
    def wordsmith(a):
        return pd.Series(a)[pd.Series(a).duplicated()].values
    
    def NikhilPrabhu(li):
        li = li.copy()
        for x in set(li):
            li.remove(x)
    
        return list(set(li))
    
    def firelynx(a):
        vc = pd.Series(a).value_counts()
        return vc[vc > 1].index.tolist()
    
    def HenryDev(myList):
        newList = set()
    
        for i in myList:
            if myList.count(i) >= 2:
                newList.add(i)
    
        return list(newList)
    
    def yota(number_lst):
        seen_set = set()
        duplicate_set = set(x for x in number_lst if x in seen_set or seen_set.add(x))
        return seen_set - duplicate_set
    
    def IgorVishnevskiy(l):
        s=set(l)
        d=[]
        for x in l:
            if x in s:
                s.remove(x)
            else:
                d.append(x)
        return d
    
    def it_duplicates(l):
        return list(duplicates(l))
    
    def it_unique_duplicates(l):
        return list(unique_everseen(duplicates(l)))
    

    Benchmark 1

    from simple_benchmark import benchmark
    import random
    
    funcs = [
        georg_counter, georg_set, georg_set2, georg_set3, RiteshKumar_count, moooeeeep, 
        F1Rumors, Edward, wordsmith, NikhilPrabhu, firelynx,
        HenryDev, yota, IgorVishnevskiy, it_duplicates, it_unique_duplicates
    ]
    
    args = {2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(2, 12)}
    
    b = benchmark(funcs, args, 'list size')
    
    b.plot()
    

    Benchmark 2

    funcs = [
        georg_counter, georg_set, georg_set2, georg_set3, moooeeeep, 
        F1Rumors, Edward, wordsmith, firelynx,
        yota, IgorVishnevskiy, it_duplicates, it_unique_duplicates
    ]
    
    args = {2**i: [random.randint(0, 2**(i-1)) for _ in range(2**i)] for i in range(2, 20)}
    
    b = benchmark(funcs, args, 'list size')
    b.plot()
    

    Disclaimer

    1 This is from a third-party library I have written: iteration_utilities.

  • 3
    list2 = [1, 2, 3, 4, 1, 2, 3]
    lset = set()
    [(lset.add(item), list2.append(item))
     for item in list2 if item not in lset]
    print list(lset)
    
  • 356

    Some more tests. Of course doing...

    set([x for x in l if l.count(x) > 1])
    

    ...is too expensive. It's about 500 times faster (a longer array gives better results) to use the next, final method:

    def dups_count_dict(l):
        d = {}
    
        for item in l:
            if item not in d:
                d[item] = 0
    
            d[item] += 1
    
        result_d = {key: val for key, val in d.iteritems() if val > 1}
    
        return result_d.keys()
    

    Only 2 loops, and no very costly l.count() operations.

    Here is the code to compare the methods. The code is below, and this is the output:

    dups_count: 13.368s # this is a function which uses l.count()
    dups_count_dict: 0.014s # this is a final best function (of the 3 functions)
    dups_count_counter: 0.024s # collections.Counter
    

    测试代码:

    import numpy as np
    from time import time
    from collections import Counter
    
    class TimerCounter(object):
        def __init__(self):
            self._time_sum = 0
    
        def start(self):
            self.time = time()
    
        def stop(self):
            self._time_sum += time() - self.time
    
        def get_time_sum(self):
            return self._time_sum
    
    
    def dups_count(l):
        return set([x for x in l if l.count(x) > 1])
    
    
    def dups_count_dict(l):
        d = {}
    
        for item in l:
            if item not in d:
                d[item] = 0
    
            d[item] += 1
    
        result_d = {key: val for key, val in d.iteritems() if val > 1}
    
        return result_d.keys()
    
    
    def dups_counter(l):
        counter = Counter(l)    
    
        result_d = {key: val for key, val in counter.iteritems() if val > 1}
    
        return result_d.keys()
    
    
    
    def gen_array():
        np.random.seed(17)
        return list(np.random.randint(0, 5000, 10000))
    
    
    def assert_equal_results(*results):
        primary_result = results[0]
        other_results = results[1:]
    
        for other_result in other_results:
            assert set(primary_result) == set(other_result) and len(primary_result) == len(other_result)
    
    
    if __name__ == '__main__':
        dups_count_time = TimerCounter()
        dups_count_dict_time = TimerCounter()
        dups_count_counter = TimerCounter()
    
        l = gen_array()
    
        for i in range(3):
            dups_count_time.start()
            result1 = dups_count(l)
            dups_count_time.stop()
    
            dups_count_dict_time.start()
            result2 = dups_count_dict(l)
            dups_count_dict_time.stop()
    
            dups_count_counter.start()
            result3 = dups_counter(l)
            dups_count_counter.stop()
    
            assert_equal_results(result1, result2, result3)
    
        print 'dups_count: %.3f' % dups_count_time.get_time_sum()
        print 'dups_count_dict: %.3f' % dups_count_dict_time.get_time_sum()
        print 'dups_count_counter: %.3f' % dups_count_counter.get_time_sum()
    
  • 11

    How about simply looping through each element in the list, checking its number of occurrences, and adding it to a set which will then print the duplicates. Hope this helps someone out there.

    myList  = [2 ,4 , 6, 8, 4, 6, 12];
    newList = set()
    
    for i in myList:
        if myList.count(i) >= 2:
            newList.add(i)
    
    print(list(newList))
    ## [4 , 6]
    
  • 8

    A very simple and quick way of finding dupes with one iteration in Python is:

    testList = ['red', 'blue', 'red', 'green', 'blue', 'blue']
    
    testListDict = {}
    
    for item in testList:
      try:
        testListDict[item] += 1
      except KeyError:
        testListDict[item] = 1
    
    print testListDict
    

    Output is as below:

    >>> print testListDict
    {'blue': 3, 'green': 1, 'red': 2}
    

    More about this and more on my blog at http://www.howtoprogramwithpython.com
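    The same one-pass count can be written without try/except by using dict.get with a default (a small sketch; collections.defaultdict would work just as well):

```python
testList = ['red', 'blue', 'red', 'green', 'blue', 'blue']

counts = {}
for item in testList:
    # get(item, 0) returns 0 the first time an item is seen
    counts[item] = counts.get(item, 0) + 1

dupes = [item for item, n in counts.items() if n > 1]
print(counts)   # {'red': 2, 'blue': 3, 'green': 1}
print(dupes)    # ['red', 'blue']
```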
