qid: int64 (46k – 74.7M)
question: string (length 54 – 37.8k)
date: string (length 10)
metadata: list (length 3)
response_j: string (length 17 – 26k)
response_k: string (length 26 – 26k)
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool that automatically downloads the HTML of given immobilienscout24.de pages. I have tried to use BeautifulSoup for this; however, the parsed HTML doesn't show the content but asks if I am a robot etc., meaning my web scraper got detected and blocked (I can access the site in Firefox just fine). I have set a referer, a delay, and a user agent. What else can I do to avoid being detected (e.g. rotating proxies, rotating user agents, random clicks, other web-scraping tools that don't get detected...)? I have tried to use my phone's IP but got the same result. A GUI web-scraping tool is not an option, as I need to control it with Python. Please give some implementable code if possible. Here is my code so far: ``` import urllib.request from bs4 import BeautifulSoup import requests import time import numpy url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" req = urllib.request.Request(url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36' }) req.add_header('Referer', 'https://www.google.de/search?q=immoscout24') delays = [3, 2, 4, 6, 7, 10, 11, 17] time.sleep(numpy.random.choice(delays)) # I want to implement delays like this page = urllib.request.urlopen(req) soup = BeautifulSoup(page, 'html.parser') print(soup.prettify()) ``` ``` username:~/Desktop$ uname -a Linux username 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ``` Thank you!
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
I'm the developer of Fredy (<https://github.com/orangecoding/fredy>). I came across the same issue. After digging into it, I found out how they check whether you're a robot. First they set a localStorage value: ``` localstorageAvailable: true ``` If localStorage is available, they set a second value: ``` testLocalStorage: 1 ``` If both worked, a cookie called `reese84=xxx` is set. This is what you want. If you send this cookie with your request, it should work. I've tested it a few times. Note: this is not yet implemented in Fredy, so immoscout still doesn't work on the live source, as I'm currently rewriting the code.
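A minimal sketch of attaching such a cookie to a request (the `reese84` value here is a placeholder; a real token would have to be copied from a browser session, e.g. via the dev tools, and it eventually expires):

```python
import urllib.request

# Hypothetical token: replace with a real reese84 value copied from your browser
REESE84 = "xxx"

req = urllib.request.Request(
    "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#",
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                      "AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/86.0.4240.75 Safari/537.36",
        "Cookie": f"reese84={REESE84}",
    },
)
# The request now carries the cookie header; it would be sent
# with urllib.request.urlopen(req)
print(req.get_header("Cookie"))  # -> reese84=xxx
```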
Maybe have a go with [requests](https://requests.readthedocs.io/en/master/); the code below seems to work fine for me: ``` import requests from bs4 import BeautifulSoup r = requests.get('https://www.immobilienscout24.de/') soup = BeautifulSoup(r.text, 'html.parser') print(soup.prettify()) ``` Another approach is to use [selenium](https://selenium-python.readthedocs.io/); it's powerful but maybe a bit more complicated. Edit: A possible solution using Selenium (it seems to work for me for the link you provided in the comment): ``` from selenium import webdriver from bs4 import BeautifulSoup driver = webdriver.Chrome('path/to/chromedriver')  # it can also work with Firefox, Safari, etc. driver.get('some_url') soup = BeautifulSoup(driver.page_source, 'html.parser') ``` If you haven't used selenium before, have a look [here](https://selenium-python.readthedocs.io/installation.html) first on how to get started.
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool that automatically downloads the HTML of given immobilienscout24.de pages. I have tried to use BeautifulSoup for this; however, the parsed HTML doesn't show the content but asks if I am a robot etc., meaning my web scraper got detected and blocked (I can access the site in Firefox just fine). I have set a referer, a delay, and a user agent. What else can I do to avoid being detected (e.g. rotating proxies, rotating user agents, random clicks, other web-scraping tools that don't get detected...)? I have tried to use my phone's IP but got the same result. A GUI web-scraping tool is not an option, as I need to control it with Python. Please give some implementable code if possible. Here is my code so far: ``` import urllib.request from bs4 import BeautifulSoup import requests import time import numpy url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" req = urllib.request.Request(url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36' }) req.add_header('Referer', 'https://www.google.de/search?q=immoscout24') delays = [3, 2, 4, 6, 7, 10, 11, 17] time.sleep(numpy.random.choice(delays)) # I want to implement delays like this page = urllib.request.urlopen(req) soup = BeautifulSoup(page, 'html.parser') print(soup.prettify()) ``` ``` username:~/Desktop$ uname -a Linux username 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ``` Thank you!
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
Coming back to this question after a while... For your information, I brought back support for Immoscout in Fredy. Have a look here: <https://github.com/orangecoding/fredy#immoscout>
Maybe have a go with [requests](https://requests.readthedocs.io/en/master/); the code below seems to work fine for me: ``` import requests from bs4 import BeautifulSoup r = requests.get('https://www.immobilienscout24.de/') soup = BeautifulSoup(r.text, 'html.parser') print(soup.prettify()) ``` Another approach is to use [selenium](https://selenium-python.readthedocs.io/); it's powerful but maybe a bit more complicated. Edit: A possible solution using Selenium (it seems to work for me for the link you provided in the comment): ``` from selenium import webdriver from bs4 import BeautifulSoup driver = webdriver.Chrome('path/to/chromedriver')  # it can also work with Firefox, Safari, etc. driver.get('some_url') soup = BeautifulSoup(driver.page_source, 'html.parser') ``` If you haven't used selenium before, have a look [here](https://selenium-python.readthedocs.io/installation.html) first on how to get started.
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool that automatically downloads the HTML of given immobilienscout24.de pages. I have tried to use BeautifulSoup for this; however, the parsed HTML doesn't show the content but asks if I am a robot etc., meaning my web scraper got detected and blocked (I can access the site in Firefox just fine). I have set a referer, a delay, and a user agent. What else can I do to avoid being detected (e.g. rotating proxies, rotating user agents, random clicks, other web-scraping tools that don't get detected...)? I have tried to use my phone's IP but got the same result. A GUI web-scraping tool is not an option, as I need to control it with Python. Please give some implementable code if possible. Here is my code so far: ``` import urllib.request from bs4 import BeautifulSoup import requests import time import numpy url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" req = urllib.request.Request(url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36' }) req.add_header('Referer', 'https://www.google.de/search?q=immoscout24') delays = [3, 2, 4, 6, 7, 10, 11, 17] time.sleep(numpy.random.choice(delays)) # I want to implement delays like this page = urllib.request.urlopen(req) soup = BeautifulSoup(page, 'html.parser') print(soup.prettify()) ``` ``` username:~/Desktop$ uname -a Linux username 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ``` Thank you!
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
I'm the developer of Fredy (<https://github.com/orangecoding/fredy>). I came across the same issue. After digging into it, I found out how they check whether you're a robot. First they set a localStorage value: ``` localstorageAvailable: true ``` If localStorage is available, they set a second value: ``` testLocalStorage: 1 ``` If both worked, a cookie called `reese84=xxx` is set. This is what you want. If you send this cookie with your request, it should work. I've tested it a few times. Note: this is not yet implemented in Fredy, so immoscout still doesn't work on the live source, as I'm currently rewriting the code.
Try to set the `Accept-Language` HTTP header (this worked for me to get the correct response from the server): ``` import requests from bs4 import BeautifulSoup url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" headers = { 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:82.0) Gecko/20100101 Firefox/82.0', 'Accept-Language': 'en-US,en;q=0.5' } soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser') for h5 in soup.select('h5'): print(h5.get_text(strip=True, separator=' ')) ``` Prints: ``` NEU Albertstadt: Praktisch geschnitten und großer Balkon NEU Sehr geräumige 3-Raum-Wohnung in der Eisenacher Weststadt NEU Gepflegte 3-Zimmer-Wohnung am Rosenberg in Hofheim a.Taunus NEU ERSTBEZUG: Wohnung neu renoviert NEU Freundliche 3,5-Zimmer-Wohnung mit Balkon und EBK in Rheinfelden NEU Für Singles und Studenten! 2 ZKB mit EBK und Balkon NEU Schöne 3-Zimmer-Wohnung mit 2 Balkonen im Parkend NEU Öffentlich geförderte 3-Zimmer-Neubau-Wohnung für die kleine Familie in Iserbrook! NEU Komfortable, neuwertige Erdgeschosswohnung in gefragter Lage am Wall NEU Möbliertes, freundliches Appartem. TOP LAGE, S-Balkon, EBK, ruhig, Schwabing Nord/, Milbertshofen NEU Extravagant & frisch saniert! 2,5-Zimmer DG-Wohnung in Duisburg-Neumühl NEU wunderschöne 3 Zimmer Dachgeschosswohnung mit Einbauküche. 2er WG-tauglich. NEU Erstbezug nach Sanierung: Helle 3-Zimmer-Wohnung mit Balkon in Monheim am Rhein NEU Morgen schon im neuen Zuhause mit der ganzen Familie! 3,5 Raum zur Miete in DUI-Overbruch NEU Erstbezug: ansprechende 2-Zimmer-EG-Wohnung in Bad Düben NEU CALENBERGER NEUSTADT | 3-Zimmer-Wohnung mit großem Süd-Balkon NEU Wohnen und Arbeiten in Bestlage von HH-Lokstedt ! NEU Erstbezug: Wohlfühlwohnen in modernem Dachgeschoss nach kompletter Sanierung! NEU CASACONCEPT Stilaltbau-Wohnung München-Bogenhausen nahe Prinzregentenplatz NEU schöne Wohnung mit Balkon und Laminatboden ```
64,647,954
I want to web-scrape the German real estate website immobilienscout24.de. I would like to download the HTML of a given URL and then work with the HTML offline. It is not intended for commercial use or publication, and I do not intend to spam the site; it is merely for coding practice. I would like to write a Python tool that automatically downloads the HTML of given immobilienscout24.de pages. I have tried to use BeautifulSoup for this; however, the parsed HTML doesn't show the content but asks if I am a robot etc., meaning my web scraper got detected and blocked (I can access the site in Firefox just fine). I have set a referer, a delay, and a user agent. What else can I do to avoid being detected (e.g. rotating proxies, rotating user agents, random clicks, other web-scraping tools that don't get detected...)? I have tried to use my phone's IP but got the same result. A GUI web-scraping tool is not an option, as I need to control it with Python. Please give some implementable code if possible. Here is my code so far: ``` import urllib.request from bs4 import BeautifulSoup import requests import time import numpy url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" req = urllib.request.Request(url, data=None, headers={ 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36' }) req.add_header('Referer', 'https://www.google.de/search?q=immoscout24') delays = [3, 2, 4, 6, 7, 10, 11, 17] time.sleep(numpy.random.choice(delays)) # I want to implement delays like this page = urllib.request.urlopen(req) soup = BeautifulSoup(page, 'html.parser') print(soup.prettify()) ``` ``` username:~/Desktop$ uname -a Linux username 5.4.0-52-generic #57-Ubuntu SMP Thu Oct 15 10:57:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ``` Thank you!
2020/11/02
[ "https://Stackoverflow.com/questions/64647954", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14361382/" ]
Coming back to this question after a while... For your information, I brought back support for Immoscout in Fredy. Have a look here: <https://github.com/orangecoding/fredy#immoscout>
Try to set the `Accept-Language` HTTP header (this worked for me to get the correct response from the server): ``` import requests from bs4 import BeautifulSoup url = "https://www.immobilienscout24.de/Suche/de/wohnung-mieten?sorting=2#" headers = { 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:82.0) Gecko/20100101 Firefox/82.0', 'Accept-Language': 'en-US,en;q=0.5' } soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser') for h5 in soup.select('h5'): print(h5.get_text(strip=True, separator=' ')) ``` Prints: ``` NEU Albertstadt: Praktisch geschnitten und großer Balkon NEU Sehr geräumige 3-Raum-Wohnung in der Eisenacher Weststadt NEU Gepflegte 3-Zimmer-Wohnung am Rosenberg in Hofheim a.Taunus NEU ERSTBEZUG: Wohnung neu renoviert NEU Freundliche 3,5-Zimmer-Wohnung mit Balkon und EBK in Rheinfelden NEU Für Singles und Studenten! 2 ZKB mit EBK und Balkon NEU Schöne 3-Zimmer-Wohnung mit 2 Balkonen im Parkend NEU Öffentlich geförderte 3-Zimmer-Neubau-Wohnung für die kleine Familie in Iserbrook! NEU Komfortable, neuwertige Erdgeschosswohnung in gefragter Lage am Wall NEU Möbliertes, freundliches Appartem. TOP LAGE, S-Balkon, EBK, ruhig, Schwabing Nord/, Milbertshofen NEU Extravagant & frisch saniert! 2,5-Zimmer DG-Wohnung in Duisburg-Neumühl NEU wunderschöne 3 Zimmer Dachgeschosswohnung mit Einbauküche. 2er WG-tauglich. NEU Erstbezug nach Sanierung: Helle 3-Zimmer-Wohnung mit Balkon in Monheim am Rhein NEU Morgen schon im neuen Zuhause mit der ganzen Familie! 3,5 Raum zur Miete in DUI-Overbruch NEU Erstbezug: ansprechende 2-Zimmer-EG-Wohnung in Bad Düben NEU CALENBERGER NEUSTADT | 3-Zimmer-Wohnung mit großem Süd-Balkon NEU Wohnen und Arbeiten in Bestlage von HH-Lokstedt ! NEU Erstbezug: Wohlfühlwohnen in modernem Dachgeschoss nach kompletter Sanierung! NEU CASACONCEPT Stilaltbau-Wohnung München-Bogenhausen nahe Prinzregentenplatz NEU schöne Wohnung mit Balkon und Laminatboden ```
15,011,674
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example: ``` dereference(id(a)) == a ``` I want to know from an academic standpoint; I understand that there are more practical methods.
2013/02/21
[ "https://Stackoverflow.com/questions/15011674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248775/" ]
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3. **Disclaimer:** If you read the linked discussion, you'll find that some folks think this is so unsafe that it should *never* be used (as likewise mentioned in some of the comments below). I don't agree with that assessment but feel I should at least mention that there's some debate about using it. ``` import _ctypes def di(obj_id): """ Inverse of id() function. """ return _ctypes.PyObj_FromPtr(obj_id) if __name__ == '__main__': a = 42 b = 'answer' print(di(id(a))) # -> 42 print(di(id(b))) # -> answer ```
Not easily. You could iterate through the [`gc.get_objects()`](http://docs.python.org/2/library/gc.html#gc.get_objects) list, testing whether each and every object has the same `id()`, but that's not very practical. The `id()` function is *not intended* to be dereferenceable; the fact that it is based on the memory address is a CPython implementation detail that other Python implementations do not follow.
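A sketch of the (impractical) scan described above, assuming CPython:

```python
import gc

def deref(target_id):
    """Linear scan over gc-tracked objects; CPython-specific and O(n)."""
    for obj in gc.get_objects():
        if id(obj) == target_id:
            return obj
    raise LookupError(f"no tracked object with id {target_id}")

data = ["spam", "eggs"]
print(deref(id(data)) is data)  # -> True
```

Note that only objects tracked by the cycle collector (mostly containers) show up in `gc.get_objects()`, so e.g. a plain int won't be found this way.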
15,011,674
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example: ``` dereference(id(a)) == a ``` I want to know from an academic standpoint; I understand that there are more practical methods.
2013/02/21
[ "https://Stackoverflow.com/questions/15011674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248775/" ]
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3. **Disclaimer:** If you read the linked discussion, you'll find that some folks think this is so unsafe that it should *never* be used (as likewise mentioned in some of the comments below). I don't agree with that assessment but feel I should at least mention that there's some debate about using it. ``` import _ctypes def di(obj_id): """ Inverse of id() function. """ return _ctypes.PyObj_FromPtr(obj_id) if __name__ == '__main__': a = 42 b = 'answer' print(di(id(a))) # -> 42 print(di(id(b))) # -> answer ```
There are several ways, and it's not difficult to do. In O(n): ``` import gc def deref(id_): f = {id(x): x for x in gc.get_objects()} return f[id_] foo = [1, 2, 3] bar = id(foo) deref(bar) # -> [1, 2, 3] ``` A faster way on average, from the comments (thanks @Martijn Pieters): ``` def deref_fast(id_): return next(ob for ob in gc.get_objects() if id(ob) == id_) ``` The fastest solution is in the answer from @martineau, but it does require exposing Python internals. The solutions above use standard Python constructs.
15,011,674
Can you dereference a variable id retrieved from the [`id`](https://docs.python.org/library/functions.html#id) function in Python? For example: ``` dereference(id(a)) == a ``` I want to know from an academic standpoint; I understand that there are more practical methods.
2013/02/21
[ "https://Stackoverflow.com/questions/15011674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1248775/" ]
Here's a utility function based on a (now-deleted) comment made by "Tiran" in a weblog [discussion](https://web.archive.org/web/20090404012257/http://www.friday.com/bbum/2007/08/24/python-di/) @Hophat Abc references in [his own answer](https://stackoverflow.com/a/15012076/355230) that will work in both Python 2 and 3. **Disclaimer:** If you read the linked discussion, you'll find that some folks think this is so unsafe that it should *never* be used (as likewise mentioned in some of the comments below). I don't agree with that assessment but feel I should at least mention that there's some debate about using it. ``` import _ctypes def di(obj_id): """ Inverse of id() function. """ return _ctypes.PyObj_FromPtr(obj_id) if __name__ == '__main__': a = 42 b = 'answer' print(di(id(a))) # -> 42 print(di(id(b))) # -> answer ```
**Note:** Updated to Python 3. Here's yet another answer, adapted from yet another [comment](https://web.archive.org/web/20070905071909/http://www.friday.com/bbum/2007/08/24/python-di/#comment-145316), this one by "Peter Fein", in the discussion @Hophat Abc referenced in his own answer. It's not a general answer, but it might still be useful in cases where you know something about the class of the objects whose ids you want to look up, as opposed to them being the ids of *anything*. I felt this might be a worthwhile technique even with that limitation (and sans the safety issues my [other answer](https://stackoverflow.com/a/15012814/355230) has). The basic idea is to make a class which keeps track of instances and subclasses of itself. ``` #!/usr/bin/env python3 import weakref class MetaInstanceTracker(type): """ Metaclass for InstanceTracker. """ def __new__(cls, name, bases, dic): cls = super().__new__(cls, name, bases, dic) cls.__instances__ = weakref.WeakValueDictionary() return cls class InstanceTracker(object, metaclass=MetaInstanceTracker): """ Base class that tracks instances of its subclasses using weak references. """ def __init__(self, *args, **kwargs): self.__instances__[id(self)] = self super().__init__(*args, **kwargs) @classmethod def find_instance(cls, obj_id): return cls.__instances__.get(obj_id, None) if __name__ == '__main__': class MyClass(InstanceTracker): def __init__(self, name): super(MyClass, self).__init__() self.name = name def __repr__(self): return '{}({!r})'.format(self.__class__.__name__, self.name) obj1 = MyClass('Bob') obj2 = MyClass('Sue') print(MyClass.find_instance(id(obj1))) # -> MyClass('Bob') print(MyClass.find_instance(id(obj2))) # -> MyClass('Sue') print(MyClass.find_instance(42)) # -> None ```
18,394,350
Today, I wrote a short script for a prime sieve, and I am looking to improve on it. I am rather new to Python and programming in general, and so I am wondering: what is a good way to reduce memory usage in a program where large lists of numbers are involved? Here is my example script: ``` def ES(n): A = list(range(2, n+1)) for i in range(2, n+1): for k in range(2, (n+i)//i): A[i*k-2] = str(i*k) A = [x for x in A if isinstance(x, int)] return A ``` This script turns all composites in the list `A` into strings and then returns the list of remaining integers, which are all prime. Yet it runs `A[i*k-2] = str(i*k)` three times for the number 12, as it goes through all multiples of 2, then 3, and again for 6. With something like that happening while storing such a large list, I hit a brick wall rather soon and it crashes. Any advice would be greatly appreciated! Thanks in advance. EDIT: I don't know if this makes a difference, but I'm using Python 3.3
2013/08/23
[ "https://Stackoverflow.com/questions/18394350", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2467476/" ]
First, you're using a really weird, inefficient way to record whether something is composite. You don't need to store string representations of the numbers, or even the numbers themselves. You can just use a big list of booleans, where `prime[n]` is true if `n` is prime. Second, there's no reason to worry about wasting a bit of space at the beginning of the list if it simplifies indexing. It's tiny compared to the space taken by the rest of the list, let alone all the strings and ints and stuff you were using. It's like saving $3 on paint for your $300,000 car. Third, `range` takes a `step` parameter you can use to simplify your loops. ``` def sieve(n): """Returns a list of primes less than n.""" # n-long list, all entries initially True # prime[n] is True if we haven't found a factor of n yet. prime = [True]*n # 0 and 1 aren't prime prime[0], prime[1] = False, False for i in range(2, n): if prime[i]: # Loop from i*2 to n, in increments of i. # In other words, go through the multiples of i. for j in range(i*2, n, i): prime[j] = False return [x for x, x_is_prime in enumerate(prime) if x_is_prime] ```
Rather than using large lists, you can use generators. ``` def ESgen(): d = {} q = 2 while 1: if q not in d: yield q d[q*q] = [q] else: for p in d[q]: d.setdefault(p+q,[]).append(p) del d[q] q += 1 ``` To get the list of first `n` primes using the generator, you can do this: ``` from itertools import islice gen = ESgen() list(islice(gen, n)) ``` If you only want to check if a number is prime, `set` is faster than `list`: ``` from itertools import islice gen = ESgen() set(islice(gen, n)) ```
46,909,017
I want to insert a row in a table of a database from a Python script. The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them are working. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)', (device_var, number1_var, number2_var)) ``` I also tried: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})', (device_var, number1_var, number2_var)) ``` And: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})'.format(device_var, number1_var, number2_var)) ``` ERROR: OperationalError: (1054, "Unknown column 'device_var_content' in 'field list'") I also tried this to see if there is a problem in the table, but this works OK: `cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("dev1",1,2.4)')` Thanks for your time. **SOLVED:** ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("{}",{},{})'.format(string_var, number1_var, number2_var)) ``` Thanks for your help; your answers showed me where to keep looking.
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
You can use parameter binding, so you don't need to worry about the data types of the variables being passed: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (?, ?, ?)', ("dev1", 1, 2.4)) ```
Check whether you're calling `commit()` after executing the query; alternatively, you can enable autocommit on the connection. ``` connection_obj.commit() ```
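The commit step above can be illustrated with the stdlib `sqlite3` driver, which follows the same DB-API shape (the MySQL drivers behave analogously for transactional tables; the table name here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings(device TEXT, number1 INTEGER, number2 REAL)")
# Placeholder-style insert: the driver handles type conversion and quoting
cur.execute(
    "INSERT INTO readings(device, number1, number2) VALUES (?, ?, ?)",
    ("dev1", 1, 2.4),
)
conn.commit()  # without this, a transactional backend may discard the insert
rows = cur.execute("SELECT * FROM readings").fetchall()
print(rows)  # -> [('dev1', 1, 2.4)]
```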
46,909,017
I want to insert a row in a table of a database from a Python script. The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them are working. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)', (device_var, number1_var, number2_var)) ``` I also tried: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})', (device_var, number1_var, number2_var)) ``` And: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})'.format(device_var, number1_var, number2_var)) ``` ERROR: OperationalError: (1054, "Unknown column 'device_var_content' in 'field list'") I also tried this to see if there is a problem in the table, but this works OK: `cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("dev1",1,2.4)')` Thanks for your time. **SOLVED:** ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("{}",{},{})'.format(string_var, number1_var, number2_var)) ``` Thanks for your help; your answers showed me where to keep looking.
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
You can use parameter binding, so you don't need to worry about the data types of the variables being passed: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (?, ?, ?)', ("dev1", 1, 2.4)) ```
Firstly, ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)', (device_var, number1_var, number2_var)) ``` seems wrong. Similarly, the second version is wrong too. The third, with **`.format`**, is correct. Then why did the query not run? **Because of the reason below:** Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP 249). If you are using InnoDB tables or some other transactional table type, you'll need to call `connection.commit()` before closing the connection, or else none of your changes will be written to the database. Conversely, you can also use `connection.rollback()` to throw away any changes you've made since the last commit. Commit before closing your connection: ``` db = mysql.connect(user="root", passwd="", db="some_db", unix_socket="/opt/lampp/var/mysql/mysql.sock") cursor = db.cursor() cursor.execute("insert into my table ..") db.commit() ```
46,909,017
I want to insert a row in a table of a database from a Python script. The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them are working. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)', (device_var, number1_var, number2_var)) ``` I also tried: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})', (device_var, number1_var, number2_var)) ``` And: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})'.format(device_var, number1_var, number2_var)) ``` ERROR: OperationalError: (1054, "Unknown column 'device_var_content' in 'field list'") I also tried this to see if there is a problem in the table, but this works OK: `cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("dev1",1,2.4)')` Thanks for your time. **SOLVED:** ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("{}",{},{})'.format(string_var, number1_var, number2_var)) ``` Thanks for your help; your answers showed me where to keep looking.
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
You can use parameter binding, so you don't need to worry about the data types of the variables being passed: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (?, ?, ?)', ("dev1", 1, 2.4)) ```
> > The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them are working. > > > Have you read the DB-API specs? The placeholders in the query are not Python formatting instructions but just plain placeholders; the db connector will take care of type conversion, escaping, and even a bit of sanitization. FWIW, it's a pity the MySQLdb authors chose the Python string format placeholder (`%s`), as it causes a lot of confusion... This would be clearer if they had chosen something like `?` instead. Anyway, the "right way" to write your query is: ``` cursor.execute( 'INSERT INTO table(device, number1, number2) VALUES (%s,%s,%s)', (device_var, number1_var, number2_var) ) ``` And of course do not forget to commit your transaction at some point...
46,909,017
I want to insert a row in a table of a database from a Python script. The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them are working. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)', (device_var, number1_var, number2_var)) ``` I also tried: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})', (device_var, number1_var, number2_var)) ``` And: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})'.format(device_var, number1_var, number2_var)) ``` ERROR: OperationalError: (1054, "Unknown column 'device_var_content' in 'field list'") I also tried this to see if there is a problem in the table, but this works OK: `cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("dev1",1,2.4)')` Thanks for your time. **SOLVED:** ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("{}",{},{})'.format(string_var, number1_var, number2_var)) ``` Thanks for your help; your answers showed me where to keep looking.
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
Check whether you're calling `commit()` after executing the query; alternatively, you can enable autocommit on the connection. ``` connection_obj.commit() ```
Firstly, ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var)) ``` seems wrong, and the second version is wrong too. The third, with **.format**, is syntactically correct. Then why did the query not run? **Because of the reason below**: Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database. Conversely, you can also use connection.rollback() to throw away any changes you've made since the last commit. Commit before closing your connection: 1. `db = mysql.connect(user="root", passwd="", db="some_db", unix_socket="/opt/lampp/var/mysql/mysql.sock")` 2. `cursor = db.cursor()` 3. `cursor.execute("insert into my table ..")` 4. `db.commit()`
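The commit requirement is easy to demonstrate without MySQL at all. The sketch below uses the stdlib `sqlite3` module and a throwaway file to show that a second connection cannot see an insert until the writer commits; the assumption is that sqlite's default deferred transactions stand in for MySQLdb's disabled autocommit here:

```python
import os
import sqlite3
import tempfile

# A shared on-disk database so two independent connections can look at it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

# The INSERT opens a transaction that is NOT committed yet.
writer.execute("INSERT INTO t VALUES (42)")

reader = sqlite3.connect(path)
# fetchall() exhausts the statement so the reader releases its read lock.
before = reader.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]

writer.commit()  # now the change is actually written to the database
after = reader.execute("SELECT COUNT(*) FROM t").fetchall()[0][0]

print(before, after)  # 0 1
```

Without the final `commit()`, the row would simply vanish when `writer` closed, which is exactly the symptom described above.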
46,909,017
I want to insert a row into a database table from a Python script. The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them works. I tried these options: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var)) ``` I also tried: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})',(device_var,number1_var, number2_var)) ``` And ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ({0},{1},{2})'.format (device_var,number1_var, number2_var)) ``` ERROR: OperationalError: (1054, "Unknown column 'device_var_content' in 'field list'") I also tried this to see if there is a problem in the table, but this works OK: ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("dev1",1,2.4)') ``` Thanks for your time **SOLVED:** ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES ("{}",{},{})'.format (string_var,number1_var, number2_var)) ``` Thanks for your help, your answers showed me where to keep looking.
2017/10/24
[ "https://Stackoverflow.com/questions/46909017", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3278790/" ]
> > The column field values are saved in variables of different formats: strings, integers, and floats. I searched in forums and tried different options, but none of them works > > > Have you read the DB-API specs? The placeholders in the query are not Python formatting instructions but plain placeholders - the db connector will take care of type conversion, escaping, and even a bit of sanitization. FWIW it's a pity the MySQLdb authors chose the Python string format placeholder (`%s`), as it causes a lot of confusion... This would be clearer if they had chosen something like "?" instead. Anyway - the "right way" to write your query is: ``` cursor.execute( 'INSERT INTO table(device, number1, number2) VALUES (%s,%s,%s)', (device_var, number1_var, number2_var) ) ``` And of course do not forget to commit your transaction at some point...
Firstly, ``` cursor.execute('INSERT INTO table(device, number1, number2) VALUES (%s,%d,%f)',(device_var,number1_var, number2_var)) ``` seems wrong, and the second version is wrong too. The third, with **.format**, is syntactically correct. Then why did the query not run? **Because of the reason below**: Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you'll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database. Conversely, you can also use connection.rollback() to throw away any changes you've made since the last commit. Commit before closing your connection: 1. `db = mysql.connect(user="root", passwd="", db="some_db", unix_socket="/opt/lampp/var/mysql/mysql.sock")` 2. `cursor = db.cursor()` 3. `cursor.execute("insert into my table ..")` 4. `db.commit()`
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like you can with docker-compose? I cannot find anything about this in the Dockerfile documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pip install -r requirements.txt CMD [ "python", "./manager.py" ] ``` I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, you cannot safely commit the Dockerfile to GitHub.
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
Yes, there are a couple of ways you can do this. Docker Compose -------------- In Docker Compose, you can supply environment variables in the file itself, or point to an external env file: ``` # docker-compose.yml version: '2' services: service-name: image: service-app environment: - GREETING=hello env_file: - .env ``` Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment): ``` # docker-compose-dev.yml version: '2' services: service-name: environment: - GREETING=goodbye ``` You can then run it thus: ``` docker-compose -f docker-compose.yml -f docker-compose-dev.yml up ``` Docker only ----------- To do this in Docker only, use your entrypoint or command to run an intermediate script, thus: ``` #Dockerfile .... ENTRYPOINT ["sh", "bin/start.sh"] ``` And then in your start script: ``` #!/bin/sh source .env python /manager.py ``` I've [used this related answer](https://stackoverflow.com/a/33186458/472495) as a helpful reference for myself in the past. Update on PID 1 --------------- To amplify my remark in the comments, if you make your entry point a shell or Python script, it is likely that [Unix signals](https://en.wikipedia.org/wiki/Signal_(IPC)) (stop, kill, etc) will not be passed onto your process. This is because that script will become [process ID 1](https://en.wikipedia.org/wiki/Init), which makes it the parent process of all other processes in the container - in Linux/Unix there is an expectation that this PID will forward signals to its children, but unless you explicitly implement that, it won't happen. To rectify this, you can install an init system. I use [dumb-init from Yelp](https://github.com/Yelp/dumb-init). This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
If you need the environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process. If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like you can with docker-compose? I cannot find anything about this in the Dockerfile documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pip install -r requirements.txt CMD [ "python", "./manager.py" ] ``` I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, you cannot safely commit the Dockerfile to GitHub.
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
Yes, there are a couple of ways you can do this. Docker Compose -------------- In Docker Compose, you can supply environment variables in the file itself, or point to an external env file: ``` # docker-compose.yml version: '2' services: service-name: image: service-app environment: - GREETING=hello env_file: - .env ``` Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment): ``` # docker-compose-dev.yml version: '2' services: service-name: environment: - GREETING=goodbye ``` You can then run it thus: ``` docker-compose -f docker-compose.yml -f docker-compose-dev.yml up ``` Docker only ----------- To do this in Docker only, use your entrypoint or command to run an intermediate script, thus: ``` #Dockerfile .... ENTRYPOINT ["sh", "bin/start.sh"] ``` And then in your start script: ``` #!/bin/sh source .env python /manager.py ``` I've [used this related answer](https://stackoverflow.com/a/33186458/472495) as a helpful reference for myself in the past. Update on PID 1 --------------- To amplify my remark in the comments, if you make your entry point a shell or Python script, it is likely that [Unix signals](https://en.wikipedia.org/wiki/Signal_(IPC)) (stop, kill, etc) will not be passed onto your process. This is because that script will become [process ID 1](https://en.wikipedia.org/wiki/Init), which makes it the parent process of all other processes in the container - in Linux/Unix there is an expectation that this PID will forward signals to its children, but unless you explicitly implement that, it won't happen. To rectify this, you can install an init system. I use [dumb-init from Yelp](https://github.com/Yelp/dumb-init). This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
There are various options: <https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file> ``` docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash ``` (You can also just reference previously `exported` variables, see `USER` below.) The one answering your question about an .env file is: ``` cat env.list # This is a comment VAR1=value1 VAR2=value2 USER docker run --env-file env.list ubuntu env | grep VAR VAR1=value1 VAR2=value2 docker run --env-file env.list ubuntu env | grep USER USER=denis ``` You can also load the environment variables from a file. This file should use the syntax `variable=value` (which sets the variable to the given value) or `variable` (which takes the **value from the local environment**), and # for comments. Regarding the difference between variables needed at (image) build time or (container) runtime and how to combine `ENV` and `ARG` for dynamic build arguments you might try this: [ARG or ENV, which one to use in this case?](https://stackoverflow.com/questions/41916386/arg-or-env-which-one-to-use-in-this-case)
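The `env.list` syntax described above - `variable=value`, bare `variable` inherited from the local environment, and `#` comments - can be mimicked in a few lines of Python. `parse_env_file` is a hypothetical helper for illustration, not Docker's actual parser:

```python
import os

def parse_env_file(text, environ=os.environ):
    """Parse docker-style env-file lines into a dict.

    Supports 'VAR=value', bare 'VAR' (taken from the local environment),
    and '#' comment lines, mirroring `docker run --env-file` syntax.
    """
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "=" in line:
            key, _, value = line.partition("=")
            env[key] = value
        elif line in environ:
            env[line] = environ[line]  # inherit from the caller's environment
    return env

sample = """\
# This is a comment
VAR1=value1
VAR2=value2
USER
"""
print(parse_env_file(sample, environ={"USER": "denis"}))
# {'VAR1': 'value1', 'VAR2': 'value2', 'USER': 'denis'}
```

This matches the `env.list` example above: the two `VAR` assignments come through literally, and the bare `USER` line picks up the value from the surrounding environment.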
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like you can with docker-compose? I cannot find anything about this in the Dockerfile documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pip install -r requirements.txt CMD [ "python", "./manager.py" ] ``` I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, you cannot safely commit the Dockerfile to GitHub.
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
Yes, there are a couple of ways you can do this. Docker Compose -------------- In Docker Compose, you can supply environment variables in the file itself, or point to an external env file: ``` # docker-compose.yml version: '2' services: service-name: image: service-app environment: - GREETING=hello env_file: - .env ``` Incidentally, one nice feature that is somewhat related is that you can use multiple Compose files, with each subsequent one adding to the other. So if the above were to define a base, you can then do this (e.g. per run-time environment): ``` # docker-compose-dev.yml version: '2' services: service-name: environment: - GREETING=goodbye ``` You can then run it thus: ``` docker-compose -f docker-compose.yml -f docker-compose-dev.yml up ``` Docker only ----------- To do this in Docker only, use your entrypoint or command to run an intermediate script, thus: ``` #Dockerfile .... ENTRYPOINT ["sh", "bin/start.sh"] ``` And then in your start script: ``` #!/bin/sh source .env python /manager.py ``` I've [used this related answer](https://stackoverflow.com/a/33186458/472495) as a helpful reference for myself in the past. Update on PID 1 --------------- To amplify my remark in the comments, if you make your entry point a shell or Python script, it is likely that [Unix signals](https://en.wikipedia.org/wiki/Signal_(IPC)) (stop, kill, etc) will not be passed onto your process. This is because that script will become [process ID 1](https://en.wikipedia.org/wiki/Init), which makes it the parent process of all other processes in the container - in Linux/Unix there is an expectation that this PID will forward signals to its children, but unless you explicitly implement that, it won't happen. To rectify this, you can install an init system. I use [dumb-init from Yelp](https://github.com/Yelp/dumb-init). This repo also features plenty of detail if you want to understand it a bit better, or simple install instructions if you just want to "install and forget".
I really like @halfer's approach, but this could also work. `docker run` takes an optional parameter called `--env-file` which is super helpful. So your Dockerfile could look like this: ``` COPY .env .env ``` and then in a build script use: ```sh docker build -t my_docker_image . && docker run --env-file .env my_docker_image ```
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like you can with docker-compose? I cannot find anything about this in the Dockerfile documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pip install -r requirements.txt CMD [ "python", "./manager.py" ] ``` I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, you cannot safely commit the Dockerfile to GitHub.
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
There are various options: <https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file> ``` docker run -e MYVAR1 --env MYVAR2=foo --env-file ./env.list ubuntu bash ``` (You can also just reference previously `exported` variables, see `USER` below.) The one answering your question about an .env file is: ``` cat env.list # This is a comment VAR1=value1 VAR2=value2 USER docker run --env-file env.list ubuntu env | grep VAR VAR1=value1 VAR2=value2 docker run --env-file env.list ubuntu env | grep USER USER=denis ``` You can also load the environment variables from a file. This file should use the syntax `variable=value` (which sets the variable to the given value) or `variable` (which takes the **value from the local environment**), and # for comments. Regarding the difference between variables needed at (image) build time or (container) runtime and how to combine `ENV` and `ARG` for dynamic build arguments you might try this: [ARG or ENV, which one to use in this case?](https://stackoverflow.com/questions/41916386/arg-or-env-which-one-to-use-in-this-case)
If you need the environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process. If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
46,917,831
I am working on a simple Docker image that has a large number of environment variables. Are you able to import an environment variable file like you can with docker-compose? I cannot find anything about this in the Dockerfile documentation. **Dockerfile** ``` FROM python:3.6 ENV ENV1 9.3 ENV ENV2 9.3.4 ... ADD . / RUN pip install -r requirements.txt CMD [ "python", "./manager.py" ] ``` I guess a good way to rephrase the question would be: how do you efficiently load multiple environment variables in a Dockerfile? If you cannot load them from a file, you cannot safely commit the Dockerfile to GitHub.
2017/10/24
[ "https://Stackoverflow.com/questions/46917831", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3361028/" ]
I really like @halfer's approach, but this could also work. `docker run` takes an optional parameter called `--env-file` which is super helpful. So your Dockerfile could look like this: ``` COPY .env .env ``` and then in a build script use: ```sh docker build -t my_docker_image . && docker run --env-file .env my_docker_image ```
If you need the environment variables at runtime, it's easiest to create a launcher script that sets up the environment with multiple `export` statements and then launches your process. If you need them at build time, have a look at the `ARG` and `ENV` statements. You'll need one per variable.
59,868,524
My goal is to get a list of the names of all the new items that have been posted on <https://www.prusaprinters.org/prints> during the full 24 hours of a given day. Through a bit of reading I've learned that I should be using Selenium because the site I'm scraping is dynamic (loads more objects as the user scrolls). Trouble is, I can't seem to get anything but an empty list from `webdriver.find_elements_by_` with any of the suffixes listed at <https://selenium-python.readthedocs.io/locating-elements.html>. On the site, I see `"class = name"` and `"class = clamp-two-lines"` when I inspect the element I want to get the title of (see screenshot), but I can't seem to return a list of all the elements on the page with that `name` class or the `clamp-two-lines` class. [![prusaprinters inspect element](https://i.stack.imgur.com/A74ZM.png)](https://i.stack.imgur.com/A74ZM.png) Here's the code I have so far (the lines commented out are failed attempts): ``` from timeit import default_timer as timer start_time = timer() print("Script Started") import bs4, selenium, smtplib, time from bs4 import BeautifulSoup from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By driver = webdriver.Chrome(r'D:\PortableApps\Python Peripherals\chromedriver.exe') url = 'https://www.prusaprinters.org/prints' driver.get(url) # foo = driver.find_elements_by_name('name') # foo = driver.find_elements_by_xpath('name') # foo = driver.find_elements_by_class_name('name') # foo = driver.find_elements_by_tag_name('name') # foo = [i.get_attribute('href') for i in driver.find_elements_by_css_selector('[id*=name]')] # foo = [i.get_attribute('href') for i in driver.find_elements_by_css_selector('[class*=name]')] # foo = [i.get_attribute('href') for i in driver.find_elements_by_css_selector('[id*=clamp-two-lines]')] # foo = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.XPATH, '//*[@id="printListOuter"]//ul[@class="clamp-two-lines"]/li'))) print(foo) driver.quit() print("Time to run: " + str(round(timer() - start_time,4)) + "s") ``` My research: 1. [Selenium only returns an empty list](https://stackoverflow.com/questions/58260935/selenium-only-returns-an-empty-list) 2. [Selenium find\_elements\_by\_css\_selector returns an empty list](https://stackoverflow.com/questions/34469504/selenium-find-elements-by-css-selector-returns-an-empty-list) 3. [Web Scraping Python (BeautifulSoup,Requests)](https://stackoverflow.com/questions/46860838/web-scraping-python-beautifulsoup-requests) 4. [Get HTML Source of WebElement in Selenium WebDriver using Python](https://stackoverflow.com/questions/7263824/get-html-source-of-webelement-in-selenium-webdriver-using-python) 5. [How to get Inspect Element code in Selenium WebDriver](https://stackoverflow.com/questions/26306566/how-to-get-inspect-element-code-in-selenium-webdriver) 6. [Web Scraping Python (BeautifulSoup,Requests)](https://stackoverflow.com/questions/46860838/web-scraping-python-beautifulsoup-requests) 7. <https://chrisalbon.com/python/web_scraping/monitor_a_website/> 8. <https://www.codementor.io/@gergelykovcs/how-and-why-i-built-a-simple-web-scrapig-script-to-notify-us-about-our-favourite-food-fcrhuhn45> 9. <https://www.tutorialspoint.com/python_web_scraping/python_web_scraping_dynamic_websites.htm>
2020/01/22
[ "https://Stackoverflow.com/questions/59868524", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9452116/" ]
To get the text, wait for visibility of the elements. The CSS selector for the titles is `#printListOuter h3`: ``` titles = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '#printListOuter h3'))) for title in titles: print(title.text) ``` Shorter version: ``` wait = WebDriverWait(driver, 10) titles = [title.text for title in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, '#printListOuter h3')))] ```
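Once the rendered HTML is in hand (e.g. from `driver.page_source`), the `<h3>` titles can also be extracted offline with the stdlib alone. The snippet below runs `html.parser` over a made-up fragment shaped like the markup in the screenshot, so the class names and titles here are assumptions, not the live site's content:

```python
from html.parser import HTMLParser

class H3Collector(HTMLParser):
    """Collect the text content of every <h3> element."""

    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_h3 = False

    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            self._in_h3 = True
            self.titles.append("")  # start a fresh title

    def handle_endtag(self, tag):
        if tag == "h3":
            self._in_h3 = False

    def handle_data(self, data):
        if self._in_h3:
            self.titles[-1] += data.strip()

# Hypothetical fragment mimicking the print-list markup from the screenshot.
html = (
    '<div id="printListOuter">'
    '<div class="print-list-item"><h3><span class="name">Benchy</span></h3></div>'
    '<div class="print-list-item"><h3><span class="name">Cable Clip</span></h3></div>'
    '</div>'
)

parser = H3Collector()
parser.feed(html)
print(parser.titles)  # ['Benchy', 'Cable Clip']
```

Selenium is still needed to render the dynamically loaded page, but this shows the parsing step itself has no dependency on the browser.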
This is the XPath for the item names: ``` .//div[@class='print-list-item']/div/a/h3/span ```
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 installed readline 6 2013-01-18 21:39:00 installed bash ``` Zaman is a POSIXct, other 2 colums are factors. I want to count the number of rows grouped by each day. I have tried, ``` > aggregate(data,by=list(data$Zaman),FUN=sum) Error occured: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, : 'sum' not defined for "POSIXt" objects > aggregate(data,by=list(data$Zaman),FUN=count) Error occured: match.fun(FUN) : Couldn't find 'count' object ``` How can I do this? Output of `dput(data)` ``` dput(data) structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360, 1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580, 1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300, 1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320, 1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100, 1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240, 1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900, 1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320, 1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120, 1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500, 1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760, 1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000, 1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720, 1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380, 1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600, 1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200, 1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020, 1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840, 1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020, 1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040, 1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L, 171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L, 328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L, 542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L, 541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L, 556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L, 340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L, 571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L, 364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L, 611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L, 120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L, 386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L, 500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L, 10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L, 121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L, 228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L, 168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L, 176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L, 175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L, 510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L, 72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L, 523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L, 221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L, 417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L, 274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L, 453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L, 309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L, 19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L, 272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L, 491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L, 105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L, 463L, 465L, 186L, 
117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L, 516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L, 154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L, 152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L, 277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L, 318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L, 276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L, 123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L, 45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L, 171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L, 308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L, 518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L, 514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L, 32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L, 44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec", "aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant", "archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk", "at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious", "audacious-plugins", "autoconf", "automake", "avahi", "bash", "binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs", "bzip2", "ca-certificates", "ca-certificates-java", "cairo", "cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog", "clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake", "cogl", "colord", "compositeproto", "coreutils", "cracklib", "cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db", "dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper", "dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors", "docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca", "enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac", "faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller", "filesystem", "findutils", "fixesproto", "flac", 
"flashplugin", "flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut", "freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble", "fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs", "gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database", "gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg", "glew", "glib-networking", "glib2", "glibc", "glibmm", "glu", "gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic", "gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection", "gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas", "gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base", "gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly", "gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins", "gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine", "gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3", "gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx", "hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids", "hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase", "imagemagick", "imlib2", "inetutils", "inkscape", "inputproto", "intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw", "isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils", "jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib", "kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel", "kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad", "less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart", "libavc1394", "libcaca", "libcanberra", "libcanberra-pulse", "libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco", "libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca", "libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml", "libedit", "libegl", "libevent", "libexif", "libffi", 
"libfontenc", "libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme", "libgnome-keyring", "libgpg-error", "libgssglue", "libgtop", "libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn", "libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo", "libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad", "libmatroska", "libmikmod", "libmms", "libmng", "libmodplug", "libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl", "libnotify", "libogg", "libpcap", "libpciaccess", "libpeas", "libpipeline", "libplist", "libpng", "libproxy", "libpulse", "libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394", "libreoffice-base", "libreoffice-calc", "libreoffice-common", "libreoffice-draw", "libreoffice-gnome", "libreoffice-impress", "libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector", "libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer", "librsvg", "libsamplerate", "libsasl", "libsecret", "libshout", "libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup", "libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora", "libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar", "libunique", "libupnp", "libusb-compat", "libusbx", "libutempter", "libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis", "libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11", "libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor", "libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util", "libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile", "libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender", "libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv", "libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers", "linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51", "lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm", "media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio", 
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev", "nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon", "nspr", "nss", "obex-data-server", "opencore-amr", "openexr", "openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2", "orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam", "pambase", "pango", "pangomm", "parted", "patch", "pavucontrol", "pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl", "perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2", "perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple", "perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser", "perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc", "pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils", "polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data", "poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng", "psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel", "pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common", "python-pyinotify", "python2", "python2-beaker", "python2-cairo", "python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject", "python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe", "python2-notify", "python2-pygments", "python2-pyqt", "python2-sip", "python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r", "randrproto", "raptor", "rasqal", "readline", "recode", "recordproto", "redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto", "rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger", "scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer", "sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow", "shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info", "sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch", "spandsp", "speex", "sqlite", "startup-notification", "strigi", "sudo", 
"sysfsutils", "syslinux", "systemd", "systemd-sysvcompat", "sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar", "thunar-volman", "tk", "totem", "totem-plparser", "traceroute", "ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace", "unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils", "util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime", "virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing", "wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp", "xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat", "xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics", "xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer", "xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings", "xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4", "xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto", "xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias", "xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit", "xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common", "xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm", "xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb", "xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp", "xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh", "xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore", "xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman", "Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame") ```
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You must express that you want *days*: `"%D"` formats a `POSIXct` as `%m/%d/%y`, so every timestamp from the same calendar day collapses to one level. This should do the trick: ``` table(factor(format(data$Zaman,"%D"))) ```
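A minimal sketch of that approach on toy data (the column name `Zaman` matches the question; the three timestamps and the UTC timezone are assumptions for the example):

```r
# Toy stand-in for the question's data frame
data <- data.frame(
  Zaman = as.POSIXct(c("2013-01-18 21:39:00", "2013-01-18 23:53:00",
                       "2013-01-19 00:04:00"), tz = "UTC")
)
# "%D" is shorthand for %m/%d/%y, so all timestamps from the same
# calendar day map to the same label before counting:
table(factor(format(data$Zaman, "%D")))
# 01/18/13 -> 2, 01/19/13 -> 1
```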
You get an error with `count` because base R has no function named `count`, but you can use `count` from the `plyr` package. Note that this groups by each unique timestamp; to get one row per day, truncate `Zaman` to a date first: ``` count(dat, vars = 'Zaman') Zaman freq 1 2013-01-18 20:39:00 66 2 2013-01-18 20:40:00 16 3 2013-01-18 20:41:00 47 4 2013-01-18 20:52:00 1 5 2013-01-18 23:47:00 1 6 2013-01-18 23:53:00 2 7 2013-01-19 00:04:00 68 8 2013-01-19 00:05:00 9 9 2013-01-19 00:06:00 4 10 2013-01-19 00:08:00 3 11 2013-01-19 00:11:00 9 12 2013-01-19 00:14:00 1 13 2013-01-19 00:19:00 89 14 2013-01-19 00:20:00 9 ```
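Since the question asks for counts per *day* while `Zaman` holds full timestamps, here is a base-R sketch without `plyr` (toy data standing in for the question's frame; the helper column `Gun` and the UTC timezone are inventions for the example):

```r
# Toy stand-in for the question's data frame
data <- data.frame(
  Zaman = as.POSIXct(c("2013-01-18 21:39:00", "2013-01-18 21:40:00",
                       "2013-01-19 00:04:00"), tz = "UTC"),
  Paket = c("linux-api-headers", "tzdata", "glibc")
)
# Truncate each timestamp to a calendar day, then count rows per day
data$Gun <- as.Date(data$Zaman, tz = "UTC")
table(data$Gun)
# 2013-01-18 -> 2, 2013-01-19 -> 1
# The same per-day count in the aggregate() idiom the question tried:
aggregate(Paket ~ Gun, data = data, FUN = length)
```

Passing `tz` explicitly to `as.Date()` matters: its default for `POSIXct` input is UTC, which can shift timestamps near midnight into the wrong day.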
14549433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 installed readline 6 2013-01-18 21:39:00 installed bash ``` Zaman is a POSIXct, other 2 colums are factors. I want to count the number of rows grouped by each day. I have tried, ``` > aggregate(data,by=list(data$Zaman),FUN=sum) Error occured: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, : 'sum' not defined for "POSIXt" objects > aggregate(data,by=list(data$Zaman),FUN=count) Error occured: match.fun(FUN) : Couldn't find 'count' object ``` How can I do this? Output of `dput(data)` ``` dput(data) structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360, 1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580, 1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300, 1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320, 1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100, 1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240, 1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900, 1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320, 1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120, 1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500, 1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760, 1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000, 1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720, 1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380, 1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600, 1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200, 1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020, 1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840, 1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020, 1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040, 1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
Perhaps you are simply looking for the `length` function? On the six sample rows from the question: ``` > aggregate(data, by=list(data$Zaman), FUN=length) Group.1 Zaman Operasyon Paket 1 2013-01-18 21:39:00 6 6 6 ```
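As written, the call above counts within whatever grouping you hand to `by=`, so grouping on the raw timestamps gives one group per second, not per day. To count rows per calendar day, collapse `Zaman` to a date first. A minimal sketch, using a small hypothetical three-row stand-in for the question's 730-row data frame:

```r
# Hypothetical three-row stand-in for the question's data frame
data <- data.frame(
  Zaman = as.POSIXct(c("2013-01-18 21:39:00",
                       "2013-01-18 21:40:00",
                       "2013-01-19 09:00:00"), tz = "GMT"),
  Operasyon = factor(c("installed", "installed", "upgraded")),
  Paket     = factor(c("bash", "glibc", "vim"))
)

# as.Date() drops the time-of-day, so aggregate() groups by calendar day;
# length() then counts the rows in each group (any column works, since
# length() ignores the values)
counts <- aggregate(list(n = data$Paket),
                    by  = list(Day = as.Date(data$Zaman)),
                    FUN = length)
print(counts)
```

This also sidesteps the `'sum' not defined for "POSIXt" objects` error from the question, since `length` never touches the POSIXct column's values.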
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 installed readline 6 2013-01-18 21:39:00 installed bash ``` Zaman is a POSIXct, other 2 colums are factors. I want to count the number of rows grouped by each day. I have tried, ``` > aggregate(data,by=list(data$Zaman),FUN=sum) Error occured: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, : 'sum' not defined for "POSIXt" objects > aggregate(data,by=list(data$Zaman),FUN=count) Error occured: match.fun(FUN) : Couldn't find 'count' object ``` How can I do this? Output of `dput(data)` ``` dput(data) structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360, 1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580, 1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300, 1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320, 1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100, 1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240, 1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900, 1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320, 1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120, 1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500, 1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760, 1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000, 1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720, 1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380, 1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600, 1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200, 1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020, 1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840, 1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020, 1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040, 1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L, 171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L, 328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L, 542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L, 541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L, 556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L, 340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L, 571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L, 364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L, 611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L, 120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L, 386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L, 500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L, 10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L, 121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L, 228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L, 168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L, 176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L, 175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L, 510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L, 72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L, 523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L, 221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L, 417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L, 274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L, 453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L, 309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L, 19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L, 272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L, 491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L, 105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L, 463L, 465L, 186L, 
117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L, 516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L, 154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L, 152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L, 277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L, 318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L, 276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L, 123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L, 45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L, 171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L, 308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L, 518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L, 514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L, 32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L, 44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec", "aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant", "archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk", "at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious", "audacious-plugins", "autoconf", "automake", "avahi", "bash", "binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs", "bzip2", "ca-certificates", "ca-certificates-java", "cairo", "cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog", "clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake", "cogl", "colord", "compositeproto", "coreutils", "cracklib", "cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db", "dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper", "dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors", "docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca", "enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac", "faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller", "filesystem", "findutils", "fixesproto", "flac", 
"flashplugin", "flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut", "freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble", "fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs", "gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database", "gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg", "glew", "glib-networking", "glib2", "glibc", "glibmm", "glu", "gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic", "gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection", "gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas", "gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base", "gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly", "gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins", "gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine", "gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3", "gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx", "hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids", "hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase", "imagemagick", "imlib2", "inetutils", "inkscape", "inputproto", "intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw", "isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils", "jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib", "kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel", "kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad", "less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart", "libavc1394", "libcaca", "libcanberra", "libcanberra-pulse", "libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco", "libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca", "libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml", "libedit", "libegl", "libevent", "libexif", "libffi", 
"libfontenc", "libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme", "libgnome-keyring", "libgpg-error", "libgssglue", "libgtop", "libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn", "libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo", "libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad", "libmatroska", "libmikmod", "libmms", "libmng", "libmodplug", "libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl", "libnotify", "libogg", "libpcap", "libpciaccess", "libpeas", "libpipeline", "libplist", "libpng", "libproxy", "libpulse", "libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394", "libreoffice-base", "libreoffice-calc", "libreoffice-common", "libreoffice-draw", "libreoffice-gnome", "libreoffice-impress", "libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector", "libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer", "librsvg", "libsamplerate", "libsasl", "libsecret", "libshout", "libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup", "libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora", "libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar", "libunique", "libupnp", "libusb-compat", "libusbx", "libutempter", "libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis", "libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11", "libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor", "libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util", "libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile", "libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender", "libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv", "libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers", "linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51", "lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm", "media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio", 
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev", "nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon", "nspr", "nss", "obex-data-server", "opencore-amr", "openexr", "openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2", "orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam", "pambase", "pango", "pangomm", "parted", "patch", "pavucontrol", "pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl", "perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2", "perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple", "perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser", "perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc", "pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils", "polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data", "poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng", "psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel", "pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common", "python-pyinotify", "python2", "python2-beaker", "python2-cairo", "python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject", "python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe", "python2-notify", "python2-pygments", "python2-pyqt", "python2-sip", "python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r", "randrproto", "raptor", "rasqal", "readline", "recode", "recordproto", "redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto", "rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger", "scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer", "sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow", "shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info", "sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch", "spandsp", "speex", "sqlite", "startup-notification", "strigi", "sudo", 
"sysfsutils", "syslinux", "systemd", "systemd-sysvcompat", "sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar", "thunar-volman", "tk", "totem", "totem-plparser", "traceroute", "ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace", "unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils", "util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime", "virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing", "wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp", "xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat", "xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics", "xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer", "xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings", "xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4", "xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto", "xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias", "xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit", "xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common", "xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm", "xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb", "xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp", "xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh", "xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore", "xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman", "Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame") ```
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You need to express that you want *days* rather than full timestamps; formatting each value down to day precision should do the trick: ``` table(factor(format(data$Zaman, "%D"))) ``` (`%D` renders each timestamp as an `mm/dd/yy` string, so all rows from the same day collapse into one table cell.)
Use `cut`, which has in-built support for working with dates: ``` attributes(data$Zaman)$tzone <- "GMT" table(cut(data$Zaman, "1 day")) # # 2013-01-18 2013-01-19 2013-01-20 2013-01-21 2013-01-22 2013-01-23 2013-01-24 2013-01-25 2013-01-26 # 339 209 20 8 76 56 0 10 12 table(cut(data$Zaman, "2 days")) # # 2013-01-18 2013-01-20 2013-01-22 2013-01-24 2013-01-26 # 548 28 132 10 12 ```
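The `tzone` tweak in the answer above is not cosmetic: `cut.POSIXt` bins timestamps by calendar day in a particular time zone, so the same instant can fall into different day-bins depending on the zone. A small illustration (the instant and the zones are made up for the example):

```r
# One instant, half an hour before midnight GMT
x <- as.POSIXct("2013-01-18 23:30:00", tz = "GMT")

# Rendered in GMT it is still January 18 ...
format(x, "%Y-%m-%d", tz = "GMT")             # "2013-01-18"

# ... but in Istanbul (GMT+2 in winter) it is already January 19
format(x, "%Y-%m-%d", tz = "Europe/Istanbul") # "2013-01-19"
```

Pinning `tzone` first therefore makes the daily counts reproducible regardless of the machine's local time zone.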
14,549,433
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 installed readline 6 2013-01-18 21:39:00 installed bash ``` Zaman is a POSIXct, other 2 colums are factors. I want to count the number of rows grouped by each day. I have tried, ``` > aggregate(data,by=list(data$Zaman),FUN=sum) Error occured: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, : 'sum' not defined for "POSIXt" objects > aggregate(data,by=list(data$Zaman),FUN=count) Error occured: match.fun(FUN) : Couldn't find 'count' object ``` How can I do this? Output of `dput(data)` ``` dput(data) structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360, 1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580, 1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300, 1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320, 1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100, 1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240, 1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900, 1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320, 1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120, 1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500, 1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760, 1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000, 1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720, 1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380, 1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600, 1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200, 1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020, 1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840, 1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020, 1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040, 1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L, 171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L, 328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L, 542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L, 541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L, 556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L, 340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L, 571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L, 364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L, 611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L, 120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L, 386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L, 500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L, 10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L, 121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L, 228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L, 168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L, 176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L, 175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L, 510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L, 72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L, 523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L, 221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L, 417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L, 274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L, 453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L, 309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L, 19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L, 272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L, 491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L, 105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L, 463L, 465L, 186L, 
117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L, 516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L, 154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L, 152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L, 277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L, 318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L, 276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L, 123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L, 45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L, 171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L, 308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L, 518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L, 514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L, 32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L, 44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec", "aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant", "archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk", "at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious", "audacious-plugins", "autoconf", "automake", "avahi", "bash", "binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs", "bzip2", "ca-certificates", "ca-certificates-java", "cairo", "cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog", "clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake", "cogl", "colord", "compositeproto", "coreutils", "cracklib", "cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db", "dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper", "dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors", "docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca", "enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac", "faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller", "filesystem", "findutils", "fixesproto", "flac", 
"flashplugin", "flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut", "freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble", "fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs", "gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database", "gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg", "glew", "glib-networking", "glib2", "glibc", "glibmm", "glu", "gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic", "gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection", "gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas", "gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base", "gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly", "gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins", "gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine", "gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3", "gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx", "hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids", "hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase", "imagemagick", "imlib2", "inetutils", "inkscape", "inputproto", "intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw", "isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils", "jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib", "kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel", "kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad", "less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart", "libavc1394", "libcaca", "libcanberra", "libcanberra-pulse", "libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco", "libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca", "libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml", "libedit", "libegl", "libevent", "libexif", "libffi", 
"libfontenc", "libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme", "libgnome-keyring", "libgpg-error", "libgssglue", "libgtop", "libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn", "libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo", "libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad", "libmatroska", "libmikmod", "libmms", "libmng", "libmodplug", "libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl", "libnotify", "libogg", "libpcap", "libpciaccess", "libpeas", "libpipeline", "libplist", "libpng", "libproxy", "libpulse", "libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394", "libreoffice-base", "libreoffice-calc", "libreoffice-common", "libreoffice-draw", "libreoffice-gnome", "libreoffice-impress", "libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector", "libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer", "librsvg", "libsamplerate", "libsasl", "libsecret", "libshout", "libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup", "libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora", "libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar", "libunique", "libupnp", "libusb-compat", "libusbx", "libutempter", "libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis", "libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11", "libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor", "libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util", "libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile", "libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender", "libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv", "libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers", "linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51", "lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm", "media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio", 
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev", "nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon", "nspr", "nss", "obex-data-server", "opencore-amr", "openexr", "openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2", "orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam", "pambase", "pango", "pangomm", "parted", "patch", "pavucontrol", "pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl", "perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2", "perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple", "perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser", "perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc", "pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils", "polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data", "poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng", "psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel", "pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common", "python-pyinotify", "python2", "python2-beaker", "python2-cairo", "python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject", "python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe", "python2-notify", "python2-pygments", "python2-pyqt", "python2-sip", "python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r", "randrproto", "raptor", "rasqal", "readline", "recode", "recordproto", "redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto", "rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger", "scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer", "sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow", "shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info", "sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch", "spandsp", "speex", "sqlite", "startup-notification", "strigi", "sudo", 
"sysfsutils", "syslinux", "systemd", "systemd-sysvcompat", "sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar", "thunar-volman", "tk", "totem", "totem-plparser", "traceroute", "ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace", "unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils", "util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime", "virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing", "wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp", "xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat", "xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics", "xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer", "xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings", "xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4", "xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto", "xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias", "xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit", "xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common", "xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm", "xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb", "xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp", "xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh", "xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore", "xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman", "Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame") ```
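For completeness, the failing `aggregate` calls from the question can be repaired directly: group on the calendar date (`as.Date` drops the time of day) and count with `length` instead of `sum` -- `sum` is not defined for POSIXt values, which is exactly what the first error message said. A minimal sketch on a toy version of the data:

```r
# A tiny stand-in for the data frame from the question
data <- data.frame(
  Zaman = as.POSIXct(c("2013-01-18 21:39:00", "2013-01-18 21:40:00",
                       "2013-01-19 09:00:00"), tz = "GMT"),
  Paket = c("glibc", "bash", "vim")
)

# Count rows per calendar day: as.Date() truncates to the day, length() counts
aggregate(list(N = data$Paket),
          by = list(Day = as.Date(data$Zaman)),
          FUN = length)
#          Day N
# 1 2013-01-18 2
# 2 2013-01-19 1
```

The same call works unchanged on the full 730-row data frame; only the `by` expression matters.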
I'm partial to data.table for these tasks: ``` library(data.table) dt <- as.data.table(df) dt[,days:=format(Zaman,"%d.%m.%Y")] dt[,.N,by=days] days N 1: 18.01.2013 133 2: 19.01.2013 415 3: 20.01.2013 20 <snip> ```
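And the shortest base-R route, for comparison: `as.Date` truncates each POSIXct to its day and `table` does the counting, with no extra packages (sketched on a tiny stand-in data frame):

```r
df <- data.frame(
  Zaman = as.POSIXct(c("2013-01-18 21:39:00", "2013-01-19 09:00:00",
                       "2013-01-19 09:05:00"), tz = "GMT")
)

# Truncate to whole days, then tabulate
table(as.Date(df$Zaman))
# 2013-01-18 2013-01-19
#          1          2
```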
Here is what my data frame looks like; ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 installed readline 6 2013-01-18 21:39:00 installed bash ``` Zaman is a POSIXct, other 2 colums are factors. I want to count the number of rows grouped by each day. I have tried, ``` > aggregate(data,by=list(data$Zaman),FUN=sum) Error occured: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, : 'sum' not defined for "POSIXt" objects > aggregate(data,by=list(data$Zaman),FUN=count) Error occured: match.fun(FUN) : Couldn't find 'count' object ``` How can I do this? Output of `dput(data)` ``` dput(data) structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360, 1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580, 1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300, 1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320, 1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100, 1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240, 1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900, 1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320, 1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120, 1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500, 1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760, 1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000, 1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720, 1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380, 1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600, 1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200, 1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020, 1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840, 1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020, 1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040, 1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L, 171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L, 328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L, 542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L, 541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L, 556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L, 340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L, 571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L, 364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L, 611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L, 120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L, 386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L, 500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L, 10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L, 121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L, 228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L, 168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L, 176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L, 175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L, 510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L, 72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L, 523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L, 221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L, 417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L, 274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L, 453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L, 309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L, 19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L, 272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L, 491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L, 105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L, 463L, 465L, 186L, 
117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L, 516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L, 154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L, 152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L, 277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L, 318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L, 276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L, 123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L, 45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L, 171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L, 308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L, 518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L, 514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L, 32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L, 44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec", "aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant", "archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk", "at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious", "audacious-plugins", "autoconf", "automake", "avahi", "bash", "binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs", "bzip2", "ca-certificates", "ca-certificates-java", "cairo", "cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog", "clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake", "cogl", "colord", "compositeproto", "coreutils", "cracklib", "cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db", "dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper", "dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors", "docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca", "enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac", "faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller", "filesystem", "findutils", "fixesproto", "flac", 
"flashplugin", "flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut", "freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble", "fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs", "gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database", "gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg", "glew", "glib-networking", "glib2", "glibc", "glibmm", "glu", "gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic", "gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection", "gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas", "gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base", "gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly", "gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins", "gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine", "gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3", "gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx", "hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids", "hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase", "imagemagick", "imlib2", "inetutils", "inkscape", "inputproto", "intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw", "isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils", "jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib", "kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel", "kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad", "less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart", "libavc1394", "libcaca", "libcanberra", "libcanberra-pulse", "libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco", "libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca", "libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml", "libedit", "libegl", "libevent", "libexif", "libffi", 
"libfontenc", "libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme", "libgnome-keyring", "libgpg-error", "libgssglue", "libgtop", "libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn", "libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo", "libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad", "libmatroska", "libmikmod", "libmms", "libmng", "libmodplug", "libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl", "libnotify", "libogg", "libpcap", "libpciaccess", "libpeas", "libpipeline", "libplist", "libpng", "libproxy", "libpulse", "libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394", "libreoffice-base", "libreoffice-calc", "libreoffice-common", "libreoffice-draw", "libreoffice-gnome", "libreoffice-impress", "libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector", "libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer", "librsvg", "libsamplerate", "libsasl", "libsecret", "libshout", "libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup", "libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora", "libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar", "libunique", "libupnp", "libusb-compat", "libusbx", "libutempter", "libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis", "libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11", "libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor", "libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util", "libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile", "libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender", "libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv", "libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers", "linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51", "lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm", "media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio", 
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev", "nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon", "nspr", "nss", "obex-data-server", "opencore-amr", "openexr", "openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2", "orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam", "pambase", "pango", "pangomm", "parted", "patch", "pavucontrol", "pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl", "perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2", "perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple", "perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser", "perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc", "pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils", "polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data", "poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng", "psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel", "pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common", "python-pyinotify", "python2", "python2-beaker", "python2-cairo", "python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject", "python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe", "python2-notify", "python2-pygments", "python2-pyqt", "python2-sip", "python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r", "randrproto", "raptor", "rasqal", "readline", "recode", "recordproto", "redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto", "rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger", "scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer", "sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow", "shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info", "sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch", "spandsp", "speex", "sqlite", "startup-notification", "strigi", "sudo", 
"sysfsutils", "syslinux", "systemd", "systemd-sysvcompat", "sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar", "thunar-volman", "tk", "totem", "totem-plparser", "traceroute", "ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace", "unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils", "util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime", "virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing", "wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp", "xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat", "xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics", "xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer", "xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings", "xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4", "xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto", "xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias", "xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit", "xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common", "xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm", "xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb", "xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp", "xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh", "xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore", "xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman", "Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame") ```
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
You get an error with `count` because base R has no function named `count`. You can, however, use the `count` function from the `plyr` package:

```
library(plyr)
count(data, vars = 'Zaman')
                 Zaman freq
1  2013-01-18 20:39:00   66
2  2013-01-18 20:40:00   16
3  2013-01-18 20:41:00   47
4  2013-01-18 20:52:00    1
5  2013-01-18 23:47:00    1
6  2013-01-18 23:53:00    2
7  2013-01-19 00:04:00   68
8  2013-01-19 00:05:00    9
9  2013-01-19 00:06:00    4
10 2013-01-19 00:08:00    3
11 2013-01-19 00:11:00    9
12 2013-01-19 00:14:00    1
13 2013-01-19 00:19:00   89
14 2013-01-19 00:20:00    9
```
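Note that the output above has one row per distinct *timestamp*, while the question asks for counts per *day*. A minimal sketch of the per-day version (using a hypothetical six-row sample in place of the question's full `dput` data, with an explicit UTC timezone so `as.Date` doesn't shift dates): truncate the timestamps to calendar dates first, then count on the new column.

```r
library(plyr)

# Hypothetical six-row sample standing in for the question's full data
data <- data.frame(
  Zaman = as.POSIXct(rep("2013-01-18 21:39:00", 6), tz = "UTC"),
  Operasyon = factor(rep("installed", 6)),
  Paket = factor(c("linux-api-headers", "tzdata", "glibc",
                   "ncurses", "readline", "bash"))
)

# Truncate the POSIXct timestamps to calendar days, then count per day
data$Gun <- as.Date(data$Zaman, tz = "UTC")
count(data, vars = "Gun")
#          Gun freq
# 1 2013-01-18    6
```

The column name `Gun` is just an illustrative choice; any name works as long as the same name is passed to `vars`.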
Perhaps you are simply looking for the `length` function?

```
> aggregate(data, by = list(data$Zaman), FUN = length)
              Group.1 Zaman Operasyon Paket
1 2013-01-18 21:39:00     6         6     6
```

(output shown for the six-row sample at the top of the question)
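The same idea works per day in base R with no extra packages. A sketch, again on a hypothetical six-row sample with an explicit UTC timezone: `table` on the truncated dates gives one count per calendar day, and `aggregate` with a formula does the same via `length`.

```r
# Hypothetical six-row sample standing in for the question's full data
data <- data.frame(
  Zaman = as.POSIXct(rep("2013-01-18 21:39:00", 6), tz = "UTC"),
  Operasyon = factor(rep("installed", 6)),
  Paket = factor(c("linux-api-headers", "tzdata", "glibc",
                   "ncurses", "readline", "bash"))
)

# One count per calendar day, no packages needed
table(as.Date(data$Zaman, tz = "UTC"))
#
# 2013-01-18
#          6

# The same with aggregate(): length of any column per day
aggregate(Paket ~ as.Date(Zaman, tz = "UTC"), data = data, FUN = length)
```

The `tz = "UTC"` argument matters: `as.Date` on a POSIXct defaults to UTC regardless of the timestamp's own timezone, so being explicit avoids rows silently landing on the wrong day.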
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L, 171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L, 328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L, 542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L, 541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L, 556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L, 340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L, 571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L, 364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L, 611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L, 120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L, 386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L, 500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L, 10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L, 121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L, 228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L, 168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L, 176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L, 175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L, 510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L, 72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L, 523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L, 221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L, 417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L, 274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L, 453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L, 309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L, 19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L, 272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L, 491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L, 105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L, 463L, 465L, 186L, 
117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L, 516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L, 154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L, 152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L, 277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L, 318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L, 276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L, 123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L, 45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L, 171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L, 308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L, 518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L, 514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L, 32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L, 44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec", "aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant", "archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk", "at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious", "audacious-plugins", "autoconf", "automake", "avahi", "bash", "binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs", "bzip2", "ca-certificates", "ca-certificates-java", "cairo", "cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog", "clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake", "cogl", "colord", "compositeproto", "coreutils", "cracklib", "cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db", "dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper", "dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors", "docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca", "enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac", "faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller", "filesystem", "findutils", "fixesproto", "flac", 
"flashplugin", "flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut", "freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble", "fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs", "gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database", "gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg", "glew", "glib-networking", "glib2", "glibc", "glibmm", "glu", "gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic", "gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection", "gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas", "gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base", "gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly", "gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins", "gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine", "gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3", "gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx", "hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids", "hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase", "imagemagick", "imlib2", "inetutils", "inkscape", "inputproto", "intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw", "isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils", "jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib", "kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel", "kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad", "less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart", "libavc1394", "libcaca", "libcanberra", "libcanberra-pulse", "libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco", "libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca", "libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml", "libedit", "libegl", "libevent", "libexif", "libffi", 
"libfontenc", "libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme", "libgnome-keyring", "libgpg-error", "libgssglue", "libgtop", "libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn", "libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo", "libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad", "libmatroska", "libmikmod", "libmms", "libmng", "libmodplug", "libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl", "libnotify", "libogg", "libpcap", "libpciaccess", "libpeas", "libpipeline", "libplist", "libpng", "libproxy", "libpulse", "libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394", "libreoffice-base", "libreoffice-calc", "libreoffice-common", "libreoffice-draw", "libreoffice-gnome", "libreoffice-impress", "libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector", "libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer", "librsvg", "libsamplerate", "libsasl", "libsecret", "libshout", "libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup", "libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora", "libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar", "libunique", "libupnp", "libusb-compat", "libusbx", "libutempter", "libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis", "libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11", "libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor", "libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util", "libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile", "libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender", "libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv", "libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers", "linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51", "lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm", "media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio", 
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev", "nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon", "nspr", "nss", "obex-data-server", "opencore-amr", "openexr", "openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2", "orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam", "pambase", "pango", "pangomm", "parted", "patch", "pavucontrol", "pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl", "perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2", "perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple", "perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser", "perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc", "pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils", "polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data", "poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng", "psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel", "pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common", "python-pyinotify", "python2", "python2-beaker", "python2-cairo", "python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject", "python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe", "python2-notify", "python2-pygments", "python2-pyqt", "python2-sip", "python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r", "randrproto", "raptor", "rasqal", "readline", "recode", "recordproto", "redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto", "rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger", "scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer", "sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow", "shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info", "sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch", "spandsp", "speex", "sqlite", "startup-notification", "strigi", "sudo", 
"sysfsutils", "syslinux", "systemd", "systemd-sysvcompat", "sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar", "thunar-volman", "tk", "totem", "totem-plparser", "traceroute", "ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace", "unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils", "util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime", "virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing", "wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp", "xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat", "xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics", "xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer", "xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings", "xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4", "xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto", "xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias", "xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit", "xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common", "xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm", "xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb", "xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp", "xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh", "xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore", "xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman", "Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame") ```
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
Use `cut`, which has built-in support for working with dates: ``` attributes(data$Zaman)$tzone <- "GMT" table(cut(data$Zaman, "1 day")) # # 2013-01-18 2013-01-19 2013-01-20 2013-01-21 2013-01-22 2013-01-23 2013-01-24 2013-01-25 2013-01-26 # 339 209 20 8 76 56 0 10 12 table(cut(data$Zaman, "2 days")) # # 2013-01-18 2013-01-20 2013-01-22 2013-01-24 2013-01-26 # 548 28 132 10 12 ```
Perhaps you are simply looking for the `length` function? ``` > aggregate(data,by=list(data$Zaman),FUN=length) Group.1 Zaman Operasyon Paket 1 2013-01-18 21:39:00 6 6 6 ```
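The `length` approach above groups by each exact timestamp; to count rows per calendar day as the question asks, it can be combined with `as.Date`. A minimal sketch, assuming the `data` frame from the question (the `Day` and `Count` names are illustrative, not from the original):

```r
# Collapse each POSIXct timestamp to its calendar day, then count rows
# per day by applying length() to one column within each group.
daily <- aggregate(data["Zaman"], by = list(Day = as.Date(data$Zaman)), FUN = length)
names(daily)[2] <- "Count"  # the Zaman column now holds per-day row counts
daily
```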
14,549,433
Here is what my data frame looks like: ``` Zaman Operasyon Paket 1 2013-01-18 21:39:00 installed linux-api-headers 2 2013-01-18 21:39:00 installed tzdata 3 2013-01-18 21:39:00 installed glibc 4 2013-01-18 21:39:00 installed ncurses 5 2013-01-18 21:39:00 installed readline 6 2013-01-18 21:39:00 installed bash ``` Zaman is a POSIXct; the other 2 columns are factors. I want to count the number of rows grouped by each day. I have tried: ``` > aggregate(data,by=list(data$Zaman),FUN=sum) Error occurred: Summary.POSIXct(c(1358537940, 1358537940, 1358537940, 1358537940, : 'sum' not defined for "POSIXt" objects > aggregate(data,by=list(data$Zaman),FUN=count) Error occurred: match.fun(FUN) : Couldn't find 'count' object ``` How can I do this? Output of `dput(data)` ``` dput(data) structure(list(Zaman = structure(c(1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358537940, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538000, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 
1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538060, 1358538720, 1358549220, 1358549580, 1358549580, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550240, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550300, 1358550360, 1358550360, 1358550360, 1358550360, 1358550480, 1358550480, 1358550480, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550660, 1358550840, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 
1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551140, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551200, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551320, 1358551800, 1358552580, 1358552580, 1358553300, 1358553300, 1358553300, 1358553300, 1358553300, 1358553720, 1358553720, 1358554320, 1358554320, 1358554320, 1358554320, 1358554320, 1358554380, 1358554380, 1358555100, 1358555100, 1358555100, 1358555280, 1358555280, 1358555280, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555460, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555520, 1358555640, 1358556240, 1358556240, 1358556240, 1358556240, 1358556300, 1358556300, 1358556300, 1358557020, 1358557860, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358558280, 1358559660, 1358559660, 1358559900, 1358559900, 1358559900, 1358559900, 1358560140, 1358613600, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614140, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 
1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614200, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614260, 1358614320, 1358614320, 1358614320, 1358614320, 1358614320, 1358614500, 1358614500, 1358614500, 1358614500, 1358616120, 1358616120, 1358616120, 1358616120, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358618280, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358620500, 1358689500, 1358689500, 1358689500, 1358689560, 1358691060, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708700, 1358708760, 1358709600, 1358726940, 1358726940, 1358726940, 1358726940, 1358727000, 1358727000, 1358768340, 1358768340, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856660, 1358856720, 1358856720, 1358856720, 1358857080, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 
1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857260, 1358857380, 1358857380, 1358857500, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858160, 1358858460, 1358858520, 1358859780, 1358859840, 1358859840, 1358862600, 1358862660, 1358863440, 1358863500, 1358896140, 1358896140, 1358896200, 1358896200, 1358943900, 1358943900, 1358943900, 1358944020, 1358944020, 1358944080, 1358949840, 1358949840, 1358949840, 1358949840, 1358949840, 1358949900, 1358949900, 1358949900, 1358949900, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358949960, 1358950020, 1358950020, 1358950020, 1358950020, 1358966040, 1358966040, 1358966040, 1358966040, 1358966040, 1358966100, 1358966520, 1358966520, 1358966640, 1358966640, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358967000, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1358970720, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120240, 1359120420, 1359194820, 1359194820, 1359194820, 1359194820, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359223080, 1359224100), class = c("POSIXct", "POSIXt"), tzone = ""), Operasyon = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 3L, 3L, 3L, 3L, 
3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 3L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 1L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), .Label = c("installed", "removed", "upgraded"), class = "factor"), Paket = structure(c(381L, 563L, 126L, 406L, 506L, 23L, 31L, 108L, 56L, 654L, 50L, 257L, 330L, 429L, 428L, 17L, 3L, 130L, 227L, 49L, 517L, 51L, 256L, 250L, 530L, 573L, 77L, 179L, 86L, 57L, 437L, 247L, 125L, 204L, 209L, 177L, 652L, 547L, 61L, 466L, 52L, 62L, 64L, 70L, 84L, 87L, 104L, 116L, 143L, 170L, 111L, 438L, 420L, 315L, 271L, 207L, 210L, 172L, 185L, 190L, 189L, 545L, 191L, 198L, 217L, 379L, 382L, 400L, 218L, 399L, 380L, 383L, 388L, 554L, 144L, 291L, 391L, 392L, 394L, 405L, 409L, 528L, 32L, 324L, 53L, 472L, 270L, 220L, 455L, 65L, 135L, 139L, 427L, 8L, 426L, 435L, 436L, 285L, 336L, 288L, 469L, 470L, 471L, 511L, 549L, 548L, 551L, 572L, 575L, 586L, 612L, 389L, 20L, 21L, 24L, 25L, 80L, 91L, 402L, 282L, 193L, 40L, 468L, 106L, 273L, 331L, 390L, 433L, 457L, 546L, 192L, 588L, 587L, 649L, 354L, 97L, 248L, 94L, 359L, 289L, 237L, 456L, 348L, 592L, 350L, 205L, 347L, 363L, 642L, 616L, 632L, 622L, 621L, 618L, 627L, 626L, 619L, 620L, 93L, 623L, 629L, 404L, 599L, 628L, 598L, 355L, 261L, 320L, 373L, 366L, 634L, 640L, 624L, 631L, 637L, 593L, 635L, 602L, 377L, 638L, 639L, 512L, 369L, 503L, 368L, 644L, 187L, 361L, 615L, 362L, 641L, 643L, 393L, 645L, 646L, 647L, 88L, 358L, 352L, 648L, 630L, 55L, 353L, 253L, 251L, 188L, 576L, 375L, 376L, 601L, 397L, 508L, 374L, 600L, 578L, 140L, 577L, 633L, 367L, 349L, 360L, 636L, 625L, 591L, 337L, 650L, 
561L, 14L, 293L, 34L, 234L, 326L, 181L, 142L, 171L, 430L, 48L, 351L, 365L, 533L, 325L, 410L, 424L, 136L, 268L, 328L, 233L, 22L, 232L, 112L, 159L, 160L, 173L, 357L, 58L, 610L, 542L, 356L, 76L, 103L, 161L, 214L, 260L, 412L, 201L, 460L, 531L, 541L, 47L, 13L, 59L, 12L, 162L, 163L, 60L, 246L, 286L, 555L, 556L, 562L, 603L, 370L, 343L, 605L, 155L, 423L, 156L, 4L, 36L, 340L, 287L, 341L, 327L, 157L, 333L, 604L, 458L, 459L, 482L, 292L, 571L, 266L, 570L, 529L, 432L, 222L, 385L, 564L, 606L, 607L, 194L, 364L, 450L, 451L, 180L, 133L, 132L, 158L, 608L, 582L, 581L, 609L, 611L, 613L, 614L, 413L, 597L, 245L, 520L, 372L, 39L, 544L, 443L, 120L, 63L, 89L, 321L, 314L, 6L, 401L, 90L, 577L, 378L, 516L, 386L, 169L, 552L, 557L, 166L, 145L, 122L, 122L, 145L, 166L, 278L, 500L, 501L, 534L, 496L, 480L, 485L, 478L, 495L, 371L, 67L, 68L, 10L, 513L, 452L, 448L, 617L, 486L, 492L, 490L, 494L, 487L, 497L, 121L, 203L, 85L, 568L, 653L, 425L, 9L, 566L, 408L, 590L, 102L, 228L, 229L, 230L, 294L, 145L, 124L, 322L, 255L, 323L, 565L, 316L, 168L, 544L, 479L, 481L, 243L, 66L, 215L, 419L, 596L, 95L, 311L, 176L, 407L, 504L, 505L, 509L, 178L, 30L, 41L, 263L, 422L, 302L, 175L, 300L, 384L, 301L, 303L, 304L, 305L, 75L, 543L, 16L, 567L, 510L, 267L, 579L, 536L, 141L, 532L, 499L, 235L, 462L, 11L, 174L, 72L, 119L, 96L, 128L, 196L, 182L, 416L, 1L, 239L, 236L, 521L, 523L, 241L, 240L, 269L, 329L, 387L, 242L, 275L, 656L, 550L, 283L, 221L, 202L, 295L, 147L, 211L, 249L, 208L, 244L, 338L, 342L, 415L, 417L, 515L, 519L, 540L, 574L, 589L, 651L, 82L, 79L, 334L, 317L, 274L, 284L, 279L, 507L, 71L, 98L, 219L, 594L, 580L, 454L, 298L, 453L, 395L, 206L, 306L, 307L, 467L, 308L, 33L, 200L, 199L, 197L, 309L, 310L, 345L, 346L, 312L, 262L, 184L, 118L, 81L, 280L, 259L, 19L, 18L, 476L, 489L, 252L, 484L, 477L, 493L, 28L, 335L, 418L, 272L, 183L, 414L, 461L, 553L, 537L, 225L, 411L, 27L, 166L, 483L, 491L, 138L, 290L, 475L, 488L, 113L, 115L, 114L, 332L, 29L, 498L, 105L, 146L, 319L, 127L, 35L, 431L, 15L, 164L, 167L, 213L, 464L, 463L, 465L, 186L, 
117L, 344L, 258L, 165L, 231L, 313L, 134L, 137L, 516L, 131L, 74L, 73L, 34L, 204L, 380L, 501L, 601L, 595L, 110L, 154L, 151L, 150L, 195L, 583L, 2L, 238L, 224L, 299L, 223L, 265L, 152L, 129L, 297L, 296L, 559L, 46L, 42L, 43L, 44L, 398L, 38L, 277L, 403L, 281L, 78L, 37L, 538L, 539L, 254L, 421L, 149L, 558L, 318L, 153L, 148L, 441L, 444L, 440L, 439L, 449L, 525L, 527L, 535L, 276L, 524L, 522L, 526L, 447L, 446L, 442L, 445L, 101L, 92L, 69L, 123L, 100L, 99L, 396L, 92L, 655L, 54L, 26L, 212L, 502L, 107L, 45L, 7L, 569L, 264L, 585L, 32L, 560L, 181L, 30L, 29L, 34L, 39L, 171L, 183L, 186L, 302L, 300L, 301L, 303L, 304L, 305L, 306L, 307L, 308L, 309L, 310L, 312L, 380L, 414L, 500L, 504L, 514L, 83L, 584L, 518L, 473L, 109L, 5L, 474L, 226L, 434L, 474L, 473L, 5L, 83L, 514L, 518L, 584L, 216L, 253L, 251L, 188L, 208L, 249L, 244L, 397L, 32L, 135L, 139L, 332L, 443L, 452L, 629L, 628L, 648L, 502L, 43L, 44L, 47L, 544L, 514L, 83L, 584L, 518L, 473L, 5L, 474L, 339L), .Label = c("a52dec", "aalib", "acl", "alsa-lib", "alsa-plugins", "alsa-utils", "apache-ant", "archlinux-keyring", "arj", "asciidoc", "aspell", "at-spi2-atk", "at-spi2-core", "atk", "atkmm", "attica", "attr", "audacious", "audacious-plugins", "autoconf", "automake", "avahi", "bash", "binutils", "bison", "blas", "blueman", "bluez", "boost", "boost-libs", "bzip2", "ca-certificates", "ca-certificates-java", "cairo", "cairomm", "cdparanoia", "celt", "chromaprint", "chromium", "cloog", "clucene", "clutter", "clutter-gst", "clutter-gtk", "cmake", "cogl", "colord", "compositeproto", "coreutils", "cracklib", "cronie", "cryptsetup", "curl", "curlpaste", "damageproto", "db", "dbus", "dbus-glib", "dconf", "desktop-file-utils", "device-mapper", "dhcpcd", "dialog", "diffutils", "dirmngr", "dnssec-anchors", "docbook-xml", "docbook-xsl", "doukutsu", "e2fsprogs", "enca", "enchant", "eog", "exempi", "exiv2", "exo", "expat", "faac", "faad2", "fakeroot", "feh", "ffmpeg", "fftw", "file", "file-roller", "filesystem", "findutils", "fixesproto", "flac", 
"flashplugin", "flex", "fltk", "fontconfig", "fontsproto", "foxitreader", "freeglut", "freetype2", "fribidi", "frogatto", "frogatto-data", "frozen-bubble", "fuse", "garcon", "gawk", "gc", "gcc", "gcc-fortran", "gcc-libs", "gconf", "gdb", "gdbm", "gdk-pixbuf2", "gedit", "geoip", "geoip-database", "gettext", "ghex", "giblib", "giflib", "git", "git-cola", "gitg", "glew", "glib-networking", "glib2", "glibc", "glibmm", "glu", "gmime", "gmp", "gnome-desktop", "gnome-icon-theme", "gnome-icon-theme-symbolic", "gnome-system-monitor", "gnupg", "gnutls", "go", "gobject-introspection", "gpgme", "gpm", "grantlee", "graphite", "grep", "groff", "gsettings-desktop-schemas", "gsl", "gsm", "gst-libav", "gst-plugins-bad", "gst-plugins-base", "gst-plugins-base-libs", "gst-plugins-good", "gst-plugins-ugly", "gstreamer", "gstreamer0.10", "gstreamer0.10-base", "gstreamer0.10-base-plugins", "gtk-engines", "gtk-update-icon-cache", "gtk2", "gtk2-xfce-engine", "gtk3", "gtk3-xfce-engine", "gtkmm", "gtkmm3", "gtksourceview3", "gtkspell", "gvfs", "gvim", "gzip", "harfbuzz", "heirloom-mailx", "hicolor-icon-theme", "hspell", "hsqldb-java", "hunspell", "hwids", "hyphen", "iana-etc", "icon-naming-utils", "icu", "ilmbase", "imagemagick", "imlib2", "inetutils", "inkscape", "inputproto", "intel-dri", "iproute2", "iptables", "iputils", "ipw2200-fw", "isl", "iso-codes", "jack", "jasper", "jdk7-openjdk", "jfsutils", "jre7-openjdk", "jre7-openjdk-headless", "js", "json-c", "json-glib", "kbd", "kbproto", "kdelibs", "keyutils", "khrplatform-devel", "kmod", "krb5", "lame", "lapack", "lcms", "lcms2", "ldns", "leafpad", "less", "libarchive", "libass", "libassuan", "libasyncns", "libatasmart", "libavc1394", "libcaca", "libcanberra", "libcanberra-pulse", "libcap", "libcddb", "libcdio", "libcdio-paranoia", "libcroco", "libcups", "libdaemon", "libdatrie", "libdbusmenu-qt", "libdca", "libdrm", "libdv", "libdvbpsi", "libdvdnav", "libdvdread", "libebml", "libedit", "libegl", "libevent", "libexif", "libffi", 
"libfontenc", "libgbm", "libgcrypt", "libgl", "libglade", "libglapi", "libgme", "libgnome-keyring", "libgpg-error", "libgssglue", "libgtop", "libguess", "libgusb", "libice", "libid3tag", "libidl2", "libidn", "libiec61883", "libimobiledevice", "libiodbc", "libjpeg-turbo", "libkate", "libksba", "libldap", "liblqr", "libltdl", "libmad", "libmatroska", "libmikmod", "libmms", "libmng", "libmodplug", "libmowgli", "libmp4v2", "libmpc", "libmpcdec", "libmpeg2", "libnl", "libnotify", "libogg", "libpcap", "libpciaccess", "libpeas", "libpipeline", "libplist", "libpng", "libproxy", "libpulse", "libquvi", "libquvi-scripts", "libqzeitgeist", "libraw1394", "libreoffice-base", "libreoffice-calc", "libreoffice-common", "libreoffice-draw", "libreoffice-gnome", "libreoffice-impress", "libreoffice-kde4", "libreoffice-math", "libreoffice-postgresql-connector", "libreoffice-sdk", "libreoffice-sdk-doc", "libreoffice-tr", "libreoffice-writer", "librsvg", "libsamplerate", "libsasl", "libsecret", "libshout", "libsidplay", "libsigc++", "libsm", "libsndfile", "libsoup", "libsoup-gnome", "libssh2", "libtasn1", "libthai", "libtheora", "libtiff", "libtiger", "libtirpc", "libtool", "libtorrent-rasterbar", "libunique", "libupnp", "libusb-compat", "libusbx", "libutempter", "libva", "libva-driver-intel-g45-h264", "libvisual", "libvorbis", "libvpx", "libwnck", "libwnck3", "libwpd", "libwps", "libx11", "libxau", "libxaw", "libxcb", "libxcomposite", "libxcursor", "libxdamage", "libxdmcp", "libxext", "libxfce4ui", "libxfce4util", "libxfixes", "libxfont", "libxft", "libxi", "libxinerama", "libxkbfile", "libxklavier", "libxml2", "libxmu", "libxpm", "libxrandr", "libxrender", "libxres", "libxslt", "libxss", "libxt", "libxtst", "libxv", "libxvmc", "libxxf86vm", "libyaml", "licenses", "linux", "linux-api-headers", "linux-firmware", "logrotate", "lpsolve", "lsof", "lua", "lua51", "lvm2", "m4", "make", "man-db", "man-pages", "mcpp", "mdadm", "media-player-info", "mercurial", "mesa", "mjpegtools", "mkinitcpio", 
"mkinitcpio-busybox", "mozilla-common", "mpfr", "mpg123", "mtdev", "nano", "ncurses", "neon", "net-tools", "netcfg", "nettle", "notification-daemon", "nspr", "nss", "obex-data-server", "opencore-amr", "openexr", "openjpeg", "openobex", "openssh", "openssl", "opus", "orbit2", "orc", "p11-kit", "p7zip", "pacman", "pacman-mirrorlist", "pam", "pambase", "pango", "pangomm", "parted", "patch", "pavucontrol", "pciutils", "pcmciautils", "pcre", "perl", "perl-alien-sdl", "perl-capture-tiny", "perl-class-inspector", "perl-compress-bzip2", "perl-error", "perl-file-sharedir", "perl-file-which", "perl-ipc-system-simple", "perl-sdl", "perl-test-pod", "perl-tie-simple", "perl-xml-parser", "perl-xml-simple", "perl-yaml-syck", "phonon", "phonon-vlc", "pinentry", "pixman", "pkg-config", "pm-quirks", "pm-utils", "polkit", "polkit-gnome", "polkit-qt", "poppler", "poppler-data", "poppler-glib", "popt", "postgresql-libs", "ppl", "ppp", "procps-ng", "psmisc", "pth", "pulseaudio", "pulseaudio-alsa", "pygobject-devel", "pygobject2-devel", "pygtk", "pyqt-common", "python", "python-dbus-common", "python-pyinotify", "python2", "python2-beaker", "python2-cairo", "python2-dbus", "python2-distribute", "python2-docutils", "python2-gobject", "python2-gobject2", "python2-jinja", "python2-mako", "python2-markupsafe", "python2-notify", "python2-pygments", "python2-pyqt", "python2-sip", "python2-sphinx", "qbittorrent", "qca", "qt", "qtwebkit", "r", "randrproto", "raptor", "rasqal", "readline", "recode", "recordproto", "redland", "redland-storage-virtuoso", "reiserfsprogs", "renderproto", "rsync", "rtkit", "rtmpdump", "ruby", "run-parts", "sbc", "schroedinger", "scrnsaverproto", "sdl", "sdl_gfx", "sdl_image", "sdl_mixer", "sdl_net", "sdl_pango", "sdl_ttf", "sed", "sg3_utils", "shadow", "shared-color-profiles", "shared-desktop-ontologies", "shared-mime-info", "sip", "smpeg", "soprano", "sound-theme-freedesktop", "soundtouch", "spandsp", "speex", "sqlite", "startup-notification", "strigi", "sudo", 
"sysfsutils", "syslinux", "systemd", "systemd-sysvcompat", "sysvinit-tools", "taglib", "tar", "tcl", "tdb", "texinfo", "thunar", "thunar-volman", "tk", "totem", "totem-plparser", "traceroute", "ttf-dejavu", "tumbler", "tzdata", "udisks", "udisks2", "unace", "unixodbc", "unrar", "unzip", "upower", "usbmuxd", "usbutils", "util-linux", "v4l-utils", "vi", "videoproto", "vim", "vim-runtime", "virtuoso-base", "vlc", "vte", "vte-common", "wavpack", "webrtc-audio-processing", "wget", "which", "wpa_actiond", "wpa_supplicant", "x264", "xampp", "xbitmaps", "xcb-proto", "xcb-util", "xcb-util-keysyms", "xchat", "xclip", "xdg-utils", "xextproto", "xf86-input-evdev", "xf86-input-synaptics", "xf86-video-intel", "xf86vidmodeproto", "xfce4-appfinder", "xfce4-mixer", "xfce4-panel", "xfce4-power-manager", "xfce4-session", "xfce4-settings", "xfce4-terminal", "xfconf", "xfdesktop", "xfsprogs", "xfwm4", "xfwm4-themes", "xineramaproto", "xkeyboard-config", "xmlto", "xorg-bdftopcf", "xorg-font-util", "xorg-font-utils", "xorg-fonts-alias", "xorg-fonts-encodings", "xorg-fonts-misc", "xorg-iceauth", "xorg-luit", "xorg-mkfontdir", "xorg-mkfontscale", "xorg-server", "xorg-server-common", "xorg-server-utils", "xorg-sessreg", "xorg-setxkbmap", "xorg-twm", "xorg-xauth", "xorg-xbacklight", "xorg-xclock", "xorg-xcmsdb", "xorg-xgamma", "xorg-xhost", "xorg-xinit", "xorg-xinput", "xorg-xkbcomp", "xorg-xmodmap", "xorg-xrandr", "xorg-xrdb", "xorg-xrefresh", "xorg-xset", "xorg-xsetroot", "xproto", "xterm", "xvidcore", "xz", "zip", "zlib", "zsh", "zvbi"), class = "factor")), .Names = c("Zaman", "Operasyon", "Paket"), row.names = c(NA, -730L), class = "data.frame") ```
2013/01/27
[ "https://Stackoverflow.com/questions/14549433", "https://Stackoverflow.com", "https://Stackoverflow.com/users/886669/" ]
I'm partial to data.table for these tasks: ``` library(data.table) dt <- as.data.table(df) dt[,days:=format(Zaman,"%d.%m.%Y")] dt[,.N,by=days] days N 1: 18.01.2013 133 2: 19.01.2013 415 3: 20.01.2013 20 <snip> ```
Perhaps you are simply looking for the `length` function? ``` > aggregate(data,by=list(data$zaman),FUN=length) Group.1 zaman oper paket 1 2013-01-18 21:39:00 6 6 6 ```
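If the same per-day count is ever needed outside R, here is a minimal plain-Python sketch of the equivalent grouping (the timestamps are hypothetical stand-ins for the `Zaman` column, not taken from the data above):

```python
from collections import Counter
from datetime import datetime

# Hypothetical timestamps standing in for the Zaman column (not the real data)
zaman = [
    "2013-01-18 21:39:00",
    "2013-01-19 09:15:00",
    "2013-01-19 10:02:00",
]

# Group by day using the same "%d.%m.%Y" format the data.table answer uses
per_day = Counter(
    datetime.strptime(t, "%Y-%m-%d %H:%M:%S").strftime("%d.%m.%Y") for t in zaman
)
print(per_day["19.01.2013"])  # → 2
```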
22,325,245
I have an example script where I am searching for `[Data]` in a line. The odd thing is that it always matches when reading the file with `csv.reader`. See code below. Any ideas? ``` #!/opt/Python-2.7.3/bin/python import csv import re import os content = '''# foo [Header],, foo bar,blah, [Settings] Yadda,yadda [Data],, Alfa,Beta,Gamma,Delta,Epsilon One,Two,Tree,Four,Five ''' f1 = open("/tmp/file", "w") print >>f1, content f1.close() f = open("/tmp/file", "r") reader = csv.reader(f) found_csv = 0 mycsv = [] for l in reader: line = str(l) if line == '[]': continue if re.search("[Data]", line): print line found_csv = 1 if found_csv: mycsv.append(l) ``` Prints: ``` ['[Header]', '', ''] ['foo bar', 'blah', ''] ['[Settings]'] ['Yadda', 'yadda'] ['[Data]', '', ''] ['Alfa', 'Beta', 'Gamma', 'Delta', 'Epsilon'] ```
2014/03/11
[ "https://Stackoverflow.com/questions/22325245", "https://Stackoverflow.com", "https://Stackoverflow.com/users/719016/" ]
If you want to find a line containing the literal string `[Data]` (with brackets), you should add `\` before the brackets in the pattern: ``` #if re.search("[Data]", line): if re.search(r"\[Data\]", line): ``` The pattern `[Data]` without backslashes is a character class: it matches any line containing any single character from the set (`D`, `a`, or `t`).
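A small standalone demo of the difference (the sample lines are made up, echoing the question's output): the unescaped pattern is a character class and matches any line containing `D`, `a`, or `t`, while the escaped pattern matches only the literal `[Data]`.

```python
import re

lines = ["['[Header]', '', '']", "['foo bar', 'blah', '']", "['[Data]', '', '']"]

# Character class [Data]: matches any line containing D, a, or t anywhere
unescaped = [l for l in lines if re.search("[Data]", l)]
# Escaped \[Data\]: matches only the literal bracketed text
escaped = [l for l in lines if re.search(r"\[Data\]", l)]

print(len(unescaped))  # all three lines contain D, a, or t
print(escaped)         # only the [Data] line survives
```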
Edit: The error in the original code is that you set `found_csv` the first time it finds data, and then never reset it to 0. That, or you can simply remove the need for it entirely: ``` mycsv = [] for l in reader: line = str(l) if line == '[]': continue elif "[Data]" in line: print line mycsv.append(l) ``` -- Original answer -- re is treating `"[Data]"` as a regular expression. You actually don't even need re at all in this case and can simply change it to: ``` if "[Data]" in line: ``` However, if you are going to do more advanced regular-expression-based searching, just make sure to [format the patterns properly](https://pythex.org/): ``` if re.search(r'\[Data\]', line): ```
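Related: when the literal text to match comes from a variable rather than being hard-coded, `re.escape` produces the properly escaped pattern automatically (a standalone sketch, not part of the answer above):

```python
import re

needle = "[Data]"
pattern = re.escape(needle)  # escapes the regex metacharacters [ and ]

# The escaped pattern matches only the literal bracketed text
assert re.search(pattern, "['[Data]', '', '']")
assert not re.search(pattern, "['[Header]', '', '']")
```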
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
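A minimal sketch of the index-wise comparison the question asks for: `zip` pairs values at identical indices and stops at the shorter tuple, so unequal lengths are handled without index errors. The helper name `mismatches` is an invention for illustration:

```python
d1 = {'a': (1, 2, 3, 4, 66, 6, 6, 64), 'b': (3, 2, 5, 3, 2, 1, 1, 1)}
d2 = {'a': (1, 2, 4, 3, 66, 6, 6, 64), 'b': (1, 8, 5, 3, 2, 1, 22, 9)}

def mismatches(a, b):
    # For each shared key, collect (index, value_in_a, value_in_b) where they differ
    out = {}
    for key in set(a) & set(b):
        diffs = [(i, x, y) for i, (x, y) in enumerate(zip(a[key], b[key])) if x != y]
        if diffs:
            out[key] = diffs
    return out

print(mismatches(d1, d2)['a'])  # → [(2, 3, 4), (3, 4, 3)]
```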
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without a GUI. This way you can also easily run a VM on a Windows server, like a service. The prerequisite is an existing VM; below, put its name in place of `{vm_name}`. --- **1) First we use the built-in executable "VBoxHeadless.exe".** Create a file ``` vm.run.bat ``` containing ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxHeadless.exe -s {vm_name} -v on ``` Run and test it from the Windows "[Command Line Interface (CLI)](http://en.wikipedia.org/wiki/Command-line_interface)" ("[Command shell](http://technet.microsoft.com/en-us/library/bb490954.aspx)") - the VM will start running in the background. ``` vm.run.bat ``` --- **2) Then we use the "[Windows-based script host (WSCRIPT)](http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/wsh_runfromwindowsbasedhost.mspx?mfr=true)"** and the "[Microsoft Visual Basic Script (VBS)](http://en.wikipedia.org/wiki/VBScript)" language to run the file "vm.run.bat" above. Create a file ``` vm.run.vbs ``` with the code ``` Set WshShell = WScript.CreateObject("WScript.Shell") obj = WshShell.Run("vm.run.bat", 0) set WshShell = Nothing ``` Run and test it - the CLI will run in the background. ``` wscript.exe vm.run.vbs ``` --- **Ref** * Thanks to [iain](http://web.archive.org/web/20150407100735/http://www.techques.com/question/2-188105/Virtualbox-Start-VM-Headless-on-Windows)
If you do not mind operating the application manually once, ending with the OS running in the background, here are the steps: open VirtualBox, right-click your guest OS and choose Start Headless, wait a while until the OS boots, then close the VirtualBox application.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
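If you need to trigger this from a script rather than a shell, the same command can be wrapped in Python via `subprocess` (a sketch: the VBoxManage path is the default install location and the VM name is a placeholder, both assumptions to adjust for your setup):

```python
import subprocess

# Default VirtualBox install path on Windows (an assumption; adjust if needed)
VBOXMANAGE = r"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"

def headless_start_cmd(vm_name):
    # argv for: VBoxManage startvm "<vm_name>" --type headless
    return [VBOXMANAGE, "startvm", vm_name, "--type", "headless"]

def start_headless(vm_name):
    # Raises CalledProcessError if VBoxManage reports a failure
    return subprocess.run(headless_start_cmd(vm_name), check=True)
```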
Following Bruno Garett's answer: in my experience, running the `vm.run.bat` file directly fails with a read-only error, but it works fine when launched through the VBS script. Just to save anyone time. Also, to shut down headless you can use another batch script (Sam F's solution won't work with Bruno's solution): ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxManage controlvm "Ubuntu Server" acpipowerbutton ``` The second line was obtained from [here](https://askubuntu.com/questions/82015/shutting-down-ubuntu-server-running-in-headless-virtualbox); you can use whichever option you want.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
The most consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in Task Scheduler ### 1. Identify your virtual machine name Navigate to `C:\Users\YourUserNameHere\VirtualBox VMs` [![VirtualBox VMs Folder](https://i.stack.imgur.com/a0PYQ.png)](https://i.stack.imgur.com/a0PYQ.png) The folder name above generally reflects the virtual machine name. You can confirm this by checking VirtualBox Manager itself: [![VirtualBox GUI](https://i.stack.imgur.com/EkXhp.png)](https://i.stack.imgur.com/EkXhp.png) **The machine name is `WindowsXPSP3`.** ### 2. Create a task in Task Scheduler First click the start button and type "task scheduler" without the quotes. Then open the Task Scheduler: [![Task Scheduler Search](https://i.stack.imgur.com/rrH4X.png)](https://i.stack.imgur.com/rrH4X.png) Inside the task scheduler, we're going to see a structure tree on the left side. Right-click on `Task Scheduler Library`. Left-click on `New Folder...`: [![Task Scheduler New Folder](https://i.stack.imgur.com/rKPdx.png)](https://i.stack.imgur.com/rKPdx.png) Name the folder something memorable, like `User Custom` and hit OK *(if you already have an existing folder that you would prefer to use, that's fine as well, skip to the next paragraph instead)*: [![Name New Folder](https://i.stack.imgur.com/7yJTD.png)](https://i.stack.imgur.com/7yJTD.png) Click your newly created folder, in my case `User Custom`, to highlight it. Right-click in the empty list to the right and Left-click on `Create New Task...`: [![Create New Task](https://i.stack.imgur.com/6iRrZ.png)](https://i.stack.imgur.com/6iRrZ.png) Now comes the tricky stuff. Follow my instructions verbatim.
**If you feel like downvoting because it didn't work, or say "this didn't work for me" in the comments, I'm betting you skipped a step here.** Come back and try it again. The `Name` and `Description` can be whatever you like, it is merely aesthetic and will not affect functionality. I'm going to name mine after my virtual machine and put a brief description. What IS important is that you choose `Run whether user is logged on or not` and `Run with highest privileges`: [![Create Task: General](https://i.stack.imgur.com/CN3cx.png)](https://i.stack.imgur.com/CN3cx.png) Switch to the `Triggers` tab at the top and Left-click `New...`. Switch the `Begin the task:` combination box to `At Startup` and then Left-click OK: [![New Trigger](https://i.stack.imgur.com/oZi17.png)](https://i.stack.imgur.com/oZi17.png) Switch to the `Actions` tab at the top and Left-click `New...`. Click browse (do **not** try to type this manually, you will cause yourself headaches) and navigate to `C:\Program Files\Oracle\VirtualBox`. Highlight `VBoxManage.exe` and Left-click `Open`: [![Browse to VBoxManage](https://i.stack.imgur.com/PpZz5.png)](https://i.stack.imgur.com/PpZz5.png) Copy everything **except** the executable and the quotation marks from `Program/script:` into `Start in (optional):`: [![Copy Directory Path](https://i.stack.imgur.com/rPnzq.png)](https://i.stack.imgur.com/rPnzq.png) Finally, put the following line into `Add arguments (optional):` and hit OK: `startvm "YourVirtualMachineNameFromStep1" --type headless` in my case, I will use: `startvm "WindowsXPSP3" --type headless` [![Enter Arguments](https://i.stack.imgur.com/CXWyE.png)](https://i.stack.imgur.com/CXWyE.png) My `Conditions` tab is generally set to the following: [![Conditions Tab](https://i.stack.imgur.com/C7rqq.png)](https://i.stack.imgur.com/C7rqq.png) Make sure your `Settings` tab looks like the following, but **absolutely ensure** you have set the items marked in yellow to match mine. 
This will make sure that if some prerequisite wasn't ready yet, the task will retry a few times to start the virtual machine, and that the virtual machine won't be terminated after 3 days. I would leave everything else as default unless you know what you are doing. If you don't do what I show you here, and it ends up not working, it's your problem: [![Settings Tab](https://i.stack.imgur.com/vDzK3.png)](https://i.stack.imgur.com/vDzK3.png) Finally, hit OK at the bottom of the `Create Task` window. **You are done!** Testing the solution ==================== ### Testing My Fake Scenario Above (and how you can test yours) When I restart my computer, I can log in and open the VirtualBox Manager and see that my VM is running: [![Running VM](https://i.stack.imgur.com/XXfZ6.png)](https://i.stack.imgur.com/XXfZ6.png) I can also open Task Scheduler back up, and verify that it ran successfully, or see what the error was if it did not (most errors will be directory errors from people trying to manually enter where I told them not to): [![Task Scheduler Success](https://i.stack.imgur.com/OTHMa.png)](https://i.stack.imgur.com/OTHMa.png) ### Testing My Actual Use Case On another machine, I set up my Linux Server as a virtual machine with its own raw solid-state hard drive. I wanted that Server to boot back up if the machine got restarted (crash, windows update, etc) automatically, without the user having to log in. I set that one up exactly as I described above and restarted that machine. I know it worked successfully because I was able to access my Samba share (in layman's terms: a folder with stuff in it that I share over my network to my other computers) from another computer **WITHOUT** having first logged into the machine that runs the Server VM. **This 100% confirms that it does start on system boot and not after the user logs in.**
Following Bruno Garett's answer: in my experience, running the `vm.run.bat` file directly fails with a read-only error, but it works fine when launched through the VBS script. Just to save anyone time. Also, to shut down headless you can use another batch script (Sam F's solution won't work with Bruno's solution): ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxManage controlvm "Ubuntu Server" acpipowerbutton ``` The second line was obtained from [here](https://askubuntu.com/questions/82015/shutting-down-ubuntu-server-running-in-headless-virtualbox); you can use whichever option you want.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
I used something similar to Samuel's solution, and it works great. On the desktop (or any folder), right click and go to New->Shortcut. In the target, type: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm {uuid} --type headless ``` In the name, type whatever you want and click Finish. Then to stop the same vm, create a new shortcut with the target being: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" controlvm {uuid} poweroff ``` Double clicking these starts and stops the VM without any window staying open.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without a GUI. This way you can also easily run a VM on a Windows server, like a service. The prerequisite is an existing VM; below, put its name in place of `{vm_name}`. --- **1) First we use the built-in executable "VBoxHeadless.exe".** Create a file ``` vm.run.bat ``` containing ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxHeadless.exe -s {vm_name} -v on ``` Run and test it from the Windows "[Command Line Interface (CLI)](http://en.wikipedia.org/wiki/Command-line_interface)" ("[Command shell](http://technet.microsoft.com/en-us/library/bb490954.aspx)") - the VM will start running in the background. ``` vm.run.bat ``` --- **2) Then we use the "[Windows-based script host (WSCRIPT)](http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/wsh_runfromwindowsbasedhost.mspx?mfr=true)"** and the "[Microsoft Visual Basic Script (VBS)](http://en.wikipedia.org/wiki/VBScript)" language to run the file "vm.run.bat" above. Create a file ``` vm.run.vbs ``` with the code ``` Set WshShell = WScript.CreateObject("WScript.Shell") obj = WshShell.Run("vm.run.bat", 0) set WshShell = Nothing ``` Run and test it - the CLI will run in the background. ``` wscript.exe vm.run.vbs ``` --- **Ref** * Thanks to [iain](http://web.archive.org/web/20150407100735/http://www.techques.com/question/2-188105/Virtualbox-Start-VM-Headless-on-Windows)
Following Bruno Garett's answer: in my experience, running the `vm.run.bat` file directly fails with a read-only error, but it works fine when launched through the VBS script. Just to save anyone time. Also, to shut down headless you can use another batch script (Sam F's solution won't work with Bruno's solution): ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxManage controlvm "Ubuntu Server" acpipowerbutton ``` The second line was obtained from [here](https://askubuntu.com/questions/82015/shutting-down-ubuntu-server-running-in-headless-virtualbox); you can use whichever option you want.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
The most consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in Task Scheduler ### 1. Identify your virtual machine name Navigate to `C:\Users\YourUserNameHere\VirtualBox VMs` [![VirtualBox VMs Folder](https://i.stack.imgur.com/a0PYQ.png)](https://i.stack.imgur.com/a0PYQ.png) The folder name above generally reflects the virtual machine name. You can confirm this by checking VirtualBox Manager itself: [![VirtualBox GUI](https://i.stack.imgur.com/EkXhp.png)](https://i.stack.imgur.com/EkXhp.png) **The machine name is `WindowsXPSP3`.** ### 2. Create a task in Task Scheduler First click the start button and type "task scheduler" without the quotes. Then open the Task Scheduler: [![Task Scheduler Search](https://i.stack.imgur.com/rrH4X.png)](https://i.stack.imgur.com/rrH4X.png) Inside the task scheduler, we're going to see a structure tree on the left side. Right-click on `Task Scheduler Library`. Left-click on `New Folder...`: [![Task Scheduler New Folder](https://i.stack.imgur.com/rKPdx.png)](https://i.stack.imgur.com/rKPdx.png) Name the folder something memorable, like `User Custom` and hit OK *(if you already have an existing folder that you would prefer to use, that's fine as well, skip to the next paragraph instead)*: [![Name New Folder](https://i.stack.imgur.com/7yJTD.png)](https://i.stack.imgur.com/7yJTD.png) Click your newly created folder, in my case `User Custom`, to highlight it. Right-click in the empty list to the right and Left-click on `Create New Task...`: [![Create New Task](https://i.stack.imgur.com/6iRrZ.png)](https://i.stack.imgur.com/6iRrZ.png) Now comes the tricky stuff. Follow my instructions verbatim.
**If you feel like downvoting because it didn't work, or say "this didn't work for me" in the comments, I'm betting you skipped a step here.** Come back and try it again. The `Name` and `Description` can be whatever you like, it is merely aesthetic and will not affect functionality. I'm going to name mine after my virtual machine and put a brief description. What IS important is that you choose `Run whether user is logged on or not` and `Run with highest privileges`: [![Create Task: General](https://i.stack.imgur.com/CN3cx.png)](https://i.stack.imgur.com/CN3cx.png) Switch to the `Triggers` tab at the top and Left-click `New...`. Switch the `Begin the task:` combination box to `At Startup` and then Left-click OK: [![New Trigger](https://i.stack.imgur.com/oZi17.png)](https://i.stack.imgur.com/oZi17.png) Switch to the `Actions` tab at the top and Left-click `New...`. Click browse (do **not** try to type this manually, you will cause yourself headaches) and navigate to `C:\Program Files\Oracle\VirtualBox`. Highlight `VBoxManage.exe` and Left-click `Open`: [![Browse to VBoxManage](https://i.stack.imgur.com/PpZz5.png)](https://i.stack.imgur.com/PpZz5.png) Copy everything **except** the executable and the quotation marks from `Program/script:` into `Start in (optional):`: [![Copy Directory Path](https://i.stack.imgur.com/rPnzq.png)](https://i.stack.imgur.com/rPnzq.png) Finally, put the following line into `Add arguments (optional):` and hit OK: `startvm "YourVirtualMachineNameFromStep1" --type headless` in my case, I will use: `startvm "WindowsXPSP3" --type headless` [![Enter Arguments](https://i.stack.imgur.com/CXWyE.png)](https://i.stack.imgur.com/CXWyE.png) My `Conditions` tab is generally set to the following: [![Conditions Tab](https://i.stack.imgur.com/C7rqq.png)](https://i.stack.imgur.com/C7rqq.png) Make sure your `Settings` tab looks like the following, but **absolutely ensure** you have set the items marked in yellow to match mine. 
This will make sure that if some pre-requisite wasn't ready yet that it will retry a few times to start the virtual machine and that the virtual machine won't be terminated after 3 days. I would leave everything else as default unless you know what you are doing. If you don't do what I show you here, and it ends up not working, it's your problem: [![Settings Tab](https://i.stack.imgur.com/vDzK3.png)](https://i.stack.imgur.com/vDzK3.png) Finally, hit OK at the bottom of the `Create Task` window. **You are done!** Testing the solution ==================== ### Testing My Fake Scenario Above (and how you can test yours) When I restart my computer, I can log in and open the VirtualBox Manager and see that my VM is running: [![Running VM](https://i.stack.imgur.com/XXfZ6.png)](https://i.stack.imgur.com/XXfZ6.png) I can also open Task Scheduler back up, and verify that it ran successfully, or see what the error was if it did not (most errors will be directory errors from people trying to manually enter where I told them not to): [![Task Scheduler Success](https://i.stack.imgur.com/OTHMa.png)](https://i.stack.imgur.com/OTHMa.png) ### Testing My Actual Use Case On another machine, I set up my Linux Server as a virtual machine with it's own raw solid-state hard drive. I wanted that Server to boot back up if the machine got restarted (crash, windows update, etc) automatically, without the user having to log in. I set that one up exactly as I described above and restarted that machine. I know it worked successfully because I was able to access my Samba share (laymens: a folder with stuff in it that I share over my network to my other computers) from another computer **WITHOUT** having first logged into the machine that runs the Server VM. **This 100% confirms that it does start on system boot and not after the user logs in.**
There is an easy manual option right in the GUI too: [![Screenshot from Virtualbox 5.2](https://i.stack.imgur.com/WRts7.png)](https://i.stack.imgur.com/WRts7.png) (Taken from Virtualbox 5.2)
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
If you do not mind operating the application once manually to end up with the OS running in the background, here are the steps: open VirtualBox, right-click on your guest OS and choose `Start Headless`. Wait a while until the OS boots, then close the VirtualBox application.
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
You can use VBoxManage to start a VM headless: ``` "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" startvm "Your VM name" --type headless ```
An alternative solution: <http://vboxvmservice.sourceforge.net/> It works perfectly for me!
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
The truly most-consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone from with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in Task Scheduler ### 1. Identify your virtual machine name Navigate to `C:\Users\YourUserNameHere\VirtualBox VMs` [![VirtualBox VMs Folder](https://i.stack.imgur.com/a0PYQ.png)](https://i.stack.imgur.com/a0PYQ.png) The folder name above generally reflects the virtual machine name. You can confirm this by checking VirtualBox Manager itself: [![VirtualBox GUI](https://i.stack.imgur.com/EkXhp.png)](https://i.stack.imgur.com/EkXhp.png) **The machine name is `WindowsXPSP3`.** ### 2. Create a task in Task Scheduler First click the start button and type "task scheduler" without the quotes. Then open the Task Scheduler: [![Task Scheduler Search](https://i.stack.imgur.com/rrH4X.png)](https://i.stack.imgur.com/rrH4X.png) Inside the task scheduler, we're going to see a structure tree on the left side. Right-click on `Task Scheduler Library`. Left-click on `New Folder...`: [![Task Scheduler New Folder](https://i.stack.imgur.com/rKPdx.png)](https://i.stack.imgur.com/rKPdx.png) Name the folder something memorable, like `User Custom` and hit OK *(if you already have an existing folder that you would prefer to use, that's fine as well, skip to the next paragraph instead)*: [![Name New Folder](https://i.stack.imgur.com/7yJTD.png)](https://i.stack.imgur.com/7yJTD.png) Click your newly created folder, in my case `User Custom`, to highlight it. Right-click in the empty list to the right and Left-click on `Create New Task...`: [![Create New Task](https://i.stack.imgur.com/6iRrZ.png)](https://i.stack.imgur.com/6iRrZ.png) Now comes the tricky stuff. Follow my instructions verbatim. 
**If you feel like downvoting because it didn't work, or say "this didn't work for me" in the comments, I'm betting you skipped a step here.** Come back and try it again. The `Name` and `Description` can be whatever you like, it is merely aesthetic and will not affect functionality. I'm going to name mine after my virtual machine and put a brief description. What IS important is that you choose `Run whether user is logged on or not` and `Run with highest privileges`: [![Create Task: General](https://i.stack.imgur.com/CN3cx.png)](https://i.stack.imgur.com/CN3cx.png) Switch to the `Triggers` tab at the top and Left-click `New...`. Switch the `Begin the task:` combination box to `At Startup` and then Left-click OK: [![New Trigger](https://i.stack.imgur.com/oZi17.png)](https://i.stack.imgur.com/oZi17.png) Switch to the `Actions` tab at the top and Left-click `New...`. Click browse (do **not** try to type this manually, you will cause yourself headaches) and navigate to `C:\Program Files\Oracle\VirtualBox`. Highlight `VBoxManage.exe` and Left-click `Open`: [![Browse to VBoxManage](https://i.stack.imgur.com/PpZz5.png)](https://i.stack.imgur.com/PpZz5.png) Copy everything **except** the executable and the quotation marks from `Program/script:` into `Start in (optional):`: [![Copy Directory Path](https://i.stack.imgur.com/rPnzq.png)](https://i.stack.imgur.com/rPnzq.png) Finally, put the following line into `Add arguments (optional):` and hit OK: `startvm "YourVirtualMachineNameFromStep1" --type headless` in my case, I will use: `startvm "WindowsXPSP3" --type headless` [![Enter Arguments](https://i.stack.imgur.com/CXWyE.png)](https://i.stack.imgur.com/CXWyE.png) My `Conditions` tab is generally set to the following: [![Conditions Tab](https://i.stack.imgur.com/C7rqq.png)](https://i.stack.imgur.com/C7rqq.png) Make sure your `Settings` tab looks like the following, but **absolutely ensure** you have set the items marked in yellow to match mine. 
This will make sure that if some pre-requisite wasn't ready yet that it will retry a few times to start the virtual machine and that the virtual machine won't be terminated after 3 days. I would leave everything else as default unless you know what you are doing. If you don't do what I show you here, and it ends up not working, it's your problem: [![Settings Tab](https://i.stack.imgur.com/vDzK3.png)](https://i.stack.imgur.com/vDzK3.png) Finally, hit OK at the bottom of the `Create Task` window. **You are done!** Testing the solution ==================== ### Testing My Fake Scenario Above (and how you can test yours) When I restart my computer, I can log in and open the VirtualBox Manager and see that my VM is running: [![Running VM](https://i.stack.imgur.com/XXfZ6.png)](https://i.stack.imgur.com/XXfZ6.png) I can also open Task Scheduler back up, and verify that it ran successfully, or see what the error was if it did not (most errors will be directory errors from people trying to manually enter where I told them not to): [![Task Scheduler Success](https://i.stack.imgur.com/OTHMa.png)](https://i.stack.imgur.com/OTHMa.png) ### Testing My Actual Use Case On another machine, I set up my Linux Server as a virtual machine with it's own raw solid-state hard drive. I wanted that Server to boot back up if the machine got restarted (crash, windows update, etc) automatically, without the user having to log in. I set that one up exactly as I described above and restarted that machine. I know it worked successfully because I was able to access my Samba share (laymens: a folder with stuff in it that I share over my network to my other computers) from another computer **WITHOUT** having first logged into the machine that runs the Server VM. **This 100% confirms that it does start on system boot and not after the user logs in.**
An alternative solution: <http://vboxvmservice.sourceforge.net/> It works perfectly for me!
19,017,840
I have two large python dictionaries in the following form, and I want to compare them and report their mismatches at **identical indices**, for the **same key**. The dictionaries have the same keys, but the tuples do not have equal lengths. ``` d1 = {'a':(1,2,3,4,66,6,6,64), 'b':(3,2,5,3,2,1,1,1)} d2 = {'a':(1,2,4,3,66,6,6,64), 'b':(1,8,5,3,2,1,22,9)} ``` For example, for key 'a', indices 2 and 3 have different values. As the dictionaries are large and the tuple lengths are not necessarily equal, my amateur looping method doesn't work. ``` for k1,v1 in dict1: for k2, v2 in dict2: if k1 == k2: for i in range(len(v1)): for j in range(len(v2)): if i==j: if v1[i] != v2[j]: print k1, v1[i] print k2, v2[i] ```
2013/09/26
[ "https://Stackoverflow.com/questions/19017840", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2621263/" ]
**SOLUTION** The trick is to run the VM without a GUI. With this you can easily run a VM on a Windows server like a service, too. The prerequisite is that a VM already exists; below, substitute its name for `{vm_name}`. --- **1) First we use the built-in executable "VBoxHeadless.exe".** Create a file ``` vm.run.bat ``` with ``` cd "c:\Program Files\Oracle\VirtualBox\" VBoxHeadless.exe -s {vm_name} -v on ``` Run and test it with the Windows "[Command Line Interface (CLI)](http://en.wikipedia.org/wiki/Command-line_interface)", called the "[Command shell](http://technet.microsoft.com/en-us/library/bb490954.aspx)", and the VM will run in the background. ``` vm.run.bat ``` --- **2) Then we use the "[Windows-based script host (WSCRIPT)](http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/wsh_runfromwindowsbasedhost.mspx?mfr=true)"** and the language "[Microsoft Visual Basic Script (VBS)](http://en.wikipedia.org/wiki/VBScript)" to run the above file "vm.run.bat" without a visible console window. Create a file ``` vm.run.vbs ``` and put in this code ``` Set WshShell = WScript.CreateObject("WScript.Shell") obj = WshShell.Run("vm.run.bat", 0) set WshShell = Nothing ``` Run and test it; the CLI will now run hidden in the background. ``` wscript.exe vm.run.vbs ``` --- **Ref** * Thanks to [iain](http://web.archive.org/web/20150407100735/http://www.techques.com/question/2-188105/Virtualbox-Start-VM-Headless-on-Windows)
The truly most-consistent option is to use Task Scheduler. Implementing the solution ========================= This requires a couple of pretty easy steps, but I will explain them in detail to ensure anyone from with any technical background can set this up: 1. Identify your virtual machine name 2. Create a task in Task Scheduler ### 1. Identify your virtual machine name Navigate to `C:\Users\YourUserNameHere\VirtualBox VMs` [![VirtualBox VMs Folder](https://i.stack.imgur.com/a0PYQ.png)](https://i.stack.imgur.com/a0PYQ.png) The folder name above generally reflects the virtual machine name. You can confirm this by checking VirtualBox Manager itself: [![VirtualBox GUI](https://i.stack.imgur.com/EkXhp.png)](https://i.stack.imgur.com/EkXhp.png) **The machine name is `WindowsXPSP3`.** ### 2. Create a task in Task Scheduler First click the start button and type "task scheduler" without the quotes. Then open the Task Scheduler: [![Task Scheduler Search](https://i.stack.imgur.com/rrH4X.png)](https://i.stack.imgur.com/rrH4X.png) Inside the task scheduler, we're going to see a structure tree on the left side. Right-click on `Task Scheduler Library`. Left-click on `New Folder...`: [![Task Scheduler New Folder](https://i.stack.imgur.com/rKPdx.png)](https://i.stack.imgur.com/rKPdx.png) Name the folder something memorable, like `User Custom` and hit OK *(if you already have an existing folder that you would prefer to use, that's fine as well, skip to the next paragraph instead)*: [![Name New Folder](https://i.stack.imgur.com/7yJTD.png)](https://i.stack.imgur.com/7yJTD.png) Click your newly created folder, in my case `User Custom`, to highlight it. Right-click in the empty list to the right and Left-click on `Create New Task...`: [![Create New Task](https://i.stack.imgur.com/6iRrZ.png)](https://i.stack.imgur.com/6iRrZ.png) Now comes the tricky stuff. Follow my instructions verbatim. 
**If you feel like downvoting because it didn't work, or say "this didn't work for me" in the comments, I'm betting you skipped a step here.** Come back and try it again. The `Name` and `Description` can be whatever you like, it is merely aesthetic and will not affect functionality. I'm going to name mine after my virtual machine and put a brief description. What IS important is that you choose `Run whether user is logged on or not` and `Run with highest privileges`: [![Create Task: General](https://i.stack.imgur.com/CN3cx.png)](https://i.stack.imgur.com/CN3cx.png) Switch to the `Triggers` tab at the top and Left-click `New...`. Switch the `Begin the task:` combination box to `At Startup` and then Left-click OK: [![New Trigger](https://i.stack.imgur.com/oZi17.png)](https://i.stack.imgur.com/oZi17.png) Switch to the `Actions` tab at the top and Left-click `New...`. Click browse (do **not** try to type this manually, you will cause yourself headaches) and navigate to `C:\Program Files\Oracle\VirtualBox`. Highlight `VBoxManage.exe` and Left-click `Open`: [![Browse to VBoxManage](https://i.stack.imgur.com/PpZz5.png)](https://i.stack.imgur.com/PpZz5.png) Copy everything **except** the executable and the quotation marks from `Program/script:` into `Start in (optional):`: [![Copy Directory Path](https://i.stack.imgur.com/rPnzq.png)](https://i.stack.imgur.com/rPnzq.png) Finally, put the following line into `Add arguments (optional):` and hit OK: `startvm "YourVirtualMachineNameFromStep1" --type headless` in my case, I will use: `startvm "WindowsXPSP3" --type headless` [![Enter Arguments](https://i.stack.imgur.com/CXWyE.png)](https://i.stack.imgur.com/CXWyE.png) My `Conditions` tab is generally set to the following: [![Conditions Tab](https://i.stack.imgur.com/C7rqq.png)](https://i.stack.imgur.com/C7rqq.png) Make sure your `Settings` tab looks like the following, but **absolutely ensure** you have set the items marked in yellow to match mine. 
This will make sure that if some pre-requisite wasn't ready yet that it will retry a few times to start the virtual machine and that the virtual machine won't be terminated after 3 days. I would leave everything else as default unless you know what you are doing. If you don't do what I show you here, and it ends up not working, it's your problem: [![Settings Tab](https://i.stack.imgur.com/vDzK3.png)](https://i.stack.imgur.com/vDzK3.png) Finally, hit OK at the bottom of the `Create Task` window. **You are done!** Testing the solution ==================== ### Testing My Fake Scenario Above (and how you can test yours) When I restart my computer, I can log in and open the VirtualBox Manager and see that my VM is running: [![Running VM](https://i.stack.imgur.com/XXfZ6.png)](https://i.stack.imgur.com/XXfZ6.png) I can also open Task Scheduler back up, and verify that it ran successfully, or see what the error was if it did not (most errors will be directory errors from people trying to manually enter where I told them not to): [![Task Scheduler Success](https://i.stack.imgur.com/OTHMa.png)](https://i.stack.imgur.com/OTHMa.png) ### Testing My Actual Use Case On another machine, I set up my Linux Server as a virtual machine with it's own raw solid-state hard drive. I wanted that Server to boot back up if the machine got restarted (crash, windows update, etc) automatically, without the user having to log in. I set that one up exactly as I described above and restarted that machine. I know it worked successfully because I was able to access my Samba share (laymens: a folder with stuff in it that I share over my network to my other computers) from another computer **WITHOUT** having first logged into the machine that runs the Server VM. **This 100% confirms that it does start on system boot and not after the user logs in.**
72,751,658
I have written a python program for printing a diamond. It is working properly except that it is printing an extra kite after printing a diamond. May someone please help me to remove this bug? I can't find it and please give a fix from this code please. CODE: ``` limitRows = int(input("Enter the maximum number of rows: ")) maxRows = limitRows + (limitRows - 1) while currentRow <= limitRows: spaces = limitRows - currentRow while spaces > 0: print(" ", end='') spaces -= 1 stars = currentRow while stars > 0: print("*", end=' ') stars -= 1 print() currentRow += 1 while currentRow <= maxRows: leftRows = maxRows - currentRow + 1 while leftRows > 0: spaces = limitRows - leftRows while spaces > 0: print(" ", end='') spaces -= 1 stars = leftRows while stars > 0: print("*", end=' ') stars -= 1 print() leftRows -= 1 currentRow += 1 ``` OUTPUT(Case 1): ``` D:\Python\diamond\venv\Scripts\python.exe D:/Python/diamond/main.py Enter the maximum number of rows: 4 * * * * * * * * * * * * * * * * * * * * Process finished with exit code 0 ``` OUTPUT(Case 2): ``` D:\Python\diamond\venv\Scripts\python.exe D:/Python/diamond/main.py Enter the maximum number of rows: 5 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Process finished with exit code 0 ```
2022/06/25
[ "https://Stackoverflow.com/questions/72751658", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19409398/" ]
You have an extra while loop in the second half of your code, and `currentRow` needs to start at 1 (starting at 0 prints a leading line of spaces). Try this: ``` limitRows = int(input("Enter the maximum number of rows: ")) maxRows = limitRows + (limitRows - 1) currentRow = 1 # start from the first row while currentRow <= limitRows: spaces = limitRows - currentRow while spaces > 0: print(" ", end='') spaces -= 1 stars = currentRow while stars > 0: print("*", end=' ') stars -= 1 print() currentRow += 1 while currentRow <= maxRows: leftRows = maxRows - currentRow + 1 spaces = limitRows - leftRows # Removed unnecessary while loop here while spaces > 0: print(" ", end='') spaces -= 1 stars = leftRows while stars > 0: print("*", end=' ') stars -= 1 print() leftRows -= 1 currentRow += 1 ```
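For what it's worth, the same spaces-then-stars logic can be wrapped in a function (the `diamond_rows` name is mine, not from the original code) to sanity-check the shape:

```python
def diamond_rows(limit):
    # Build the diamond as a list of strings using the same
    # spaces-then-stars logic as the corrected loops above.
    rows = []
    for r in range(1, limit + 1):      # top half, widest row last
        rows.append(" " * (limit - r) + "* " * r)
    for r in range(limit - 1, 0, -1):  # mirrored bottom half
        rows.append(" " * (limit - r) + "* " * r)
    return [row.rstrip() for row in rows]

for row in diamond_rows(4):
    print(row)
```

A diamond is vertically symmetric, so `rows == rows[::-1]` is an easy property to check.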
You can make this less complex by using just one loop as follows: ``` def make_row(n): return ' '.join(['*'] * n) rows = input('Number of rows: ') if (nrows := int(rows)) > 0: diamond = [make_row(nrows)] j = len(diamond[0]) for i in range(nrows-1, 0, -1): j -= 1 diamond.append(make_row(i).rjust(j)) print(*diamond[::-1], *diamond[1:], sep='\n') ``` **Example:** ``` Number of rows: 5 * * * * * * * * * * * * * * * * * * * * * * * * * ```
59,529,038
I am using [nameko](https://nameko.readthedocs.io/en/stable/) to build an ETL pipeline with a micro-service architecture, and I do not want to wait for a reply after making a RPC request. ``` from nameko.rpc import rpc, RpcProxy class Scheduler(object): name = "scheduler" task_runner = RpcProxy('task_runner') @rpc def schedule(self, task_type, group_id, time): return self.task_runner.start.async(task_type, group_id) ``` This code throws an error: ``` Traceback (most recent call last): File "/home/satnam-sandhu/.anaconda3/envs/etl/bin/nameko", line 8, in <module> sys.exit(main()) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/main.py", line 112, in main args.main(args) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/commands.py", line 110, in main main(args) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/run.py", line 181, in main import_service(path) File "/home/satnam-sandhu/.anaconda3/envs/etl/lib/python3.8/site-packages/nameko/cli/run.py", line 46, in import_service __import__(module_name) File "./scheduler/service.py", line 15 return self.task_runner.start.async(task_type, group_id) ^ SyntaxError: invalid syntax ``` I am new with microservices and Nameko, and also I am using RabbitMQ as the queuing service.
2019/12/30
[ "https://Stackoverflow.com/questions/59529038", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8776121/" ]
I had the same problem; you need to replace the `async` method with the `call_async` one, and retrieve the data with `result()`. [Documentation](https://nameko.readthedocs.io/en/stable/built_in_extensions.html) [GitHub issue](https://github.com/nameko/nameko/pull/318)
Use `call_async` instead of `async`, or, for a better fit, use events. In the emitting service, dispatch an event: ``` from nameko.events import EventDispatcher from nameko.rpc import rpc class ServiceA: name = "service_a" dispatch = EventDispatcher() @rpc def emit(self, payload): self.dispatch("event_emit_name", payload) ``` and in the other service, handle it: ``` from nameko.events import event_handler class ServiceB: name = "service_b" @event_handler("service_a", "event_emit_name") def get_result(self, payload): # get payload and work over there ... ```
31,196,412
I am new to the world of map reduce, I have run a job and it seems to be taking forever to complete given that it is a relatively small task, I am guessing something has not gone according to plan. I am using hadoop version 2.6, here is some info gathered I thought could help. The map reduce programs themselves are straightforward so I won't bother adding those here unless someone really wants me to give more insight - the python code running for map reduce is identical to the one here - <http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/>. If someone can give a clue as to what has gone wrong or why that would be great. Thanks in advance. ``` Name: streamjob1669011192523346656.jar Application Type: MAPREDUCE Application Tags: State: ACCEPTED FinalStatus: UNDEFINED Started: 3-Jul-2015 00:17:10 Elapsed: 20mins, 57sec Tracking URL: UNASSIGNED Diagnostics: ``` this is what I get when running the program: ``` bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar - file python-files/mapper.py -mapper python-files/mapper.py -file python - files/reducer.py -reducer python-files/reducer.py -input /user/input/* - output /user/output 15/07/03 00:16:41 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead. 2015-07-03 00:16:43.510 java[3708:1b03] Unable to load realm info from SCDynamicStore 15/07/03 00:16:44 WARN util.NativeCodeLoader: Unable to load native- hadoop library for your platform... 
using builtin-java classes where applicable packageJobJar: [python-files/mapper.py, python-files/reducer.py, /var/folders/4x/v16lrvy91ld4t8rqvnzbr83m0000gn/T/hadoop-unjar8212926403009053963/] [] /var/folders/4x/v16lrvy91ld4t8rqvnzbr83m0000gn/T/streamjob1669011192523346656.jar tmpDir=null 15/07/03 00:16:53 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 15/07/03 00:16:55 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 15/07/03 00:17:05 INFO mapred.FileInputFormat: Total input paths to process : 1 15/07/03 00:17:06 INFO mapreduce.JobSubmitter: number of splits:2 15/07/03 00:17:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1435852353333_0003 15/07/03 00:17:11 INFO impl.YarnClientImpl: Submitted application application_1435852353333_0003 15/07/03 00:17:11 INFO mapreduce.Job: The url to track the job: http://mymacbook.home:8088/proxy/application_1435852353333_0003/ 15/07/03 00:17:11 INFO mapreduce.Job: Running job: job_1435852353333_0003 ```
2015/07/02
[ "https://Stackoverflow.com/questions/31196412", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1285948/" ]
If a job stays in the `ACCEPTED` state for a long time and never moves to `RUNNING`, it is usually due to one of the following reasons. 1. The NodeManager (slave service) is dead or unable to communicate with the ResourceManager. If `Active Nodes` on the YARN ResourceManager [web UI main page](http://mymacbook.home:8088/) is zero, you can confirm that no NodeManagers are connected to the ResourceManager; if so, you need to start the NodeManager. 2. Other running jobs occupy the available slots, leaving no room for new jobs. Check the values of `Memory Total`, `Memory Used`, `VCores Total` and `VCores Used` on the ResourceManager web UI main page.
Have you partitioned your data the same way you query it? Basically, you don't want to query your entire data set, which may be what is happening at the moment; that would explain why the job takes such a long time to run. You want to query a subset of the whole data set. For instance, if you partition by date, you really want to write queries with a date constraint, otherwise the query will take forever to run. If you can, constrain your query on the variable(s) used to partition your data.
32,959,770
In python, I can do this to get the current file's path: ``` os.path.dirname(os.path.abspath(__file__)) ``` But if I run this on a thread say: ``` def do_stuff(): class RunThread(threading.Thread): def run(self): print os.path.dirname(os.path.abspath(__file__)) a = RunThread() a.start() ``` I get this error: ``` Traceback (most recent call last): File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner self.run() File "readrss.py", line 137, in run print os.path.dirname(os.path.abspath(__file__)) NameError: global name '__file__' is not defined ```
2015/10/06
[ "https://Stackoverflow.com/questions/32959770", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1515864/" ]
``` import inspect print(inspect.stack()[0][1]) ``` [inspect](https://docs.python.org/2/library/inspect.html)
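Expanded into the question's threaded setup, a minimal sketch (standard library only) might look like this; `inspect.stack()[0][1]` gives the current frame's filename even where `__file__` is not in scope:

```python
import inspect
import os
import threading

results = []

class RunThread(threading.Thread):
    def run(self):
        # inspect.stack()[0][1] is the filename of the current frame,
        # so this works inside run() where __file__ raises NameError
        path = os.path.dirname(os.path.abspath(inspect.stack()[0][1]))
        results.append(path)

a = RunThread()
a.start()
a.join()  # wait for the thread so `results` is populated before reading it
print(results[0])
```

Note the `join()`: without it the main thread may read `results` before the worker has appended anything.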
I apologise for my previous answer. I was half asleep and replied stupidly. Every time I've done what you're trying to do, I have used it in the inverse order. E.g. `os.path.abspath(os.path.dirname(__file__))`
48,047,495
``` Collecting jws>=0.1.3 (from python-jwt==2.0.1->pyrebase) Using cached https://files.pythonhosted.org/packages/01/9e/1536d578ed50f5fe8196310ddcc921a3cd8e973312d60ac74488b805d395/jws-0.1.3.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Wesely\AppData\Local\Temp\pip-install-w5z8dsub\jws\setup.py", line 17, in <module> long_description=read('README.md'), File "C:\Users\Wesely\AppData\Local\Temp\pip-install-w5z8dsub\jws\setup.py", line 5, in read return open(os.path.join(os.path.dirname(__file__), fname)).read() UnicodeDecodeError: 'cp949' codec can't decode byte 0xe2 in position 500: illegal multibyte sequence ---------------------------------------- ``` I tried easy\_install pyrebase, and using virtualenv. I'm using Korean Windows 10.
2018/01/01
[ "https://Stackoverflow.com/questions/48047495", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8272238/" ]
I've just solved this. [MyGitHub.io](https://wesely.github.io/pip,%20python,%20pip/Fix-'cp950'-Error-when-using-'pip-install'/) It's a bug from `jws` package, it should consider the encoding problem in its `setup.py`. My Solution : install `jws` first * use `pip download jws` instead of `pip install` * use 7z open the `filename.tar.gz` archive * edit the `setup.py` file * change this line ``` return open(os.path.join(os.path.dirname(__file__), fname)).read() ``` to ``` return open(os.path.join(os.path.dirname(__file__), fname), encoding="UTF-8").read() ``` * Re-archive the tar file, run pip install `filename.tar` After `jws` being installed, run `pip install pyrebase`. It should work.
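The underlying issue is that `open()` without an explicit `encoding` falls back to the locale's default codec (cp949 on Korean Windows), which cannot decode UTF-8 bytes such as `0xe2`. A self-contained illustration (the file name is just for the example, mirroring the README.md that trips up jws's `setup.py`):

```python
import os
import tempfile

# Write a UTF-8 file containing a non-ASCII character
path = os.path.join(tempfile.mkdtemp(), "README.md")
with open(path, "w", encoding="utf-8") as f:
    f.write("jws \u2013 JSON Web Signatures")  # en dash encodes as 0xe2 0x80 0x93

# Decoding those bytes with the cp949 codec fails, which is exactly
# the UnicodeDecodeError pip reports
try:
    open(path, encoding="cp949").read()
    failed = False
except UnicodeDecodeError:
    failed = True

# Passing the correct encoding reads the file fine
text = open(path, encoding="utf-8").read()
print(failed, text)
```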
I solved this problem by uninstalling Visual Studio Community 2017, including its Python development option.
48,047,495
``` Collecting jws>=0.1.3 (from python-jwt==2.0.1->pyrebase) Using cached https://files.pythonhosted.org/packages/01/9e/1536d578ed50f5fe8196310ddcc921a3cd8e973312d60ac74488b805d395/jws-0.1.3.tar.gz Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Wesely\AppData\Local\Temp\pip-install-w5z8dsub\jws\setup.py", line 17, in <module> long_description=read('README.md'), File "C:\Users\Wesely\AppData\Local\Temp\pip-install-w5z8dsub\jws\setup.py", line 5, in read return open(os.path.join(os.path.dirname(__file__), fname)).read() UnicodeDecodeError: 'cp949' codec can't decode byte 0xe2 in position 500: illegal multibyte sequence ---------------------------------------- ``` I tried easy\_install pyrebase, and using virtualenv. I'm using Korean Windows 10.
2018/01/01
[ "https://Stackoverflow.com/questions/48047495", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8272238/" ]
I've just solved this. [MyGitHub.io](https://wesely.github.io/pip,%20python,%20pip/Fix-'cp950'-Error-when-using-'pip-install'/) It's a bug from `jws` package, it should consider the encoding problem in its `setup.py`. My Solution : install `jws` first * use `pip download jws` instead of `pip install` * use 7z open the `filename.tar.gz` archive * edit the `setup.py` file * change this line ``` return open(os.path.join(os.path.dirname(__file__), fname)).read() ``` to ``` return open(os.path.join(os.path.dirname(__file__), fname), encoding="UTF-8").read() ``` * Re-archive the tar file, run pip install `filename.tar` After `jws` being installed, run `pip install pyrebase`. It should work.
I solved this problem by 1. moving the `python-jwt` install up in the order of my `requirements.txt`, and 2. not specifying the older version of `python-jwt` that pyrebase pins. One of those did the trick.
12,646,883
I've been trying to make a simple blackjack game in python and I seem to be stuck; my code is as follows: ``` from random import choice def deck(): cards = range(1, 12) return choice(cards) def diack(): card1= deck() card2 = deck() hand = card1 + card2 print hand if hand < 21: print raw_input("Would you like to hit or stand?") if "hit": return hand + deck() elif "stand": return hand ``` When I run that it seems to work for "hit" but when I type in "stand" it seems to "hit" as well. As you can probably tell by now I am extremely new to programming. Could you guys help point me in the right direction on how to make my game work (I'd like to use as much of my code as possible).
2012/09/28
[ "https://Stackoverflow.com/questions/12646883", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1689672/" ]
`if "hit"` just tests if the string `"hit"` exists, and it does. Thus, the `elif` statement is never executed. You need to capture the user input in a variable and test against that instead: ``` choice = raw_input("Would you like to hit or stand?") print choice if choice == "hit": return hand + deck() elif choice == "stand": return hand ```
Assuming you get the indentation right: ``` print raw_input("Would you like to hit or stand?") if "hit": return hand + deck() elif "stand": return hand ``` Your `if` is just checking whether the string `"hit"` is true. All non-empty strings are true, and `"hit"` is non-empty, so this will always succeed. What you want is something like this: ``` cmd = raw_input("Would you like to hit or stand?") if cmd == "hit": return hand + deck() elif cmd == "stand": return hand ``` Now you're checking whether the result of `raw_input` is the string `"hit"`, which is what you want.
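Both answers turn on the same language rule: a non-empty string literal is always truthy, so `if "hit":` never consults the player's input. A quick check of that rule:

```python
# Any non-empty string is truthy, regardless of its content,
# so `if "hit":` and `if "stand":` would both take the branch.
assert bool("hit") is True
assert bool("stand") is True

# Only the empty string is falsy.
assert bool("") is False

# Testing the captured input instead makes the elif reachable:
cmd = "stand"
if cmd == "hit":
    result = "hit taken"
elif cmd == "stand":
    result = "stand taken"
assert result == "stand taken"
```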
43,983,127
I wish to find all words that start with "Am" and this is what I tried so far with python ``` import re my_string = "America's mom, American" re.findall(r'\b[Am][a-zA-Z]+\b', my_string) ``` but this is the output that I get ``` ['America', 'mom', 'American'] ``` Instead of what I want ``` ['America', 'American'] ``` I know that in regex `[Am]` means match either `A` or `m`, but is it possible to match `A` and `m` as well?
2017/05/15
[ "https://Stackoverflow.com/questions/43983127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/863713/" ]
The `[Am]`, a positive [character class](http://www.regular-expressions.info/charclass.html), matches either `A` or `m`. To match a *sequence* of chars, you need to use them one after another. Remove the brackets: ``` import re my_string = "America's mom, American" print(re.findall(r'\bAm[a-zA-Z]+\b', my_string)) # => ['America', 'American'] ``` See the [Python demo](http://ideone.com/saGZrG) **This pattern details**: * `\b` - a word boundary * `Am` - a string of chars matched as a sequence `Am` * `[a-zA-Z]+` - 1 or more ASCII letters * `\b` - a word boundary.
Don't use character class: ``` import re my_string = "America's mom, American" re.findall(r'\bAm[a-zA-Z]+\b', my_string) ```
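The difference both answers describe can be verified directly: a character class matches a single position, while writing the letters in order matches them as a literal sequence.

```python
import re

my_string = "America's mom, American"

# [Am] is a character class: it matches ONE character that is either
# 'A' or 'm', so 'mom' qualifies too.
assert re.findall(r'\b[Am][a-zA-Z]+\b', my_string) == ['America', 'mom', 'American']

# Writing the letters one after another matches the literal sequence 'Am'.
assert re.findall(r'\bAm[a-zA-Z]+\b', my_string) == ['America', 'American']
```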
43,983,127
I wish to find all words that start with "Am" and this is what I tried so far with python ``` import re my_string = "America's mom, American" re.findall(r'\b[Am][a-zA-Z]+\b', my_string) ``` but this is the output that I get ``` ['America', 'mom', 'American'] ``` Instead of what I want ``` ['America', 'American'] ``` I know that in regex `[Am]` means match either `A` or `m`, but is it possible to match `A` and `m` as well?
2017/05/15
[ "https://Stackoverflow.com/questions/43983127", "https://Stackoverflow.com", "https://Stackoverflow.com/users/863713/" ]
The `[Am]`, a positive [character class](http://www.regular-expressions.info/charclass.html), matches either `A` or `m`. To match a *sequence* of chars, you need to use them one after another. Remove the brackets: ``` import re my_string = "America's mom, American" print(re.findall(r'\bAm[a-zA-Z]+\b', my_string)) # => ['America', 'American'] ``` See the [Python demo](http://ideone.com/saGZrG) **This pattern details**: * `\b` - a word boundary * `Am` - a string of chars matched as a sequence `Am` * `[a-zA-Z]+` - 1 or more ASCII letters * `\b` - a word boundary.
``` re.findall(r'(Am\w+)', my_text, re.I) ```
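One caveat with the `re.I` variant in the second answer: the flag makes the match case-insensitive, so lowercase words beginning with `am` are captured as well. A sketch with a hypothetical sample string (the word `amber` is added here for illustration):

```python
import re

sample = "America's mom, American, amber"

# Case-sensitive: only words literally starting with 'Am'.
assert re.findall(r'(Am\w+)', sample) == ['America', 'American']

# With re.I, lowercase 'am...' words match too.
assert re.findall(r'(Am\w+)', sample, re.I) == ['America', 'American', 'amber']
```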
65,376,345
I started Scrapy with the official tutorial, but I can't get it to work successfully. My code is exactly the same as the official one. ``` import scrapy class QuotesSpider(scrapy.Spider): name = 'Quotes'; def start_requests(self): urls = [ 'http://quotes.toscrape.com/page/1/', ] for url in urls: yield scrapy.Request(url=url,callback = self.parse); def parse(self, response): page = response.url.split('/')[-2]; print('--------------------------------->>>>'); for quote in response.css('div.quote'): yield { 'text': quote.css('span.text::text').get(), 'author': quote.css('small.author::text').get(), 'tags': quote.css('div.tags a.tag::text').getall(), } ``` When I execute it in CMD with the command (scrapy crawl Quotes), the result is like this: ``` 2020-12-20 10:00:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None) 2020-12-20 10:00:26 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/page/1/> (referer: None) Traceback (most recent call last): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks result = g.send(result) StopIteration: <200 http://quotes.toscrape.com/page/1/> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\defer.py", line 55, in mustbe_deferred result = f(*args, **kw) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\core\spidermw.py", line 58, in process_spider_input return scrape_func(response, request, spider) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\core\scraper.py", line 149, in call_spider warn_on_generator_with_return_value(spider, callback) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\misc.py", line 245, in 
warn_on_generator_with_return_value if is_generator_with_return_value(callable): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\misc.py", line 230, in is_generator_with_return_value tree = ast.parse(dedent(inspect.getsource(callable))) File "c:\users\a\appdata\local\programs\python\python38-32\lib\ast.py", line 47, in parse return compile(source, filename, mode, flags, File "<unknown>", line 1 def parse(self, response): ^ IndentationError: unexpected indent 2020-12-20 10:00:26 [scrapy.core.engine] INFO: Closing spider (finished) 2020-12-20 10:00:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats: ``` --- I have checked it many times but I still do not know how to deal with it!
2020/12/20
[ "https://Stackoverflow.com/questions/65376345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14482346/" ]
There is an `IndentationError`. You need to fix the code indentation. Then it works fine.
You might find a solution for your issue here > > [Scrapy installed, but won't run from the command line](https://stackoverflow.com/questions/37757233/scrapy-installed-but-wont-run-from-the-command-line) > > >
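The traceback shows the error coming from Scrapy's own `warn_on_generator_with_return_value`, which runs `ast.parse(dedent(inspect.getsource(callback)))` on the spider's `parse` method. One way to reproduce that exact failure (an assumption about the spider file, not visible in the question) is mixing tabs and spaces in the source, which `textwrap.dedent` cannot strip:

```python
import ast
from textwrap import dedent

# Method source as inspect.getsource returns it: still indented, with
# one line indented by a tab instead of spaces (hypothetical example).
src = "    def parse(self, response):\n\t    return response\n"

# dedent() only removes whitespace common to ALL lines; ' ' and '\t'
# share no common prefix, so the indentation survives and ast.parse
# fails with "IndentationError: unexpected indent", as in the traceback.
try:
    ast.parse(dedent(src))
    raised = False
except IndentationError:
    raised = True
assert raised
```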
65,376,345
I started Scrapy with the official tutorial, but I can't get it to work successfully. My code is exactly the same as the official one. ``` import scrapy class QuotesSpider(scrapy.Spider): name = 'Quotes'; def start_requests(self): urls = [ 'http://quotes.toscrape.com/page/1/', ] for url in urls: yield scrapy.Request(url=url,callback = self.parse); def parse(self, response): page = response.url.split('/')[-2]; print('--------------------------------->>>>'); for quote in response.css('div.quote'): yield { 'text': quote.css('span.text::text').get(), 'author': quote.css('small.author::text').get(), 'tags': quote.css('div.tags a.tag::text').getall(), } ``` When I execute it in CMD with the command (scrapy crawl Quotes), the result is like this: ``` 2020-12-20 10:00:25 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None) 2020-12-20 10:00:26 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com/page/1/> (referer: None) Traceback (most recent call last): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\twisted\internet\defer.py", line 1418, in _inlineCallbacks result = g.send(result) StopIteration: <200 http://quotes.toscrape.com/page/1/> During handling of the above exception, another exception occurred: Traceback (most recent call last): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\defer.py", line 55, in mustbe_deferred result = f(*args, **kw) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\core\spidermw.py", line 58, in process_spider_input return scrape_func(response, request, spider) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\core\scraper.py", line 149, in call_spider warn_on_generator_with_return_value(spider, callback) File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\misc.py", line 245, in 
warn_on_generator_with_return_value if is_generator_with_return_value(callable): File "c:\users\a\appdata\local\programs\python\python38-32\lib\site-packages\scrapy\utils\misc.py", line 230, in is_generator_with_return_value tree = ast.parse(dedent(inspect.getsource(callable))) File "c:\users\a\appdata\local\programs\python\python38-32\lib\ast.py", line 47, in parse return compile(source, filename, mode, flags, File "<unknown>", line 1 def parse(self, response): ^ IndentationError: unexpected indent 2020-12-20 10:00:26 [scrapy.core.engine] INFO: Closing spider (finished) 2020-12-20 10:00:26 [scrapy.statscollectors] INFO: Dumping Scrapy stats: ``` --- I have checked it many times but I still do not know how to deal with it!
2020/12/20
[ "https://Stackoverflow.com/questions/65376345", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14482346/" ]
There is an `IndentationError`. You need to fix the code indentation. Then it works fine.
It is not about the yield. I think either all the semicolons or maybe the last comma after getall() `'tags': quote.css('div.tags a.tag::text').getall(),` might cause the interpreter to expect something else. Remove the semicolons and the last comma - does it still not work? The error output shows the indentation error at: ``` def parse ^ ``` This tells you that something before that point caused it, so I guess it should be the first semicolon.
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` How can I merge them such that I get the following result? ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of doing this? Note that I am a python newbie, even though I am using words like pythonic. Thanks in advance.
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
Are you sure they have the same keys? You could do: ``` c = dict( (k,a[k]+b[k]) for k in a ) ``` Addition of lists concatenates so `a[k] + b[k]` gives you something like `[1,2]+[3,4]` which equals `[1,2,3,4]`. The `dict` constructor can take a series of 2-element iterables which turn into `key` - `value` pairs. If they don't share the keys, you can use `set`s. ``` aset = set(a) bset = set(b) common_keys = aset & bset a_only_keys = aset - bset b_only_keys = bset - aset c = dict( (k,a[k]) for k in a_only_keys ) c.update( (k,b[k]) for k in b_only_keys ) c.update( (k,a[k]+b[k]) for k in common_keys ) ```
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
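The set-based answer can be condensed in Python 3, where dict key views support set operations directly; `dict.get` with an empty-list default covers keys present in only one dict. A sketch producing the exact `c` the question asks for:

```python
a = {'I': [1, 2], 'II': [1, 2], 'III': [1, 2]}
b = {'I': [3, 4], 'II': [3, 4], 'IV': [3, 4]}

def merge_dicts(a, b):
    # a.keys() | b.keys() is the union of both key sets; get(k, [])
    # returns an empty list for keys missing from one side.
    return {k: a.get(k, []) + b.get(k, []) for k in a.keys() | b.keys()}

c = merge_dicts(a, b)
assert c == {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'III': [1, 2], 'IV': [3, 4]}
```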
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` How can I merge them such that I get the following result? ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of doing this? Note that I am a python newbie, even though I am using words like pythonic. Thanks in advance.
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
If you need ALL values: ``` from itertools import chain from collections import defaultdict a = {'I': [1,2], 'II': [1,2], 'IV': [1,2]} b = {'I': [3,4], 'II': [3,4], 'V': [3,4]} d = defaultdict(list) for key, value in chain(a.iteritems(), b.iteritems()): d[key].extend(value) d ``` Output: ``` defaultdict(<type 'list'>, {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'V': [3, 4], 'IV': [1, 2]}) ```
Are you sure they have the same keys? You could do: ``` c = dict( (k,a[k]+b[k]) for k in a ) ``` Addition of lists concatenates so `a[k] + b[k]` gives you something like `[1,2]+[3,4]` which equals `[1,2,3,4]`. The `dict` constructor can take a series of 2-element iterables which turn into `key` - `value` pairs. If they don't share the keys, you can use `set`s. ``` aset = set(a) bset = set(b) common_keys = aset & bset a_only_keys = aset - bset b_only_keys = bset - aset c = dict( (k,a[k]) for k in a_only_keys ) c.update( (k,b[k]) for k in b_only_keys ) c.update( (k,a[k]+b[k]) for k in common_keys ) ```
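The `defaultdict` answer is written for Python 2 (`dict.iteritems` was removed in Python 3); the same approach with `items()` works on the question's dicts, including the non-shared keys:

```python
from collections import defaultdict
from itertools import chain

a = {'I': [1, 2], 'II': [1, 2], 'III': [1, 2]}
b = {'I': [3, 4], 'II': [3, 4], 'IV': [3, 4]}

d = defaultdict(list)
# In Python 3, items() replaces the removed iteritems().
for key, value in chain(a.items(), b.items()):
    d[key].extend(value)

assert dict(d) == {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'III': [1, 2], 'IV': [3, 4]}
```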
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` How can I merge them such that I get the following result? ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of doing this? Note that I am a python newbie, even though I am using words like pythonic. Thanks in advance.
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
If you need ALL values: ``` from itertools import chain from collections import defaultdict a = {'I': [1,2], 'II': [1,2], 'IV': [1,2]} b = {'I': [3,4], 'II': [3,4], 'V': [3,4]} d = defaultdict(list) for key, value in chain(a.iteritems(), b.iteritems()): d[key].extend(value) d ``` Output: ``` defaultdict(<type 'list'>, {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'V': [3, 4], 'IV': [1, 2]}) ```
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` How can I merge them such that I get the following result? ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of doing this? Note that I am a python newbie, even though I am using words like pythonic. Thanks in advance.
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
``` >>> from collections import Counter >>> class ListAccumulator(Counter): ... def __missing__(self, key): ... return [] ... >>> a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} >>> b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} >>> >>> ListAccumulator(a) + ListAccumulator(b) Counter({'IV': [3, 4], 'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'III': [1, 2]}) ```
``` >>> a = {'I': [1,2], 'II': [1,2]} >>> b = {'I': [3,4], 'II': [3,4]} >>> {key:a[key]+b[key] for key in a} {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4]} ``` Note that this only works if they share keys exactly.
12,634,452
I have 2 dictionaries ``` a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} ``` How can I merge them such that I get the following result? ``` c = merge_dicts(a,b) ``` where c is `{'I': [1,2,3,4], 'II': [1,2,3,4], 'III': [1,2], 'IV': [3,4]}` Is there a good pythonic way of doing this? Note that I am a python newbie, even though I am using words like pythonic. Thanks in advance.
2012/09/28
[ "https://Stackoverflow.com/questions/12634452", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1742869/" ]
If you need ALL values: ``` from itertools import chain from collections import defaultdict a = {'I': [1,2], 'II': [1,2], 'IV': [1,2]} b = {'I': [3,4], 'II': [3,4], 'V': [3,4]} d = defaultdict(list) for key, value in chain(a.iteritems(), b.iteritems()): d[key].extend(value) d ``` Output: ``` defaultdict(<type 'list'>, {'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'V': [3, 4], 'IV': [1, 2]}) ```
``` >>> from collections import Counter >>> class ListAccumulator(Counter): ... def __missing__(self, key): ... return [] ... >>> a = {'I': [1,2], 'II': [1,2], 'III': [1,2]} >>> b = {'I': [3,4], 'II': [3,4], 'IV': [3,4]} >>> >>> ListAccumulator(a) + ListAccumulator(b) Counter({'IV': [3, 4], 'I': [1, 2, 3, 4], 'II': [1, 2, 3, 4], 'III': [1, 2]}) ```
38,921,815
I am using python 3.5. When I try to return a generator function instance, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `check_v2` works because it doesn't contain a `yield` and is therefore a normal function. There are two ways to accomplish what you want: * Change the `return` to a `yield` in `check`. * Have the caller trap `StopIteration`, as shown below ``` try: next(a) except StopIteration as ex: print(ex.value) ```
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement; that's why it works.
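The `StopIteration: <generator object gen ...>` message in the question shows the mechanism directly: the returned value rides on the exception. A small sketch of the trap-the-exception option:

```python
def check(ingen):
    return ingen
    yield  # never reached; its mere presence makes check a generator

g = check(iter([1, 2, 3]))
try:
    next(g)
    value = None
except StopIteration as ex:
    value = ex.value  # the iterator that `return` handed back

items = list(value)
assert items == [1, 2, 3]
```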
38,921,815
I am using python 3.5. When I try to return a generator function instance, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement; that's why it works.
As others have said, if you return from a generator, it means that the generator has stopped yielding items, which raises a `StopIteration` whatever you return. This means that `check` actually returns an empty iterator. If you want to return the results of another generator, you can use `yield from`: ``` def check(ingen, flag=None): if flag: for n in ingen: yield n*2 else: yield from ingen ```
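The delegation in that answer can be exercised end to end; `yield from` forwards every item of the inner generator instead of smuggling it out through `StopIteration`:

```python
def gen(start, end):
    # Same generator as in the question, similar to range().
    while start <= end:
        yield start
        start += 1

def check(ingen, flag=None):
    if flag:
        for n in ingen:
            yield n * 2
    else:
        yield from ingen  # delegate to the inner generator

assert list(check(gen(1, 3))) == [1, 2, 3]
assert list(check(gen(1, 3), flag=True)) == [2, 4, 6]
```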
38,921,815
I am using python 3.5. When I try to return a generator function instance, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
When a generator hits its `return` statement (explicit or not) it raises `StopIteration`. So when you `return ingen` you end the iteration. `check_v2` is not a generator, since it does not contain the `yield` statement; that's why it works.
See [PEP 380](https://www.python.org/dev/peps/pep-0380/#formal-semantics): > > In a generator, the statement > > > > ``` > return value > > ``` > > is semantically equivalent to > > > > ``` > raise StopIteration(value) > > ``` > > except that, as currently, the exception cannot be caught by `except` > clauses within the returning generator. > > > This is a new feature in Python 3.3.
38,921,815
I am using python 3.5. When I try to return a generator function instance, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `check_v2` works because it doesn't contain a `yield` and is therefore a normal function. There are two ways to accomplish what you want: * Change the `return` to a `yield` in `check`. * Have the caller trap `StopIteration`, as shown below ``` try: next(a) except StopIteration as ex: print(ex.value) ```
As others have said, if you return from a generator, it means that the generator has stopped yielding items, which raises a `StopIteration` whatever you return. This means that `check` actually returns an empty iterator. If you want to return the results of another generator, you can use `yield from`: ``` def check(ingen, flag=None): if flag: for n in ingen: yield n*2 else: yield from ingen ```
38,921,815
I am using python 3.5. When I try to return a generator function instance, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
In Python, if `yield` is present in a function, then Python treats it as a generator. In a generator, any return will raise `StopIteration` with the returned value. This is a new feature in Python 3.3: see [PEP 380](https://www.python.org/dev/peps/pep-0380/) and [here](https://stackoverflow.com/a/16780113/2097780). `check_v2` works because it doesn't contain a `yield` and is therefore a normal function. There are two ways to accomplish what you want: * Change the `return` to a `yield` in `check`. * Have the caller trap `StopIteration`, as shown below ``` try: next(a) except StopIteration as ex: print(ex.value) ```
See [PEP 380](https://www.python.org/dev/peps/pep-0380/#formal-semantics): > > In a generator, the statement > > > > ``` > return value > > ``` > > is semantically equivalent to > > > > ``` > raise StopIteration(value) > > ``` > > except that, as currently, the exception cannot be caught by `except` > clauses within the returning generator. > > > This is a new feature in Python 3.3.
38,921,815
I am using python 3.5. When I try to return a generator function instance, I get a StopIteration error. Why? Here is my code: ``` >>> def gen(start, end): ... '''generator function similar to range function''' ... while start <= end: ... yield start ... start += 1 ... >>> def check(ingen, flag=None): ... if flag: ... for n in ingen: ... yield n*2 ... else: ... return ingen ... >>> # Trigger else clause in check function >>> a = check(gen(1,3)) >>> next(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration: <generator object gen at 0x7f37dc46e828> ``` It looks like the generator is somehow exhausted before the else clause returns the generator. It works fine with this function: ``` >>> def check_v2(ingen): ... return ingen ... >>> b = check_v2(gen(1, 3)) >>> next(b) 1 >>> next(b) 2 >>> next(b) 3 ```
2016/08/12
[ "https://Stackoverflow.com/questions/38921815", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
As others have said, if you return from a generator, it means that the generator has stopped yielding items, which raises a `StopIteration` whatever you return. This means that `check` actually returns an empty iterator. If you want to return the results of another generator, you can use `yield from`: ``` def check(ingen, flag=None): if flag: for n in ingen: yield n*2 else: yield from ingen ```
See [PEP 380](https://www.python.org/dev/peps/pep-0380/#formal-semantics): > > In a generator, the statement > > > > ``` > return value > > ``` > > is semantically equivalent to > > > > ``` > raise StopIteration(value) > > ``` > > except that, as currently, the exception cannot be caught by `except` > clauses within the returning generator. > > > This is a new feature in Python 3.3.
57,484,399
I'm new to Python and am using Anaconda on Windows 10 to learn how to implement machine learning. Running this code on Spyder: ```py import sklearn as skl ``` Originally got me this: ``` Traceback (most recent call last): File "<ipython-input-1-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spyder-py3/temp.py', wdir='C:/Users/julia/.spyder-py3') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/julia/.spyder-py3/temp.py", line 3, in <module> from sklearn.family import Model File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\__init__.py", line 76, in <module> from .base import clone File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 16, in <module> from .utils import _IS_32BIT File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\__init__.py", line 20, in <module> from .validation import (as_float_array, File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 21, in <module> from .fixes import _object_dtype_isnan File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 289, in <module> from scipy.sparse.linalg import lsqr as sparse_lsqr File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\__init__.py", line 114, in <module> from .isolve import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module> from .iterative import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 10, in <module> from . import _iterative ImportError: DLL load failed: The specified module could not be found. 
``` I then went to the command line and did ``` pip uninstall scipy pip install scipy pip uninstall scikit-learn pip install scikit-learn ``` and got no errors when doing so, with scipy 1.3.1 (along with numpy 1.17.0) and scikit-learn 0.21.3 being installed according to the command line. However, now when I try to import sklearn I get a different error: ``` File "<ipython-input-2-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spyder-py3/temp.py', wdir='C:/Users/julia/.spyder-py3') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/julia/.spyder-py3/temp.py", line 3, in <module> from sklearn.family import Model File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\__init__.py", line 76, in <module> from .base import clone File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 16, in <module> from .utils import _IS_32BIT File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\__init__.py", line 20, in <module> from .validation import (as_float_array, File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 21, in <module> from .fixes import _object_dtype_isnan File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\fixes.py", line 289, in <module> from scipy.sparse.linalg import lsqr as sparse_lsqr File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\__init__.py", line 113, in <module> from .isolve import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module> from .iterative import * File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 136, in <module> def bicg(A, b, x0=None, tol=1e-5, maxiter=None, 
M=None, callback=None, atol=None): File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\_lib\_threadsafety.py", line 59, in decorator return lock.decorate(func) File "C:\ProgramData\Anaconda3\lib\site-packages\scipy\_lib\_threadsafety.py", line 47, in decorate return scipy._lib.decorator.decorate(func, caller) AttributeError: module 'scipy' has no attribute '_lib' ``` Any suggestions? I've uninstalled and reinstalled Anaconda and I'm still getting the same issue. EDIT: When I do ``` conda list --show-channel-urls ``` I get ``` # packages in environment at C:\ProgramData\Anaconda3: # # Name Version Build Channel _ipyw_jlab_nb_ext_conf 0.1.0 py37_0 defaults alabaster 0.7.12 py37_0 defaults anaconda-client 1.7.2 py37_0 defaults anaconda-navigator 1.9.7 py37_0 defaults asn1crypto 0.24.0 py37_0 defaults astroid 2.2.5 py37_0 defaults attrs 19.1.0 py37_1 defaults babel 2.7.0 py_0 defaults backcall 0.1.0 py37_0 defaults backports 1.0 py_2 defaults backports.functools_lru_cache 1.5 py_2 defaults backports.tempfile 1.0 py_1 defaults backports.weakref 1.0.post1 py_1 defaults beautifulsoup4 4.7.1 py37_1 defaults blas 1.0 mkl defaults bleach 3.1.0 py37_0 defaults bzip2 1.0.8 he774522_0 defaults ca-certificates 2019.5.15 1 defaults certifi 2019.6.16 py37_1 defaults cffi 1.12.3 py37h7a1dbc1_0 defaults chardet 3.0.4 py37_1003 defaults click 7.0 py37_0 defaults cloudpickle 1.2.1 py_0 defaults clyent 1.2.2 py37_1 defaults colorama 0.4.1 py37_0 defaults conda 4.7.11 py37_0 defaults conda-build 3.18.8 py37_0 defaults conda-env 2.6.0 1 defaults conda-package-handling 1.3.11 py37_0 defaults conda-verify 3.4.2 py_1 defaults console_shortcut 0.1.1 3 defaults cryptography 2.7 py37h7a1dbc1_0 defaults decorator 4.4.0 py37_1 defaults defusedxml 0.6.0 py_0 defaults docutils 0.15.1 py37_0 defaults entrypoints 0.3 py37_0 defaults filelock 3.0.12 py_0 defaults freetype 2.9.1 ha9979f8_1 defaults future 0.17.1 py37_0 defaults glob2 0.7 py_0 defaults icc_rt 2019.0.0 h0cc432a_1 defaults icu 58.2 
ha66f8fd_1 defaults idna 2.8 py37_0 defaults imagesize 1.1.0 py37_0 defaults intel-openmp 2019.4 245 defaults ipykernel 5.1.1 py37h39e3cac_0 defaults ipython 7.7.0 py37h39e3cac_0 defaults ipython_genutils 0.2.0 py37_0 defaults ipywidgets 7.5.1 py_0 defaults isort 4.3.21 py37_0 defaults jedi 0.13.3 py37_0 defaults jinja2 2.10.1 py37_0 defaults joblib 0.13.2 py37_0 defaults jpeg 9b hb83a4c4_2 defaults json5 0.8.5 py_0 defaults jsonschema 3.0.1 py37_0 defaults jupyter_client 5.3.1 py_0 defaults jupyter_core 4.5.0 py_0 defaults jupyterlab 1.0.2 py37hf63ae98_0 defaults jupyterlab_server 1.0.0 py_1 defaults keyring 18.0.0 py37_0 defaults lazy-object-proxy 1.4.1 py37he774522_0 defaults libarchive 3.3.3 h0643e63_5 defaults libiconv 1.15 h1df5818_7 defaults liblief 0.9.0 ha925a31_2 defaults libpng 1.6.37 h2a8f88b_0 defaults libsodium 1.0.16 h9d3ae62_0 defaults libtiff 4.0.10 hb898794_2 defaults libxml2 2.9.9 h464c3ec_0 defaults lz4-c 1.8.1.2 h2fa13f4_0 defaults lzo 2.10 h6df0209_2 defaults m2w64-gcc-libgfortran 5.3.0 6 defaults m2w64-gcc-libs 5.3.0 7 defaults m2w64-gcc-libs-core 5.3.0 7 defaults m2w64-gmp 6.1.0 2 defaults m2w64-libwinpthread-git 5.0.0.4634.697f757 2 defaults markupsafe 1.1.1 py37he774522_0 defaults mccabe 0.6.1 py37_1 defaults menuinst 1.4.16 py37he774522_0 defaults mistune 0.8.4 py37he774522_0 defaults mkl 2019.4 245 defaults mkl-service 2.0.2 py37he774522_0 defaults mkl_fft 1.0.12 py37h14836fe_0 defaults mkl_random 1.0.2 py37h343c172_0 defaults msys2-conda-epoch 20160418 1 defaults navigator-updater 0.2.1 py37_0 defaults nbconvert 5.5.0 py_0 defaults nbformat 4.4.0 py37_0 defaults notebook 6.0.0 py37_0 defaults numpy 1.17.0 pypi_0 pypi numpy-base 1.16.4 py37hc3f5095_0 defaults numpydoc 0.9.1 py_0 defaults olefile 0.46 py37_0 defaults openssl 1.1.1c he774522_1 defaults packaging 19.0 py37_0 defaults pandas 0.25.0 py37ha925a31_0 defaults pandoc 2.2.3.2 0 defaults pandocfilters 1.4.2 py37_1 defaults parso 0.5.0 py_0 defaults pickleshare 0.7.5 py37_0 defaults 
pillow 6.1.0 py37hdc69c19_0 defaults pip 19.2.2 pypi_0 pypi pkginfo 1.5.0.1 py37_0 defaults powershell_shortcut 0.0.1 2 defaults prometheus_client 0.7.1 py_0 defaults prompt_toolkit 2.0.9 py37_0 defaults psutil 5.6.3 py37he774522_0 defaults py-lief 0.9.0 py37ha925a31_2 defaults pycodestyle 2.5.0 py37_0 defaults pycosat 0.6.3 py37hfa6e2cd_0 defaults pycparser 2.19 py37_0 defaults pyflakes 2.1.1 py37_0 defaults pygments 2.4.2 py_0 defaults pylint 2.3.1 py37_0 defaults pyopenssl 19.0.0 py37_0 defaults pyparsing 2.4.0 py_0 defaults pyqt 5.9.2 py37h6538335_2 defaults pyrsistent 0.14.11 py37he774522_0 defaults pysocks 1.7.0 py37_0 defaults python 3.7.3 h8c8aaf0_1 defaults python-dateutil 2.8.0 py37_0 defaults python-libarchive-c 2.8 py37_13 defaults pytz 2019.1 py_0 defaults pywin32 223 py37hfa6e2cd_1 defaults pywinpty 0.5.5 py37_1000 defaults pyyaml 5.1.1 py37he774522_0 defaults pyzmq 18.0.0 py37ha925a31_0 defaults qt 5.9.7 vc14h73c81de_0 defaults qtawesome 0.5.7 py37_1 defaults qtconsole 4.5.2 py_0 defaults qtpy 1.8.0 py_0 defaults requests 2.22.0 py37_0 defaults rope 0.14.0 py_0 defaults ruamel_yaml 0.15.46 py37hfa6e2cd_0 defaults scikit-learn 0.21.3 pypi_0 pypi scipy 1.3.0 pypi_0 pypi send2trash 1.5.0 py37_0 defaults setuptools 41.0.1 py37_0 defaults sip 4.19.8 py37h6538335_0 defaults six 1.12.0 py37_0 defaults snowballstemmer 1.9.0 py_0 defaults soupsieve 1.9.2 py37_0 defaults sphinx 2.1.2 py_0 defaults sphinxcontrib-applehelp 1.0.1 py_0 defaults sphinxcontrib-devhelp 1.0.1 py_0 defaults sphinxcontrib-htmlhelp 1.0.2 py_0 defaults sphinxcontrib-jsmath 1.0.1 py_0 defaults sphinxcontrib-qthelp 1.0.2 py_0 defaults sphinxcontrib-serializinghtml 1.1.3 py_0 defaults spyder 3.3.6 py37_0 defaults spyder-kernels 0.5.1 py37_0 defaults sqlite 3.29.0 he774522_0 defaults terminado 0.8.2 py37_0 defaults testpath 0.4.2 py37_0 defaults tk 8.6.8 hfa6e2cd_0 defaults tornado 6.0.3 py37he774522_0 defaults tqdm 4.32.1 py_0 defaults traitlets 4.3.2 py37_0 defaults urllib3 1.24.2 py37_0 
defaults vc 14.1 h0510ff6_4 defaults vs2015_runtime 14.15.26706 h3a45250_4 defaults wcwidth 0.1.7 py37_0 defaults webencodings 0.5.1 py37_1 defaults wheel 0.33.4 py37_0 defaults widgetsnbextension 3.5.0 py37_0 defaults win_inet_pton 1.1.0 py37_0 defaults wincertstore 0.2 py37_0 defaults winpty 0.4.3 4 defaults wrapt 1.11.2 py37he774522_0 defaults xz 5.2.4 h2fa13f4_4 defaults yaml 0.1.7 hc54c509_2 defaults zeromq 4.3.1 h33f27b4_3 defaults zlib 1.2.11 h62dcd97_3 defaults zstd 1.3.7 h508b16e_0 defaults ``` with the version of scipy not matching up with the version that pip installed. Not sure how significant it is but it seemed strange to me. EDIT 2: Doing `pip list` returns ``` Package Version ----------------------------- --------- -cipy 1.3.0 alabaster 0.7.12 anaconda-client 1.7.2 anaconda-navigator 1.9.7 asn1crypto 0.24.0 astroid 2.2.5 attrs 19.1.0 Babel 2.7.0 backcall 0.1.0 backports.functools-lru-cache 1.5 backports.tempfile 1.0 backports.weakref 1.0.post1 beautifulsoup4 4.7.1 bleach 3.1.0 certifi 2019.6.16 cffi 1.12.3 chardet 3.0.4 Click 7.0 cloudpickle 1.2.1 clyent 1.2.2 colorama 0.4.1 conda 4.7.11 conda-build 3.18.8 conda-package-handling 1.3.11 conda-verify 3.4.2 cryptography 2.7 decorator 4.4.0 defusedxml 0.6.0 docutils 0.15.1 entrypoints 0.3 filelock 3.0.12 future 0.17.1 glob2 0.7 idna 2.8 imagesize 1.1.0 ipykernel 5.1.1 ipython 7.7.0 ipython-genutils 0.2.0 ipywidgets 7.5.1 isort 4.3.21 jedi 0.13.3 Jinja2 2.10.1 joblib 0.13.2 json5 0.8.5 jsonschema 3.0.1 jupyter-client 5.3.1 jupyter-core 4.5.0 jupyterlab 1.0.2 jupyterlab-server 1.0.0 keyring 18.0.0 lazy-object-proxy 1.4.1 libarchive-c 2.8 MarkupSafe 1.1.1 mccabe 0.6.1 menuinst 1.4.16 mistune 0.8.4 mkl-fft 1.0.12 mkl-random 1.0.2 mkl-service 2.0.2 navigator-updater 0.2.1 nbconvert 5.5.0 nbformat 4.4.0 notebook 6.0.0 numpy 1.17.0 numpydoc 0.9.1 olefile 0.46 packaging 19.0 pandas 0.25.0 pandocfilters 1.4.2 parso 0.5.0 pickleshare 0.7.5 Pillow 6.1.0 pio 0.0.3 pip 19.2.2 pkginfo 1.5.0.1 prometheus-client 0.7.1 
prompt-toolkit 2.0.9 psutil 5.6.3 pycodestyle 2.5.0 pycosat 0.6.3 pycparser 2.19 pyflakes 2.1.1 Pygments 2.4.2 pylint 2.3.1 pyOpenSSL 19.0.0 pyparsing 2.4.0 pyrsistent 0.14.11 PySocks 1.7.0 python-dateutil 2.8.0 pytz 2019.1 pywin32 223 pywinpty 0.5.5 PyYAML 5.1.1 pyzmq 18.0.0 QtAwesome 0.5.7 qtconsole 4.5.2 QtPy 1.8.0 requests 2.22.0 rope 0.14.0 ruamel-yaml 0.15.46 scikit-learn 0.21.3 scipy 1.3.1 Send2Trash 1.5.0 setuptools 41.0.1 six 1.12.0 snowballstemmer 1.9.0 soupsieve 1.9.2 Sphinx 2.1.2 sphinxcontrib-applehelp 1.0.1 sphinxcontrib-devhelp 1.0.1 sphinxcontrib-htmlhelp 1.0.2 sphinxcontrib-jsmath 1.0.1 sphinxcontrib-qthelp 1.0.2 sphinxcontrib-serializinghtml 1.1.3 spyder 3.3.6 spyder-kernels 0.5.1 terminado 0.8.2 testpath 0.4.2 tornado 6.0.3 tqdm 4.32.1 traitlets 4.3.2 urllib3 1.24.2 wcwidth 0.1.7 webencodings 0.5.1 wheel 0.33.4 widgetsnbextension 3.5.0 win-inet-pton 1.1.0 wincertstore 0.2 wrapt 1.11.2 ``` `pip list` says scipy is version 1.3.1, while `conda list` says it's version 1.3.0. 
Again, not sure how relevant it is, but seems strange EDIT 3: I got this error after putting the following lines (suggested by @Brennan) in my command prompt then running the file ``` pip uninstall scikit-learn pip uninstall scipy conda uninstall scikit-learn conda uninstall scipy conda update --all conda install scipy conda install scikit-learn ``` This is the new error I get when trying to import sklearn: ``` Traceback (most recent call last): File "<ipython-input-15-7135d3f24347>", line 1, in <module> runfile('C:/Users/julia/.spyder-py3/temp.py', wdir='C:/Users/julia/.spyder-py3') File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile execfile(filename, namespace) File "C:\ProgramData\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile exec(compile(f.read(), filename, 'exec'), namespace) File "C:/Users/julia/.spyder-py3/temp.py", line 2, in <module> import sklearn as skl File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\__init__.py", line 76, in <module> from .base import clone File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 13, in <module> import numpy as np File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\__init__.py", line 140, in <module> from . import _distributor_init File "C:\ProgramData\Anaconda3\lib\site-packages\numpy\_distributor_init.py", line 34, in <module> from . import _mklinit ImportError: DLL load failed: The specified module could not be found. ``` A possible cause of this might be me deleting the mkl\_rt.dll file from my Anaconda/Library/bin after encountering the error described here: <https://github.com/ContinuumIO/anaconda-issues/issues/10182> This puts me in a predicament, because reinstalling Anaconda to repair this will get me the same "ordinal 242 could not be located" error that I faced earlier, but not repairing it will continue the issue with sklearn... 
**FINAL EDIT: Solved by installing old version of Anaconda. Will mark as solved when I am able to (2 days)**
2019/08/13
[ "https://Stackoverflow.com/questions/57484399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7385274/" ]
I ended up fixing this by uninstalling my current version of Anaconda and installing a version from a few months ago. I didn't get the "ordinal 242" error nor the issues with scikit-learn.
I encountered the same error after letting my PC sit for 4 days unattended. Restarting the kernel solved it. This probably won't work for everyone, but it might save someone a little agony.
66,436,933
I am working with sequencing data and need to count the number of reads that match a grna library in Python. Simplified, my data looks like this: ``` reads = ['abc', 'abc','def', 'ghi'] grnas = ['abc', 'ghi'] ``` The grnas list is unique, while the reads list can contain entries that are not of interest and don't match the grnas, or are repeat entries. What I want to do is to reduce the list of reads to only contain those entries which match one entry in grnas. I am currently doing this with a list comprehension like this: ``` reads_matched = [read for read in reads if (read in grnas)] ``` In my example this would return the following for reads\_matched: ``` ['abc', 'abc', 'ghi'] ``` Since both of my lists are very large (6 million entries in reads and 80k entries in grnas) this of course takes some time to compute. Is there any way for me to speed this up further? I have tried writing it as a for loop or while loop in many different variations, but this is much slower than the method I listed above. In general I am very inexperienced with runtime improvements and have come to this solution through trial and error, so any tips to further improve would be appreciated!
2021/03/02
[ "https://Stackoverflow.com/questions/66436933", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8413867/" ]
Your case cannot do better than O(n). Using a single process, the best solution is to build the lookup structure once and reuse it: ``` grna_set = set(grnas) reads_matched = [x for x in reads if x in grna_set] ``` (`dict.fromkeys(grnas)` works equally well as the lookup table; just avoid writing `set(grnas)` inside the comprehension itself, since the condition is evaluated once per read and would rebuild the set every time). This is also a simple case to parallelize: split the input data into chunks, filter each chunk in its own worker, and append all the results together.
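To make the difference concrete, here is a small self-contained benchmark with made-up stand-in data (the real inputs in the question are 6 million reads and 80k gRNAs; the sizes here are scaled down so it runs in a second or two):

```python
import time

# Hypothetical stand-ins for the real sequencing data.
reads = ['abc', 'abc', 'def', 'ghi'] * 10_000
grnas = ['abc', 'ghi'] + ['fake_grna_%d' % i for i in range(1_000)]

# Baseline: membership test against a plain list scans it element by element.
t0 = time.perf_counter()
slow = [read for read in reads if read in grnas]
t_list = time.perf_counter() - t0

# Build the set once, outside the comprehension: O(1) average lookups.
grna_set = set(grnas)
t0 = time.perf_counter()
fast = [read for read in reads if read in grna_set]
t_set = time.perf_counter() - t0

assert slow == fast
print(len(fast))  # 30000 matched reads
print('list: %.3fs  set: %.3fs' % (t_list, t_set))
```

On inputs of this shape the set version is typically orders of magnitude faster, and the gap grows with the size of `grnas`.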
As the worst-case complexity of a single lookup in both `set` and `dict` in Python is `O(N)`, the complexity of the program could degrade to `O(N * M)`, so they are not guaranteed to be efficient here. Instead, use the `Counter` object, which precomputes all the counts in one pass and then answers each lookup in `O(1)`, so the whole program is done in `O(max(N, M))` complexity. ```py from collections import Counter reads = ['abc', 'abc','def', 'ghi'] grnas = ['abc', 'ghi'] count = Counter(reads) for g in grnas: print(g, count[g]) ```
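If what is needed is the filtered read list itself (duplicates preserved) rather than per-gRNA counts, the precomputed `Counter` can still be reused. Note this sketch groups the output by gRNA rather than keeping the original read order, which may or may not matter downstream:

```python
from collections import Counter

reads = ['abc', 'abc', 'def', 'ghi']
grnas = ['abc', 'ghi']

counts = Counter(reads)  # one pass over the reads
# Re-emit each matching read as many times as it occurred:
reads_matched = [g for g in grnas for _ in range(counts[g])]
print(reads_matched)  # ['abc', 'abc', 'ghi']
```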
58,774,718
I'm writing multi-process code, which runs perfectly in Python 3.7. Yet I want one of the parallel processes to execute an IO-bound step that takes forever, using AsyncIO in order to get better performance, but have not been able to get it to run. Ubuntu 18.04, Python 3.7, AsyncIO, pipenv (all pip libraries installed) The method in particular runs as expected using multithreading, which is what I want to replace with AsyncIO. I have googled and tried looping in the main() function and now only in the intended coroutine, have looked at examples and read about this new async way of getting things done, with no results so far. The following is the app.py code which is executed: **python app.py** ``` import sys import traceback import logging import asyncio from config import DEBUG from config import log_config from <some-module> import <some-class> if DEBUG: logging.config.dictConfig(log_config()) else: logging.basicConfig( level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s') logger = logging.getLogger(__name__) def main(): try: <some> = <some-class>([ 'some-data1.csv', 'some-data2.csv' ]) <some>.run() except: traceback.print_exc() pdb.post_mortem() sys.exit(0) if __name__ == '__main__': asyncio.run(main()) ``` Here is the code where I have the given class defined: ``` _sql_client = SQLServer() _blob_client = BlockBlobStore() _keys = KeyVault() _data_source = _keys.fetch('some-data') # Multiprocessing _manager = mp.Manager() _ns = _manager.Namespace() def __init__(self, list_of_collateral_files: list) -> None: @timeit def _get_filter_collateral(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _get_hours(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _load_original_bids(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _merge_bids_with_hours(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _get_collaterial_per_month(self, ns: mp.managers.NamespaceProxy) -> None: @timeit def _calc_bid_per_path(self) -> None: 
@timeit def run(self) -> None: ``` The method containing the async code is here: ``` def _get_filter_collateral(self, ns: mp.managers.NamespaceProxy) -> None: all_files = self._blob_client.download_blobs(self._list_of_blob_files) _all_dfs = pd.DataFrame() async def read_task(file_: str) -> None: nonlocal _all_dfs df = pd.read_csv(StringIO(file_.content)) _all_dfs = _all_dfs.append(df, sort=False) tasks = [] loop = asyncio.new_event_loop() for file_ in all_files: tasks.append(asyncio.create_task(read_task(file_))) loop.run_until_complete(asyncio.wait(tasks)) loop.close() _all_dfs['TOU'] = _all_dfs['TOU'].map(lambda x: 'OFFPEAK' if x == 'OFF' else 'ONPEAK') ns.dfs = _all_dfs ``` And the method that calls the particular sequence and this async method is: ``` def run(self) -> None: extract = [] extract.append(mp.Process(target=self._get_filter_collateral, args=(self._ns, ))) extract.append(mp.Process(target=self._get_hours, args=(self._ns, ))) extract.append(mp.Process(target=self._load_original_bids, args=(self._ns, ))) # Start the parallel processes for process in extract: process.start() # Await for database process to end extract[1].join() extract[2].join() # Merge both database results self._merge_bids_with_hours(self._ns) extract[0].join() self._get_collaterial_per_month(self._ns) self._calc_bid_per_path() self._save_reports() self._upload_data() ``` These are the errors I get: ``` Process Process-2: Traceback (most recent call last): File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap self.run() File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py", line 99, in run self._target(*self._args, **self._kwargs) File "<some-path>/src/azure/application/utils/lib.py", line 10, in timed result = method(*args, **kwargs) File "<some-path>/src/azure/application/caiso/main.py", line 104, in _get_filter_collateral tasks.append(asyncio.create_task(read_task(file_))) File 
"<some-path>/.pyenv/versions/3.7.4/lib/python3.7/asyncio/tasks.py", line 350, in create_task loop = events.get_running_loop() RuntimeError: no running event loop <some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py:313: RuntimeWarning: coroutine '<some-class>._get_filter_collateral.<locals>.read_task' was never awaited traceback.print_exc() RuntimeWarning: Enable tracemalloc to get the object allocation traceback DEBUG Calculating monthly collateral... Traceback (most recent call last): File "app.py", line 25, in main caiso.run() File "<some-path>/src/azure/application/utils/lib.py", line 10, in timed result = method(*args, **kwargs) File "<some-path>/src/azure/application/caiso/main.py", line 425, in run self._get_collaterial_per_month(self._ns) File "<some-path>/src/azure/application/utils/lib.py", line 10, in timed result = method(*args, **kwargs) File "<some-path>/src/azure/application/caiso/main.py", line 196, in _get_collaterial_per_month credit_margin = ns.dfs File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/managers.py", line 1122, in __getattr__ return callmethod('__getattribute__', (key,)) File "<some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/managers.py", line 834, in _callmethod raise convert_to_error(kind, result) AttributeError: 'Namespace' object has no attribute 'dfs' > <some-path>/.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/managers.py(834)_callmethod() -> raise convert_to_error(kind, result) (Pdb) ```
2019/11/08
[ "https://Stackoverflow.com/questions/58774718", "https://Stackoverflow.com", "https://Stackoverflow.com/users/982446/" ]
From the *Traceback* log it looks like you are trying to add tasks to an *event loop* that is not running. > > /.pyenv/versions/3.7.4/lib/python3.7/multiprocessing/process.py:313: > RuntimeWarning: coroutine > '.\_get\_filter\_collateral..read\_task' **was never > awaited** > > > The *loop* was just created and is not running yet, therefore `asyncio` is unable to attach tasks to it. The following example reproduces the same result, adding tasks and then trying to `await` for all of them to finish: ``` import asyncio async def func(num): print('My name is func {0}...'.format(num)) loop = asyncio.get_event_loop() tasks = list() for i in range(5): tasks.append(asyncio.create_task(func(i))) loop.run_until_complete(asyncio.wait(tasks)) loop.close() ``` Results with: ``` Traceback (most recent call last): File "C:/tmp/stack_overflow.py", line 42, in <module> tasks.append(asyncio.create_task(func(i))) File "C:\Users\Amiram\AppData\Local\Programs\Python\Python37-32\lib\asyncio\tasks.py", line 324, in create_task loop = events.get_running_loop() RuntimeError: no running event loop sys:1: RuntimeWarning: coroutine 'func' was never awaited ``` Nonetheless the solution is pretty simple: you just need to add the tasks to the created loop yourself, instead of asking `asyncio` to do it. The only change needed is in the following line: ``` tasks.append(asyncio.create_task(func(i))) ``` Create the task through the newly created *loop* instead of through `asyncio`; you can do that because this is your own loop, unlike `asyncio`, which searches for a running one. 
So the new line should look like this: ``` tasks.append(loop.create_task(func(i))) ``` Another solution is to run an *async* function and create the tasks there, for example (because that loop is already running, `asyncio` is able to attach tasks to it): ``` import asyncio async def func(num): print('Starting func {0}...'.format(num)) await asyncio.sleep(0.1) print('Ending func {0}...'.format(num)) loop = asyncio.get_event_loop() async def create_tasks_func(): tasks = list() for i in range(5): tasks.append(asyncio.create_task(func(i))) await asyncio.wait(tasks) loop.run_until_complete(create_tasks_func()) loop.close() ``` This simple change results in: ``` Starting func 0... Starting func 1... Starting func 2... Starting func 3... Starting func 4... Ending func 0... Ending func 2... Ending func 4... Ending func 1... Ending func 3... ```
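Since the question targets Python 3.7, a third option worth noting is to avoid manual loop management entirely: `asyncio.run` creates, runs, and closes the loop itself, and inside the running coroutine `asyncio.create_task` can find it. The `func` coroutine here is just an illustrative stand-in:

```python
import asyncio

async def func(num):
    await asyncio.sleep(0.01)   # pretend to do some IO work
    return num * 2

async def main():
    # A loop is running here, so asyncio.create_task succeeds.
    tasks = [asyncio.create_task(func(i)) for i in range(5)]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())   # creates, runs, and closes the loop
print(results)  # [0, 2, 4, 6, 8]
```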
Use `asyncio.ensure_future` instead. See <https://docs.python.org/3/library/asyncio-future.html#asyncio.ensure_future>
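A minimal sketch of that approach, assuming the per-process pattern from the question (a fresh loop plus a batch of coroutines; the file names and the `read_task` body are made up): `asyncio.ensure_future` accepts an explicit `loop` argument, so unlike `asyncio.create_task` it can schedule work on a loop that is not running yet.

```python
import asyncio

async def read_task(name):
    await asyncio.sleep(0.01)   # stand-in for the real CSV parsing
    return 'parsed ' + name

loop = asyncio.new_event_loop()
# ensure_future can target a specific, not-yet-running loop:
futures = [asyncio.ensure_future(read_task(n), loop=loop)
           for n in ('a.csv', 'b.csv')]
results = loop.run_until_complete(asyncio.gather(*futures))
loop.close()
print(results)  # ['parsed a.csv', 'parsed b.csv']
```

`asyncio.gather` picks up the loop from the futures it is given, so the whole batch runs on the freshly created loop inside the worker process.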
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
Ok, wrote an easy bit. Will probably spend some time to write a complete minor mode. For time being the following function will send current line (or region if the mark is active). Does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) pbuf min max command) (unless proc (let ((currbuff (current-buffer))) (shell) (switch-to-buffer currbuff) (setq proc (get-process "shell")) )) (setq pbuff (process-buffer proc)) (if (use-region-p) (setq min (region-beginning) max (region-end)) (setq min (point-at-bol) max (point-at-eol))) (setq command (concat (buffer-substring min max) "\n")) (with-current-buffer pbuff (goto-char (process-mark proc)) (insert command) (move-marker (process-mark proc) (point)) ) ;;pop-to-buffer does not work with save-current-buffer -- bug? (process-send-string proc command) (display-buffer (process-buffer proc) t) (when step (goto-char max) (next-line)) )) (defun sh-send-line-or-region-and-step () (interactive) (sh-send-line-or-region t)) (defun sh-switch-to-process-buffer () (interactive) (pop-to-buffer (process-buffer (get-process "shell")) t)) (define-key sh-mode-map [(control ?j)] 'sh-send-line-or-region-and-step) (define-key sh-mode-map [(control ?c) (control ?z)] 'sh-switch-to-process-buffer) ``` Enjoy.
`M-x` `append-to-buffer` `RET`
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
M-x shell-command-on-region aka. `M-|`
Here is another solution from [this post](https://superuser.com/questions/688829/evaluate-a-python-one-line-statement-in-gnu-emacs/688834#688834). Just copying it for convenience. The print statement is key here. ``` (add-hook 'python-mode-hook 'my-python-send-statement) (defun my-python-send-statement () (interactive) (local-set-key [C-return] 'my-python-send-statement) (end-of-line) (set-mark (line-beginning-position)) (call-interactively 'python-shell-send-region) (python-shell-send-string "; print()")) ```
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
``` (defun shell-region (start end) "execute region in an inferior shell" (interactive "r") (shell-command (buffer-substring-no-properties start end))) ```
Modifying Jurgens answer above to operate on a specific buffer gives the following function, which will send the region and then switch to the buffer, displaying it in another window, the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send(start end) (interactive "r") ;;Make the custom function interactive and operative on a region (append-to-buffer (get-buffer "*PYTHON*") start end) ;;append to the buffer named *PYTHON* (switch-to-buffer-other-window (get-buffer "*PYTHON*")) ;;switches to the buffer (execute-kbd-macro "\C-m")) ;;sends the enter keystroke to the shell ```
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
``` (defun shell-region (start end) "execute region in an inferior shell" (interactive "r") (shell-command (buffer-substring-no-properties start end))) ```
`M-x` `append-to-buffer` `RET`
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
Ok, wrote an easy bit. Will probably spend some time to write a complete minor mode. For time being the following function will send current line (or region if the mark is active). Does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) pbuf min max command) (unless proc (let ((currbuff (current-buffer))) (shell) (switch-to-buffer currbuff) (setq proc (get-process "shell")) )) (setq pbuff (process-buffer proc)) (if (use-region-p) (setq min (region-beginning) max (region-end)) (setq min (point-at-bol) max (point-at-eol))) (setq command (concat (buffer-substring min max) "\n")) (with-current-buffer pbuff (goto-char (process-mark proc)) (insert command) (move-marker (process-mark proc) (point)) ) ;;pop-to-buffer does not work with save-current-buffer -- bug? (process-send-string proc command) (display-buffer (process-buffer proc) t) (when step (goto-char max) (next-line)) )) (defun sh-send-line-or-region-and-step () (interactive) (sh-send-line-or-region t)) (defun sh-switch-to-process-buffer () (interactive) (pop-to-buffer (process-buffer (get-process "shell")) t)) (define-key sh-mode-map [(control ?j)] 'sh-send-line-or-region-and-step) (define-key sh-mode-map [(control ?c) (control ?z)] 'sh-switch-to-process-buffer) ``` Enjoy.
Here is another solution from [this post](https://superuser.com/questions/688829/evaluate-a-python-one-line-statement-in-gnu-emacs/688834#688834). Just copying it for convenience. The print statement is key here. ``` (add-hook 'python-mode-hook 'my-python-send-statement) (defun my-python-send-statement () (interactive) (local-set-key [C-return] 'my-python-send-statement) (end-of-line) (set-mark (line-beginning-position)) (call-interactively 'python-shell-send-region) (python-shell-send-string "; print()")) ```
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
I wrote a package that sends/pipes lines or regions of code to shell processes, basically something similar that ESS is for R. It also allows for multiple shell processes to exist, and lets you choose which one to send the region to. Have a look here: <http://www.emacswiki.org/emacs/essh>
Modifying Jurgens answer above to operate on a specific buffer gives the following function, which will send the region and then switch to the buffer, displaying it in another window, the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send(start end) (interactive "r") ;;Make the custom function interactive and operative on a region (append-to-buffer (get-buffer "*PYTHON*") start end) ;;append to the buffer named *PYTHON* (switch-to-buffer-other-window (get-buffer "*PYTHON*")) ;;switches to the buffer (execute-kbd-macro "\C-m")) ;;sends the enter keystroke to the shell ```
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
I wrote a package that sends/pipes lines or regions of code to shell processes, basically something similar that ESS is for R. It also allows for multiple shell processes to exist, and lets you choose which one to send the region to. Have a look here: <http://www.emacswiki.org/emacs/essh>
**Update** The above (brilliant and useful) answers look a bit incomplete as of mid-2020: `sh-mode` has a function for sending shell region to non-interactive shell with output in the minibuffer called `sh-send-line-or-region-and-step`. Alternatively: click `Shell-script` in the mode bar at the bottom of the window, then `Mouse-1`, then `Execute region`. The output is sent to the minibuffer and `#<*Messages*>`. If minibuffer output is not enough, there are referenced techniques to redirect output to other buffers (not only the shell one, see for example ["How to redirect message/echo output to a buffer in Emacs?"](https://stackoverflow.com/questions/21524488/how-to-redirect-message-echo-output-to-a-buffer-in-emacs)). You can also execute all the script with `C-c C-x`. Voilà.
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
Ok, wrote an easy bit. Will probably spend some time to write a complete minor mode. For time being the following function will send current line (or region if the mark is active). Does quite a good job for me: ``` (defun sh-send-line-or-region (&optional step) (interactive ()) (let ((proc (get-process "shell")) pbuf min max command) (unless proc (let ((currbuff (current-buffer))) (shell) (switch-to-buffer currbuff) (setq proc (get-process "shell")) )) (setq pbuff (process-buffer proc)) (if (use-region-p) (setq min (region-beginning) max (region-end)) (setq min (point-at-bol) max (point-at-eol))) (setq command (concat (buffer-substring min max) "\n")) (with-current-buffer pbuff (goto-char (process-mark proc)) (insert command) (move-marker (process-mark proc) (point)) ) ;;pop-to-buffer does not work with save-current-buffer -- bug? (process-send-string proc command) (display-buffer (process-buffer proc) t) (when step (goto-char max) (next-line)) )) (defun sh-send-line-or-region-and-step () (interactive) (sh-send-line-or-region t)) (defun sh-switch-to-process-buffer () (interactive) (pop-to-buffer (process-buffer (get-process "shell")) t)) (define-key sh-mode-map [(control ?j)] 'sh-send-line-or-region-and-step) (define-key sh-mode-map [(control ?c) (control ?z)] 'sh-switch-to-process-buffer) ``` Enjoy.
Do you want the command to be executed automatically, or just entered into the command line in preparation? `M-x` `append-to-buffer` `RET` will enter the selected text into the specified buffer at point, but the command would not be executed by the shell. A wrapper function for that could automatically choose `*shell*` for the buffer (or more smartly select/prompt based on current buffers in shell-mode), and then call append-to-buffer. You could trivially record a keyboard macro to copy the region, switch to `*shell*`, yank, and enter (if required). `F3``M-w``C-x``b``*shell*``RET``C-y``RET``F4` `C-x``C-k``n``my-execute-region-in-shell``RET` `M-x``insert-kbd-macro``RET``my-execute-region-in-shell``RET` `(global-set-key (kbd "C-c e") 'my-execute-region-in-shell)`
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
M-x shell-command-on-region aka. `M-|`
Modifying Jurgen's answer above to operate on a specific buffer gives the following function, which sends the region and then switches to the buffer, displaying it in another window; the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send (start end) (interactive "r") ;; make the function interactive and operate on a region (append-to-buffer (get-buffer "*PYTHON*") start end) ;; append to the buffer named *PYTHON* (switch-to-buffer-other-window (get-buffer "*PYTHON*")) ;; switch to the buffer (execute-kbd-macro "\C-m")) ;; send the Enter keystroke to the shell ```
6,286,579
Is there some module or command that'll let me send the current region to shell? I want to have something like Python-mode's `python-send-region` which sends the selected region to the currently running Python shell.
2011/06/08
[ "https://Stackoverflow.com/questions/6286579", "https://Stackoverflow.com", "https://Stackoverflow.com/users/419116/" ]
`M-x` `append-to-buffer` `RET`
Modifying Jurgen's answer above to operate on a specific buffer gives the following function, which sends the region and then switches to the buffer, displaying it in another window; the buffer named *PYTHON* is used for illustration. The target buffer should already be running a shell. ``` (defun p-send (start end) (interactive "r") ;; make the function interactive and operate on a region (append-to-buffer (get-buffer "*PYTHON*") start end) ;; append to the buffer named *PYTHON* (switch-to-buffer-other-window (get-buffer "*PYTHON*")) ;; switch to the buffer (execute-kbd-macro "\C-m")) ;; send the Enter keystroke to the shell ```
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGBRegressor`: ``` model.feature_importances_ AttributeError Traceback (most recent call last) <ipython-input-36-fbaa36f9f167> in <module>() ----> 1 model.feature_importances_ AttributeError: 'XGBRegressor' object has no attribute 'feature_importances_' ``` The weird thing is: For a collaborator of mine the attribute `feature_importances_` is there! What could be the issue? These are the versions I have: ``` In [2]: xgboost.__version__ Out[2]: '0.6' In [4]: sklearn.__version__ Out[4]: '0.18.1' ``` ... and the xgboost C++ library from github, commit `ef8d92fc52c674c44b824949388e72175f72e4d1`.
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
How did you install xgboost? Did you build the package after cloning it from GitHub, as described in the docs? <http://xgboost.readthedocs.io/en/latest/build.html> As in this answer: [Feature Importance with XGBClassifier](https://stackoverflow.com/questions/38212649/feature-importance-with-xgbclassifier) There always seems to be a problem with the pip installation of xgboost; building and installing it from source seems to help.
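If you want code that works with either build in the meantime, one option is a small fallback helper. This is a sketch, not part of xgboost's API — `importances` is a hypothetical name:

```python
def importances(model):
    # Prefer the sklearn-style attribute when the installed xgboost build
    # provides it; otherwise fall back to the underlying booster's scores.
    if hasattr(model, "feature_importances_"):
        return model.feature_importances_
    return model.get_booster().get_score(importance_type="weight")
```

Either way you get per-feature importance information, just in a different shape (an array versus a feature-name-to-count dict).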
This may be useful for you: `xgb.plot_importance(bst)` And this is the link: [plot](http://xgboost.readthedocs.io/en/latest/python/python_intro.html#plotting)
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGBRegressor`: ``` model.feature_importances_ AttributeError Traceback (most recent call last) <ipython-input-36-fbaa36f9f167> in <module>() ----> 1 model.feature_importances_ AttributeError: 'XGBRegressor' object has no attribute 'feature_importances_' ``` The weird thing is: For a collaborator of mine the attribute `feature_importances_` is there! What could be the issue? These are the versions I have: ``` In [2]: xgboost.__version__ Out[2]: '0.6' In [4]: sklearn.__version__ Out[4]: '0.18.1' ``` ... and the xgboost C++ library from github, commit `ef8d92fc52c674c44b824949388e72175f72e4d1`.
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
How did you install xgboost? Did you build the package after cloning it from GitHub, as described in the docs? <http://xgboost.readthedocs.io/en/latest/build.html> As in this answer: [Feature Importance with XGBClassifier](https://stackoverflow.com/questions/38212649/feature-importance-with-xgbclassifier) There always seems to be a problem with the pip installation of xgboost; building and installing it from source seems to help.
This worked for me: ```py model.get_booster().get_score(importance_type='weight') ``` hope it helps
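Since `get_score` returns a plain dict mapping feature names to counts, ranking the features is just a sort. A sketch with a stand-in dict (the real keys and values depend on your trained model):

```python
# Stand-in for the dict returned by model.get_booster().get_score(importance_type='weight')
scores = {"f0": 12, "f1": 30, "f2": 7}

# Rank features from most used to least used in tree splits
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('f1', 30), ('f0', 12), ('f2', 7)]
```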
41,565,091
I'm calling xgboost via its scikit-learn-style Python interface: ``` model = xgboost.XGBRegressor() %time model.fit(trainX, trainY) testY = model.predict(testX) ``` Some sklearn models tell you which importance they assign to features via the attribute `feature_importances`. This doesn't seem to exist for the `XGBRegressor`: ``` model.feature_importances_ AttributeError Traceback (most recent call last) <ipython-input-36-fbaa36f9f167> in <module>() ----> 1 model.feature_importances_ AttributeError: 'XGBRegressor' object has no attribute 'feature_importances_' ``` The weird thing is: For a collaborator of mine the attribute `feature_importances_` is there! What could be the issue? These are the versions I have: ``` In [2]: xgboost.__version__ Out[2]: '0.6' In [4]: sklearn.__version__ Out[4]: '0.18.1' ``` ... and the xgboost C++ library from github, commit `ef8d92fc52c674c44b824949388e72175f72e4d1`.
2017/01/10
[ "https://Stackoverflow.com/questions/41565091", "https://Stackoverflow.com", "https://Stackoverflow.com/users/626537/" ]
This worked for me: ```py model.get_booster().get_score(importance_type='weight') ``` hope it helps
This may be useful for you: `xgb.plot_importance(bst)` And this is the link: [plot](http://xgboost.readthedocs.io/en/latest/python/python_intro.html#plotting)
73,966,292
By "Google Batch" I'm referring to the new service Google launched about a month or so ago. <https://cloud.google.com/batch> I have a Python script which takes a few minutes to execute at the moment. However, with the data it will soon be processing in the next few months, this execution time will go from minutes to **hours**. This is why I am not using Cloud Function or Cloud Run to run this script; both of these have a max 60-minute execution time. Google Batch came about recently and I wanted to explore this as a possible method to achieve what I'm looking for **without** just using Compute Engine. However, documentation is sparse across the internet and I can't find a method to "trigger" an already created Batch job by using Cloud Scheduler. I've already successfully manually created a batch job which runs my docker image. Now I need something to trigger this batch job once a day, **that's it**. It would be wonderful if Cloud Scheduler could serve this purpose. I've seen 1 article describing using GCP Workflow to create a new Batch job on a cron determined by Cloud Scheduler. The issue with this is that it's creating a new batch job every time, not simply **re-running** the already existing one. To be honest, I can't even re-run an already executed batch job on the GCP website itself, so I don't know if it's even possible. <https://www.intertec.io/resource/python-script-on-gcp-batch> Lastly, I've even explored the official Google Batch Python library and could not find anywhere in there a built-in function which allows me to "call" a previously created batch job and just re-run it. <https://github.com/googleapis/python-batch>
2022/10/05
[ "https://Stackoverflow.com/questions/73966292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13379101/" ]
EDIT -- added the "55 difference columns" part at the bottom. --- Adjusting data to be column pairs: ``` df <- data.frame(matrix(sample(1:10, 200, replace = TRUE), ncol = 20, nrow = 10)) names(df) <- paste0("var", rep(1:10, each = 2), "_", rep(c("apple", "banana"))) names(df) [1] "var1_apple" "var1_banana" "var2_apple" "var2_banana" "var3_apple" "var3_banana" [7] "var4_apple" "var4_banana" "var5_apple" "var5_banana" "var6_apple" "var6_banana" [13] "var7_apple" "var7_banana" "var8_apple" "var8_banana" "var9_apple" "var9_banana" [19] "var10_apple" "var10_banana" ``` --- ``` library(tidyverse) df %>% mutate(row = row_number()) %>% pivot_longer(-row, names_to = c("var", ".value"), names_sep = "_") # A tibble: 100 × 4 row var apple banana <int> <chr> <int> <int> 1 1 var1 8 7 2 1 var2 4 9 3 1 var3 7 3 4 1 var4 6 10 5 1 var5 10 10 6 1 var6 1 1 7 1 var7 2 10 8 1 var8 7 9 9 1 var9 3 8 10 1 var10 2 6 # … with 90 more rows # ℹ Use `print(n = ...)` to see more rows ``` --- Here is a variation to add all the difference columns interspersed: ``` df %>% mutate(row = row_number()) %>% pivot_longer(-row, names_to = c("var", ".value"), names_sep = "_") %>% mutate(difference = banana - apple) %>% pivot_wider(names_from = var, values_from = apple:difference, names_glue = "{var}_{.value}", names_vary = "slowest") ``` Result (truncated) ``` # A tibble: 10 × 10 row var1_apple var1_banana var1_difference var2_apple var2_banana var2_difference var3_apple var3_banana var3_difference <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> 1 1 7 10 3 5 3 -2 1 9 8 2 2 9 2 -7 3 6 3 8 1 -7 3 3 2 10 8 3 3 0 7 8 1 4 4 3 1 -2 8 3 -5 9 9 0 5 5 2 7 5 7 10 3 6 9 3 6 6 5 4 -1 2 1 -1 5 4 -1 7 7 4 5 1 10 3 -7 9 4 -5 8 8 10 7 -3 3 2 -1 5 9 4 9 9 5 5 0 7 3 -4 10 7 -3 10 10 10 6 -4 1 4 3 10 10 0 ```
I think @Tom's comment is spot-on. Restructuring the data probably makes sense if you are working with paired data. E.g.: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] data.frame( odd = unlist(df[od]), oddname = rep(od,each=nrow(df)), even = unlist(df[ev]), evenname = rep(ev,each=nrow(df)) ) ## odd oddname even evenname ##X11 7 X1 10 X2 ##X12 6 X1 1 X2 ##X13 2 X1 6 X2 ##X14 5 X1 2 X2 ##X15 3 X1 1 X2 ## ... ``` It is then trivial to take one column from another in this structure. If you must have the matrix-like output, then that is also achievable: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] setNames(df[od] - df[ev], paste(od, ev, sep="_")) ## X1_X2 X3_X4 X5_X6 X7_X8 X9_X10 X11_X12 X13_X14 X15_X16 X17_X18 X19_X20 ##1 -3 2 4 4 -2 4 3 1 -3 9 ##2 5 5 4 3 -1 3 -1 -3 5 -2 ##3 -4 3 7 4 -5 1 1 5 -4 4 ##4 3 0 6 3 4 -5 6 6 -7 4 ##5 2 2 1 4 -6 -3 6 2 3 1 ##6 -6 -2 4 -2 0 1 3 0 0 -7 ##7 0 -6 3 7 -1 0 0 -5 3 1 ##8 -1 3 3 1 2 -2 -5 3 0 0 ##9 -4 1 -5 -2 -4 7 6 -2 4 -4 ##10 2 -7 4 -1 0 -6 -4 -4 0 0 ```
73,966,292
By "Google Batch" I'm referring to the new service Google launched about a month or so ago. <https://cloud.google.com/batch> I have a Python script which takes a few minutes to execute at the moment. However, with the data it will soon be processing in the next few months, this execution time will go from minutes to **hours**. This is why I am not using Cloud Function or Cloud Run to run this script; both of these have a max 60-minute execution time. Google Batch came about recently and I wanted to explore this as a possible method to achieve what I'm looking for **without** just using Compute Engine. However, documentation is sparse across the internet and I can't find a method to "trigger" an already created Batch job by using Cloud Scheduler. I've already successfully manually created a batch job which runs my docker image. Now I need something to trigger this batch job once a day, **that's it**. It would be wonderful if Cloud Scheduler could serve this purpose. I've seen 1 article describing using GCP Workflow to create a new Batch job on a cron determined by Cloud Scheduler. The issue with this is that it's creating a new batch job every time, not simply **re-running** the already existing one. To be honest, I can't even re-run an already executed batch job on the GCP website itself, so I don't know if it's even possible. <https://www.intertec.io/resource/python-script-on-gcp-batch> Lastly, I've even explored the official Google Batch Python library and could not find anywhere in there a built-in function which allows me to "call" a previously created batch job and just re-run it. <https://github.com/googleapis/python-batch>
2022/10/05
[ "https://Stackoverflow.com/questions/73966292", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13379101/" ]
Here is a SAS solution. As far as I understood your data looks like that: I tried to generate an example for 4 patients and 6 proteins, and two values for each protein (nasal and plasma). ``` data proteinvalues; patient=1; protein1_nas=12; protein1_plas=13; protein2_nas=6; protein2_plas=8; protein3_nas=23; protein3_plas=24; protein4_nas=15; protein4_plas=15; protein5_nas=45; protein5_plas=47; protein6_nas=56; protein6_plas=50; output; patient=2; protein1_nas=1; protein1_plas=5; protein2_nas=6; protein2_plas=8; protein3_nas=2; protein3_plas=4; protein4_nas=7; protein4_plas=9; protein5_nas=3; protein5_plas=3; protein6_nas=8; protein6_plas=7; output; patient=3; protein1_nas=0; protein1_plas=1; protein2_nas=20; protein2_plas=19; protein3_nas=33; protein3_plas=5; protein4_nas=19; protein4_plas=20; protein5_nas=32; protein5_plas=8; protein6_nas=12; protein6_plas=14; output; patient=4; protein1_nas=4; protein1_plas=5; protein2_nas=9; protein2_plas=11; protein3_nas=4; protein3_plas=6; protein4_nas=78; protein4_plas=70; protein5_nas=4; protein5_plas=7; protein6_nas=78; protein6_plas=77; output; run; ``` OK, this data generating step is not quite elegant, but it works... Here is my solution by using an array with all protein-variables as members. The value pairs are then adjacent members in the array, i.e. No.1/No.2 , No.3/No.4 etc... 
``` /* Number of proteins, in your data 55, in my data 6 */ %let NUM_PROTEINS=6; data result (keep=patient diff:); set proteinvalues; /* Array definition with all variable names starting with "protein" */ array protarr{*} protein:; /* Array for the resulting values */ array diff(&NUM_PROTEINS.); /* Initializing the protein number */ num_prot=0; /* Loop over the protein by step 2 = one iteration per protein */ do n=1 to dim(protarr) by 2; /* p is plasma, n is nasal value */ p=n+1; /* setting the protein number */ num_prot+1; /* calculate the difference, using sum-function to handle missing values */ diff{num_prot}=sum(protarr{p},-protarr{n}); end; run; ```
I think @Tom's comment is spot-on. Restructuring the data probably makes sense if you are working with paired data. E.g.: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] data.frame( odd = unlist(df[od]), oddname = rep(od,each=nrow(df)), even = unlist(df[ev]), evenname = rep(ev,each=nrow(df)) ) ## odd oddname even evenname ##X11 7 X1 10 X2 ##X12 6 X1 1 X2 ##X13 2 X1 6 X2 ##X14 5 X1 2 X2 ##X15 3 X1 1 X2 ## ... ``` It is then trivial to take one column from another in this structure. If you must have the matrix-like output, then that is also achievable: ``` od <- names(df)[c(TRUE,FALSE)] ev <- names(df)[c(FALSE,TRUE)] setNames(df[od] - df[ev], paste(od, ev, sep="_")) ## X1_X2 X3_X4 X5_X6 X7_X8 X9_X10 X11_X12 X13_X14 X15_X16 X17_X18 X19_X20 ##1 -3 2 4 4 -2 4 3 1 -3 9 ##2 5 5 4 3 -1 3 -1 -3 5 -2 ##3 -4 3 7 4 -5 1 1 5 -4 4 ##4 3 0 6 3 4 -5 6 6 -7 4 ##5 2 2 1 4 -6 -3 6 2 3 1 ##6 -6 -2 4 -2 0 1 3 0 0 -7 ##7 0 -6 3 7 -1 0 0 -5 3 1 ##8 -1 3 3 1 2 -2 -5 3 0 0 ##9 -4 1 -5 -2 -4 7 6 -2 4 -4 ##10 2 -7 4 -1 0 -6 -4 -4 0 0 ```
73,937,555
I have a folder named `deployment`, under which there are two sibling folders: `folder1` and `folder2`. I need to move folder2 with its sub-contents into folder1 with Python scripts, so from: ``` .../deployment/folder1/... /folder2/... ``` to ``` .../deployment/folder1/... /folder1/folder2/... ``` I know how to copy folders and jobs in Jenkins, MANUALLY, and I need to copy tens of folders to a new folder programmatically, e.g. with Python scripting. I tried with the code: ``` import jenkins server = jenkins.Jenkins('https://comp.com/job/deployment', username='xxxx', password='******') server.copy_job('folder2', 'folder1/folder2') ``` The code returns: **JenkinsException: copy[folder2 to folder1/folder2] failed, source and destination folder must be the same** How can I get this done?
2022/10/03
[ "https://Stackoverflow.com/questions/73937555", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4483819/" ]
Settings are empty, maybe they are not exported correctly. Check your settings file.
I think you're not using the MongoDB find API call properly; find usually takes a filter object and, as a second argument, an object of properties. Check the syntax required for the find() function and you'll probably get through it. Hope it helps. Happy coding!!
41,836,353
I have a project in which I run multiple data through a specific function that `"cleans"` them. The cleaning function looks like this: Misc.py ``` def clean(my_data): sys.stdout.write("Cleaning genes...\n") synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms() clean_data = {} for g in list(my_data): if g in synonyms: # Found a data point which appears in the synonym list. #print synonyms[g] for synonym in synonyms[g]: if synonym in my_data: del my_data[synonym] clean_data[g] = synonym sys.stdout.write("\t%s is also known as %s\n" % (g, clean_data[g])) return my_data ``` `FileIO` is a custom class I made to open files. My question is, this function will be called many times throughout the program's life cycle. What I want to achieve is to not have to read input\_data every time, since it's going to be the same every time. I know that I can just return it and pass it as an argument in this way: ``` def clean(my_data, synonyms=None): if synonyms is None: ... else ... ``` But is there another, better-looking way of doing this? My file structure is the following: ``` lib Misc.py FileIO.py __init__.py ... raw_data runme.py ``` From `runme.py`, I do this `from lib import *` and call all the functions I made. Is there a pythonic way to go about this? Like a 'memory' for the function. Edit: this line: `synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms()` returns a `collections.OrderedDict()` from `input_data`, using the 3rd column as the key of the dictionary. The dictionary for the following dataset: ``` column1 column2 key data ... ... A B|E|Z ... ... B F|W ... ... C G|P ... ``` Will look like this: ``` OrderedDict([('A',['B','E','Z']), ('B',['F','W']), ('C',['G','P'])]) ``` This tells my script that `A` is also known as `B,E,Z`. `B` as `F,W`. etc... So these are the synonyms. Since the synonyms list will never change throughout the life of the code, I want to just read it once and re-use it.
2017/01/24
[ "https://Stackoverflow.com/questions/41836353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3008400/" ]
Use a class with a \_\_call\_\_ operator. You can call objects of this class and store data between calls in the object. Some data probably can best be saved by the constructor. What you've made this way is known as a 'functor' or 'callable object'. Example: ``` class Incrementer: def __init__ (self, increment): self.increment = increment def __call__ (self, number): return self.increment + number incrementerBy1 = Incrementer (1) incrementerBy2 = Incrementer (2) print (incrementerBy1 (3)) print (incrementerBy2 (3)) ``` Output: ``` 4 5 ``` [EDIT] Note that you can combine the answer of @Tagc with my answer to create exactly what you're looking for: a 'function' with built-in memory. Name your class `Clean` rather than `DataCleaner` and the name the instance `clean`. Name the method `__call__` rather than `clean`.
I think the cleanest way to do this would be to decorate your "`clean`" (pun intended) function with another function that provides the `synonyms` local for the function. This is, IMO, cleaner and more concise than creating another custom class, yet still allows you to easily change the "input\_data" file if you need to (factory function). Note that the file is read once, at decoration time, not on every call: ``` def defineSynonyms(datafile): def wrap(func): synonyms = FileIO(datafile, 3, header=False).openSynonyms() # read once def wrapped(*args, **kwargs): kwargs['synonyms'] = synonyms return func(*args, **kwargs) return wrapped return wrap @defineSynonyms("raw_data/input_data") def clean(my_data, synonyms={}): # do stuff with synonyms and my_data... pass ```
41,836,353
I have a project in which I run multiple data through a specific function that `"cleans"` them. The cleaning function looks like this: Misc.py ``` def clean(my_data): sys.stdout.write("Cleaning genes...\n") synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms() clean_data = {} for g in list(my_data): if g in synonyms: # Found a data point which appears in the synonym list. #print synonyms[g] for synonym in synonyms[g]: if synonym in my_data: del my_data[synonym] clean_data[g] = synonym sys.stdout.write("\t%s is also known as %s\n" % (g, clean_data[g])) return my_data ``` `FileIO` is a custom class I made to open files. My question is, this function will be called many times throughout the program's life cycle. What I want to achieve is to not have to read input\_data every time, since it's going to be the same every time. I know that I can just return it and pass it as an argument in this way: ``` def clean(my_data, synonyms=None): if synonyms is None: ... else ... ``` But is there another, better-looking way of doing this? My file structure is the following: ``` lib Misc.py FileIO.py __init__.py ... raw_data runme.py ``` From `runme.py`, I do this `from lib import *` and call all the functions I made. Is there a pythonic way to go about this? Like a 'memory' for the function. Edit: this line: `synonyms = FileIO("raw_data/input_data", 3, header=False).openSynonyms()` returns a `collections.OrderedDict()` from `input_data`, using the 3rd column as the key of the dictionary. The dictionary for the following dataset: ``` column1 column2 key data ... ... A B|E|Z ... ... B F|W ... ... C G|P ... ``` Will look like this: ``` OrderedDict([('A',['B','E','Z']), ('B',['F','W']), ('C',['G','P'])]) ``` This tells my script that `A` is also known as `B,E,Z`. `B` as `F,W`. etc... So these are the synonyms. Since the synonyms list will never change throughout the life of the code, I want to just read it once and re-use it.
2017/01/24
[ "https://Stackoverflow.com/questions/41836353", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3008400/" ]
> > Like a 'memory' for the function > > > Half-way to rediscovering object-oriented programming. Encapsulate the data cleaning logic in a class, such as `DataCleaner`. Make it so that instances read synonym data once when instantiated and then retain that information as part of their state. Have the class expose a `clean` method that operates on the data: ``` class FileIO(object): def __init__(self, file_path, some_num, header): pass def openSynonyms(self): return [] class DataCleaner(object): def __init__(self, synonym_file): self.synonyms = FileIO(synonym_file, 3, header=False).openSynonyms() def clean(self, data): for g in data: if g in self.synonyms: # ... pass if __name__ == '__main__': dataCleaner = DataCleaner('raw_data/input_file') dataCleaner.clean('some data here') dataCleaner.clean('some more data here') ``` As a possible future optimisation, you can expand on this approach to use a [factory method](https://en.wikipedia.org/wiki/Factory_method_pattern) to create instances of `DataCleaner` which can cache instances based on the synonym file provided (so you don't need to do expensive recomputation every time for the same file).
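The factory-method caching mentioned at the end can be sketched with `functools.lru_cache`. The `DataCleaner` below is a stub standing in for the class above, with the expensive file read replaced by a counter so the caching is visible:

```python
import functools

class DataCleaner:
    loads = 0  # counts how many times the "expensive" read actually runs

    def __init__(self, synonym_file):
        DataCleaner.loads += 1  # a real version would read the file here
        self.synonym_file = synonym_file

@functools.lru_cache(maxsize=None)
def get_cleaner(synonym_file):
    # One DataCleaner per file path; repeated calls reuse the cached instance.
    return DataCleaner(synonym_file)
```

Calling `get_cleaner('raw_data/input_file')` twice returns the same object, so the synonym file would only be read once.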
I think the cleanest way to do this would be to decorate your "`clean`" (pun intended) function with another function that provides the `synonyms` local for the function. This is, IMO, cleaner and more concise than creating another custom class, yet still allows you to easily change the "input\_data" file if you need to (factory function). Note that the file is read once, at decoration time, not on every call: ``` def defineSynonyms(datafile): def wrap(func): synonyms = FileIO(datafile, 3, header=False).openSynonyms() # read once def wrapped(*args, **kwargs): kwargs['synonyms'] = synonyms return func(*args, **kwargs) return wrapped return wrap @defineSynonyms("raw_data/input_data") def clean(my_data, synonyms={}): # do stuff with synonyms and my_data... pass ```
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using: ``` matchquotes = re.findall(r'"(?:\\"|.)*?"', text) ``` It works great but only finds quotes using this character: **"** However, I'm finding that some of the text I'm parsing uses these DIFFERENT characters: **“** ... **”** How can I modify my regex so that it will find either **"**..**"** or **“**..**”** or **"**..**”**
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Using character classes might work, or might break everything for you: ``` matchquotes = re.findall(r'[“”"](?:\\[“”"]|.)*?[“”"]', text) ``` If you don't care a lot about matching pairs always lining up, this will probably do what you want. The case where they use the third type inside the other two is always going to screw you unless you build a few patterns and find their intersection.
Depending on what other processing you are doing and where the text is coming from, it would be better to convert all quotation marks to " rather than handle each case.
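A sketch of that normalization idea: map the curly variants onto the plain `"` first, and then the original single-character pattern is enough (the two replacements below only cover the curly double-quote characters mentioned in the question):

```python
import re

def normalize_quotes(text):
    # Map the curly double quotes onto the plain ASCII one
    return text.replace(u"\u201c", u'"').replace(u"\u201d", u'"')

text = u"She said \u201chello there\u201d and then \"goodbye\"."
print(re.findall(r'"(?:\\"|.)*?"', normalize_quotes(text)))
# ['"hello there"', '"goodbye"']
```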
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using: ``` matchquotes = re.findall(r'"(?:\\"|.)*?"', text) ``` It works great but only finds quotes using this character: **"** However, I'm finding that some of the text I'm parsing uses these DIFFERENT characters: **“** ... **”** How can I modify my regex so that it will find either **"**..**"** or **“**..**”** or **"**..**”**
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Depending on what other processing you are doing and where the text is coming from, it would be better to convert all quotation marks to " rather than handle each case.
I am no expert, but for those types of 'fancy' quotes I would first get their codes (such as **\xe2\x80\x9c** or **\u2019**) from a table. Then I would try to match them by writing their regex codes. For that purpose this might be helpful: <http://www.regular-expressions.info/refunicode.html> I hope it helps!
13,152,085
Hi, I'm trying to use regex in Python 2.7 to search for text in between two quotation marks, such as "hello there". Right now I'm using: ``` matchquotes = re.findall(r'"(?:\\"|.)*?"', text) ``` It works great but only finds quotes using this character: **"** However, I'm finding that some of the text I'm parsing uses these DIFFERENT characters: **“** ... **”** How can I modify my regex so that it will find either **"**..**"** or **“**..**”** or **"**..**”**
2012/10/31
[ "https://Stackoverflow.com/questions/13152085", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1495000/" ]
Using character classes might work, or might break everything for you: ``` matchquotes = re.findall(r'[“”"](?:\\[“”"]|.)*?[“”"]', text) ``` If you don't care a lot about matching pairs always lining up, this will probably do what you want. The case where they use the third type inside the other two is always going to screw you unless you build a few patterns and find their intersection.
I am no expert, but for those types of 'fancy' quotes I would first get their codes (such as **\xe2\x80\x9c** or **\u2019**) from a table. Then I would try to match them by writing their regex codes. For that purpose this might be helpful: <http://www.regular-expressions.info/refunicode.html> I hope it helps!
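A sketch of that approach, putting the Unicode escapes for the curly quotes (`\u201c` and `\u201d`) straight into a character class alongside `"`:

```python
import re

# Opening quotes: straight " or left curly; closing quotes: straight " or right curly
pattern = re.compile(u'[\u201c"](.*?)[\u201d"]')

text = u'He said \u201chello there\u201d and then "bye".'
print(pattern.findall(text))  # ['hello there', 'bye']
```

Note that a single character class does not force the pairs to line up exactly; if a `“` must only be closed by a `”`, you would need an alternation of two separate patterns instead.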
71,232,402
First of all, thank you for the time you took to answer me. To give **a little example**, I have a huge dataset (n instances, 3 features) like this: `data = np.array([[7.0, 2.5, 3.1], [4.3, 8.8, 6.2], [1.1, 5.5, 9.9]])` It's labeled in another array: `label = np.array([0, 1, 0])` **Questions**: 1. I know that I can solve my problem by looping in Python (a for-loop), but I'm after a numpy way (without a for-loop) that is less time-consuming (doing it as fast as possible). 2. If there isn't a way without a for-loop, what would be the best one (M1, M2, any other wizardry method?)? **My solution**: ``` clusters = [] for lab in range(label.max()+1): # M1: creating new object c = data[label == lab] clusters.append([c.min(axis=0), c.max(axis=0)]) # M2: comparing multiple times (called views?) # clusters.append([data[label == lab].min(axis=0), data[label == lab].max(axis=0)]) print(clusters) # [[array([1.1, 2.5, 3.1]), array([7. , 5.5, 9.9])], [array([4.3, 8.8, 6.2]), array([4.3, 8.8, 6.2])]] ```
2022/02/23
[ "https://Stackoverflow.com/questions/71232402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18285419/" ]
You could start from an easier variant of this problem: ***Given `arr` and its label, could you find the minimum and maximum values of `arr` items in each group of labels?*** For instance: ``` arr = np.array([55, 7, 49, 65, 46, 75, 4, 54, 43, 54]) label = np.array([1, 3, 2, 0, 0, 2, 1, 1, 1, 2]) ``` Then you would expect the minimum and maximum values of `arr` in each label group to be: ``` min_values = np.array([46, 4, 49, 7]) max_values = np.array([65, 55, 75, 7]) ``` Here is a numpy approach to this kind of problem: ``` def groupby_minmax(arr, label, return_groups=False): arg_idx = np.argsort(label) arr_sort = arr[arg_idx] label_sort = label[arg_idx] div_points = np.r_[0, np.flatnonzero(np.diff(label_sort)) + 1] min_values = np.minimum.reduceat(arr_sort, div_points) max_values = np.maximum.reduceat(arr_sort, div_points) if return_groups: return min_values, max_values, label_sort[div_points] else: return min_values, max_values ``` Now there's not much to change in order to adapt it to your use case: ``` def groupby_minmax_OP(arr, label, return_groups=False): arg_idx = np.argsort(label) arr_sort = arr[arg_idx] label_sort = label[arg_idx] div_points = np.r_[0, np.flatnonzero(np.diff(label_sort)) + 1] min_values = np.minimum.reduceat(arr_sort, div_points, axis=0) max_values = np.maximum.reduceat(arr_sort, div_points, axis=0) if return_groups: return min_values, max_values, label_sort[div_points] else: return np.array([min_values, max_values]).swapaxes(0, 1) groupby_minmax_OP(data, label) ``` Output: ``` array([[[1.1, 2.5, 3.1], [7. , 5.5, 9.9]], [[4.3, 8.8, 6.2], [4.3, 8.8, 6.2]]]) ```
it has already been answered, you can go to this link for your answer [python numpy access list of arrays without for loop](https://stackoverflow.com/questions/36530446/python-numpy-access-list-of-arrays-without-for-loop)
50,221,468
This question comes from the "Automate the Boring Stuff with Python" book. ``` atRegex1 = re.compile(r'\w{1,2}at') atRegex2 = re.compile(r'\w{1,2}?at') atRegex1.findall('The cat in the hat sat on the flat mat.') atRegex2.findall('The cat in the hat sat on the flat mat.') ``` I thought the question mark `?` should conduct a non-greedy match, so `\w{1,2}?` should only match 1 character. But for both of these functions, I get the same output: ['cat', 'hat', 'sat', 'flat', 'mat'] In the book, ``` nongreedyHaRegex = re.compile(r'(Ha){3,5}?') mo2 = nongreedyHaRegex.search('HaHaHaHaHa') mo2.group() 'HaHaHa' ``` Can anyone help me understand why there is a difference? Thanks!
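One way to see what the lazy `?` actually changes (a small sketch, not from the book): with the literal `at` required at the end, both patterns are forced to consume enough `\w` characters to reach an `at`, so `findall` returns the same matches either way; drop the trailing literal and the difference appears:

```python
import re

# With the trailing literal 'at', greedy and lazy repetition find the same spans
print(re.findall(r'\w{1,2}at', 'flat mat'))   # ['flat', 'mat']
print(re.findall(r'\w{1,2}?at', 'flat mat'))  # ['flat', 'mat']

# Without it, laziness matters: the lazy form stops at the minimum repetition count
print(re.findall(r'\w{1,2}', 'abcd'))   # ['ab', 'cd']
print(re.findall(r'\w{1,2}?', 'abcd'))  # ['a', 'b', 'c', 'd']
```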
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
There's nothing wrong (that is, ordinary assignment in P6 is designed to do as it has done) but at a guess you were hoping that making the structure on the two sides the same would result in `$a` getting `1`, `$b` getting `2` and `$c` getting `3`. For that, you want "binding assignment" (aka just "binding"), not ordinary assignment: ``` my ($a, $b, $c); :(($a, $b), $c) := ((1, 2), 3); ``` Note the colon before the list on the left, making it a signature literal, and the colon before the `=`, making it a binding operation.
If you want to have the result be `1, 2, 3`, you must `Slip` the list: ``` my ($a, $b, $c) = |(1, 2), 3; ``` This is a consequence of the single argument rule: <https://docs.raku.org/type/Signature#Single_Argument_Rule_Slurpy> This is also why this just works: ``` my ($a, $b, $c) = (1, 2, 3); ``` Even though `(1,2,3)` is a `List` with 3 elements, it will be auto-slipped because of the same single argument rule. You can of course also just remove the (superstitious) parentheses: ``` my ($a, $b, $c) = 1, 2, 3; ```
50,221,468
This question comes from the "Automate the Boring Stuff with Python" book. ``` atRegex1 = re.compile(r'\w{1,2}at') atRegex2 = re.compile(r'\w{1,2}?at') atRegex1.findall('The cat in the hat sat on the flat mat.') atRegex2.findall('The cat in the hat sat on the flat mat.') ``` I thought the question mark `?` should conduct a non-greedy match, so `\w{1,2}?` should only match 1 character. But for both of these functions, I get the same output: ['cat', 'hat', 'sat', 'flat', 'mat'] In the book, ``` nongreedyHaRegex = re.compile(r'(Ha){3,5}?') mo2 = nongreedyHaRegex.search('HaHaHaHaHa') mo2.group() 'HaHaHa' ``` Can anyone help me understand why there is a difference? Thanks!
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
If you want to have the result be `1, 2, 3`, you must `Slip` the list: ``` my ($a, $b, $c) = |(1, 2), 3; ``` This is a consequence of the single argument rule: <https://docs.raku.org/type/Signature#Single_Argument_Rule_Slurpy> This is also why this just works: ``` my ($a, $b, $c) = (1, 2, 3); ``` Even though `(1,2,3)` is a `List` with 3 elements, it will be auto-slipped because of the same single argument rule. You can of course also just remove the (superstitious) parentheses: ``` my ($a, $b, $c) = 1, 2, 3; ```
You are asking *"What's wrong here?"*, and I would say some variant of [the single argument rule](https://docs.raku.org/syntax/Single%20Argument%20Rule) is at work. Since parentheses are only used here for grouping, what's going on is this assignment ``` ($a, $b), $c = (1, 2), 3 ``` `(1, 2), 3` are behaving as a single argument, so they are slurpily assigned to the first element in your group, `$a, $b`. Thus they get it all, and poor old `$c` only gets `Any`. Look at it this way: ``` my ($a, $b, $c); ($a, ($b, $c)) = ((1, 2), 3, 'þ'); say $a, $c; # OUTPUT: «(1 2)þ␤» ``` You might want to look at [this code by Larry Wall](https://irclog.perlgeek.de/perl6/2018-05-08#i_16141283), which uses `»=«`, which does exactly what you are looking for. It's [not documented](https://github.com/perl6/doc/issues/2010), so you might want to wait a bit until it is.
50,221,468
This question comes from the "Automate the Boring Stuff with Python" book. ``` atRegex1 = re.compile(r'\w{1,2}at') atRegex2 = re.compile(r'\w{1,2}?at') atRegex1.findall('The cat in the hat sat on the flat mat.') atRegex2.findall('The cat in the hat sat on the flat mat.') ``` I thought the question mark `?` should conduct a non-greedy match, so `\w{1,2}?` should only match 1 character. But for both of these functions, I get the same output: ['cat', 'hat', 'sat', 'flat', 'mat'] In the book, ``` nongreedyHaRegex = re.compile(r'(Ha){3,5}?') mo2 = nongreedyHaRegex.search('HaHaHaHaHa') mo2.group() 'HaHaHa' ``` Can anyone help me understand why there is a difference? Thanks!
2018/05/07
[ "https://Stackoverflow.com/questions/50221468", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8370526/" ]
There's nothing wrong (that is, ordinary assignment in P6 is designed to do as it has done) but at a guess you were hoping that making the structure on the two sides the same would result in `$a` getting `1`, `$b` getting `2` and `$c` getting `3`. For that, you want "binding assignment" (aka just "binding"), not ordinary assignment: ``` my ($a, $b, $c); :(($a, $b), $c) := ((1, 2), 3); ``` Note the colon before the list on the left, making it a signature literal, and the colon before the `=`, making it a binding operation.
You are asking *"What's wrong here?"*, and I would say some variant of [the single argument rule](https://docs.raku.org/syntax/Single%20Argument%20Rule) is at work. Since parentheses are only used here for grouping, what's going on is this assignment ``` ($a, $b), $c = (1, 2), 3 ``` `(1, 2), 3` are behaving as a single argument, so they are slurpily assigned to the first element in your group, `$a, $b`. Thus they get it all, and poor old `$c` only gets `Any`. Look at it this way: ``` my ($a, $b, $c); ($a, ($b, $c)) = ((1, 2), 3, 'þ'); say $a, $c; # OUTPUT: «(1 2)þ␤» ``` You might want to look at [this code by Larry Wall](https://irclog.perlgeek.de/perl6/2018-05-08#i_16141283), which uses `»=«`, which does exactly what you are looking for. It's [not documented](https://github.com/perl6/doc/issues/2010), so you might want to wait a bit until it is.
66,779,282
I would like to print the rating results for different users in separate arrays. It can be solved by creating many arrays, but I don't want to do so, because I have a lot of users in my JSON file, so how can I do this programmatically? Python code ``` with open('/content/user_data.json') as f: rating = [] js = json.load(f) for a in js['Rating']: for rate in a['rating']: rating.append(rate['rating']) print(rating) ``` output ``` ['4', '2', '5', '1', '3', '5', '2', '5'] ``` my expected result ``` ['4', '2', '5'] ['1', '3'] ['5', '2', '5'] ``` JSON file ``` { "Rating" : [ { "user" : "john", "rating":[ { "placename" : "Kingstreet Café", "rating" : "4" }, { "placename" : "Royce Hotel", "rating" : "2" }, { "placename" : "The Cabinet", "rating" : "5" } ] }, { "user" : "emily", "rating":[ { "placename" : "abc", "rating" : "1" }, { "placename" : "def", "rating" : "3" } ] }, { "user" : "jack", "rating":[ { "placename" : "a", "rating" : "5" }, { "placename" : "b", "rating" : "2" }, { "placename" : "C", "rating" : "5" } ] } ] } ```
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
Don't only append all ratings to one list, but create a list for every user: ```py import json with open('a.json') as f: ratings = [] # to store the rating lists of all users js = json.load(f) for a in js['Rating']: rating = [] # to store ratings of a single user for rate in a['rating']: rating.append(rate['rating']) ratings.append(rating) print(ratings) ```
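The same per-user grouping can also be written as a nested list comprehension; the sketch below uses an inline subset of the question's JSON in place of `json.load`:

```python
# Inline stand-in for the parsed JSON file (subset of the question's data)
js = {
    "Rating": [
        {"user": "john", "rating": [
            {"placename": "Kingstreet Café", "rating": "4"},
            {"placename": "Royce Hotel", "rating": "2"},
            {"placename": "The Cabinet", "rating": "5"},
        ]},
        {"user": "emily", "rating": [
            {"placename": "abc", "rating": "1"},
            {"placename": "def", "rating": "3"},
        ]},
    ]
}

# One list of ratings per user
ratings = [[r["rating"] for r in user["rating"]] for user in js["Rating"]]
print(ratings)  # [['4', '2', '5'], ['1', '3']]
```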
A simple one-liner ``` all_ratings = [list(map(lambda x: x['rating'], r['rating'])) for r in js['Rating']] ``` Explanation ``` all_ratings = [ list( # Converts map to list map(lambda x: x['rating'], r['rating']) # Get attribute from list of dict ) for r in js['Rating'] # Iterate ratings ] ```
66,779,282
I would like to print the rating results for different users in separate arrays. It can be solved by creating many arrays, but I don't want to do so, because I have a lot of users in my JSON file, so how can I do this programmatically? Python code ``` with open('/content/user_data.json') as f: rating = [] js = json.load(f) for a in js['Rating']: for rate in a['rating']: rating.append(rate['rating']) print(rating) ``` output ``` ['4', '2', '5', '1', '3', '5', '2', '5'] ``` my expected result ``` ['4', '2', '5'] ['1', '3'] ['5', '2', '5'] ``` JSON file ``` { "Rating" : [ { "user" : "john", "rating":[ { "placename" : "Kingstreet Café", "rating" : "4" }, { "placename" : "Royce Hotel", "rating" : "2" }, { "placename" : "The Cabinet", "rating" : "5" } ] }, { "user" : "emily", "rating":[ { "placename" : "abc", "rating" : "1" }, { "placename" : "def", "rating" : "3" } ] }, { "user" : "jack", "rating":[ { "placename" : "a", "rating" : "5" }, { "placename" : "b", "rating" : "2" }, { "placename" : "C", "rating" : "5" } ] } ] } ```
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
Don't only append all ratings to one list, but create a list for every user: ```py import json with open('a.json') as f: ratings = [] # to store the rating lists of all users js = json.load(f) for a in js['Rating']: rating = [] # to store ratings of a single user for rate in a['rating']: rating.append(rate['rating']) ratings.append(rating) print(ratings) ```
You can just move the print into the outer for-loop and reset the array every time: ``` for user in js["Rating"]: temp_array = [] for rate in user["rating"]: temp_array.append(rate["rating"]) print(temp_array) ```
66,779,282
I would like to print the rating results for different users in separate arrays. It can be solved by creating many arrays, but I don't want to do so, because I have a lot of users in my JSON file, so how can I do this programmatically? Python code ``` with open('/content/user_data.json') as f: rating = [] js = json.load(f) for a in js['Rating']: for rate in a['rating']: rating.append(rate['rating']) print(rating) ``` output ``` ['4', '2', '5', '1', '3', '5', '2', '5'] ``` my expected result ``` ['4', '2', '5'] ['1', '3'] ['5', '2', '5'] ``` JSON file ``` { "Rating" : [ { "user" : "john", "rating":[ { "placename" : "Kingstreet Café", "rating" : "4" }, { "placename" : "Royce Hotel", "rating" : "2" }, { "placename" : "The Cabinet", "rating" : "5" } ] }, { "user" : "emily", "rating":[ { "placename" : "abc", "rating" : "1" }, { "placename" : "def", "rating" : "3" } ] }, { "user" : "jack", "rating":[ { "placename" : "a", "rating" : "5" }, { "placename" : "b", "rating" : "2" }, { "placename" : "C", "rating" : "5" } ] } ] } ```
2021/03/24
[ "https://Stackoverflow.com/questions/66779282", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10275606/" ]
A simple one-liner ``` all_ratings = [list(map(lambda x: x['rating'], r['rating'])) for r in js['Rating']] ``` Explanation ``` all_ratings = [ list( # Converts map to list map(lambda x: x['rating'], r['rating']) # Get attribute from list of dict ) for r in js['Rating'] # Iterate ratings ] ```
You can just move the print into the outer for-loop and reset the array every time: ``` for user in js["Rating"]: temp_array = [] for rate in user["rating"]: temp_array.append(rate["rating"]) print(temp_array) ```
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, I think so), but I'm stuck on the final part. This is what I have so far: ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): dict = {} for word in line: key = ''.join(sorted(word.lower())) if key in dict.keys(): dict[key].append(word.lower()) else: dict[key] = [] dict[key].append(word.lower()) if line == key: print(line) make_anagram_dict(line) ``` I think I need something which compares the key of each value to the keys of other values, and then prints if they match, but I can't get something to work. At the moment, the best I can do is print out all the keys and values in the file, but ideally, I would be able to print all the anagrams from the file. Output: I don't have a concrete specified output, but something along the lines of: [cat: act, tac] for each anagram. Again, apologies if this is a repetition, but any help would be greatly appreciated.
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm not sure about the output format. In my implementation, all anagrams are printed out in the end. ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = {} # avoid using 'dict' as variable name for word in line: word = word.lower() # call lower() only once key = ''.join(sorted(word)) if key in d: # no need to call keys() d[key].append(word) else: d[key] = [word] # you can initialize list with the initial value return d # just return the mapping to process it later if __name__ == '__main__': d = make_anagram_dict(line) for words in d.values(): if len(words) > 1: # several anagrams in this group print('Anagrams: {}'.format(', '.join(words))) ``` --- Also, consider using `defaultdict` - it's a dictionary, that creates values of a specified type for fresh keys. ``` from collections import defaultdict with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = defaultdict(list) # argument is the default constructor for value for word in line: word = word.lower() # call lower() only once key = ''.join(sorted(word)) d[key].append(word) # now d[key] is always list return d # just return the mapping to process it later if __name__ == '__main__': d = make_anagram_dict(line) for words in d.values(): if len(words) > 1: # several anagrams in this group print('Anagrams: {}'.format(', '.join(words))) ```
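A quick self-contained check of the sorted-letters key idea (the word list here is made up, not from `words.txt`):

```python
from collections import defaultdict

words = ['cat', 'act', 'dog', 'tac', 'god', 'pony']

groups = defaultdict(list)
for word in words:
    # words that are anagrams of each other share the same sorted-letter key
    groups[''.join(sorted(word))].append(word)

# keep only keys with more than one word, i.e. actual anagram groups
anagram_groups = [ws for ws in groups.values() if len(ws) > 1]
print(anagram_groups)  # [['cat', 'act', 'tac'], ['dog', 'god']]
```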
Your code is pretty much there, just needs some tweaks: ``` import re def make_anagram_dict(words): d = {} for word in words: word = word.lower() # call lower() only once key = ''.join(sorted(word)) # make the key if key in d: # check if it's in dictionary already if word not in d[key]: # avoid duplicates d[key].append(word) else: d[key] = [word] # initialize list with the initial value return d # return the entire dictionary if __name__ == '__main__': filename = 'words.txt' with open(filename) as file: # Use regex to extract words. You can adjust to include/exclude # characters, numbers, punctuation... # This returns a list of words words = re.findall(r"([a-zA-Z\-]+)", file.read()) # Now process them d = make_anagram_dict(words) # Now print them for words in d.values(): if len(words) > 1: # we found anagrams print('Anagram group: {}'.format(', '.join(words))) ```
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, I think so), but I'm stuck on the final part. This is what I have so far: ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): dict = {} for word in line: key = ''.join(sorted(word.lower())) if key in dict.keys(): dict[key].append(word.lower()) else: dict[key] = [] dict[key].append(word.lower()) if line == key: print(line) make_anagram_dict(line) ``` I think I need something which compares the key of each value to the keys of other values, and then prints if they match, but I can't get something to work. At the moment, the best I can do is print out all the keys and values in the file, but ideally, I would be able to print all the anagrams from the file. Output: I don't have a concrete specified output, but something along the lines of: [cat: act, tac] for each anagram. Again, apologies if this is a repetition, but any help would be greatly appreciated.
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm not sure about the output format. In my implementation, all anagrams are printed out in the end. ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = {} # avoid using 'dict' as variable name for word in line: word = word.lower() # call lower() only once key = ''.join(sorted(word)) if key in d: # no need to call keys() d[key].append(word) else: d[key] = [word] # you can initialize list with the initial value return d # just return the mapping to process it later if __name__ == '__main__': d = make_anagram_dict(line) for words in d.values(): if len(words) > 1: # several anagrams in this group print('Anagrams: {}'.format(', '.join(words))) ``` --- Also, consider using `defaultdict` - it's a dictionary, that creates values of a specified type for fresh keys. ``` from collections import defaultdict with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): d = defaultdict(list) # argument is the default constructor for value for word in line: word = word.lower() # call lower() only once key = ''.join(sorted(word)) d[key].append(word) # now d[key] is always list return d # just return the mapping to process it later if __name__ == '__main__': d = make_anagram_dict(line) for words in d.values(): if len(words) > 1: # several anagrams in this group print('Anagrams: {}'.format(', '.join(words))) ```
I'm going to make the assumption you're grouping words within a file which are anagrams of each other. If, on the other hand, you're being asked to find all the English-language anagrams for a list of words in a file, you will need a way of determining what is or isn't a word. This means you either need an actual "dictionary" as in a `set(<of all english words>)` or maybe a *very* sophisticated predicate method. Anyhow, here's a relatively straightforward solution which assumes your `words.txt` is small enough to be read into memory completely: ``` with open('words.txt', 'r') as infile: words = infile.read().split() anagram_dict = {word : list() for word in words} for k, v in anagram_dict.items(): k_anagrams = (othr for othr in words if (sorted(k) == sorted(othr)) and (k != othr)) anagram_dict[k].extend(k_anagrams) print(anagram_dict) ``` This isn't the most efficient way to do this, but hopefully it gets across the power of filtering. Arguably, the most important thing here is the `if (sorted(k) == sorted(othr)) and (k != othr)` filter in the `k_anagrams` definition. This is a filter which only allows identical letter-combinations, but weeds out exact matches.
54,623,084
I'm trying to create a function in python that will print out the anagrams of words in a text file using dictionaries. I've looked at what feels like hundreds of similar questions, so I apologise if this is a repetition, but I can't seem to find a solution that fits my issue. I understand what I need to do (at least, I think so), but I'm stuck on the final part. This is what I have so far: ``` with open('words.txt', 'r') as fp: line = fp.readlines() def make_anagram_dict(line): dict = {} for word in line: key = ''.join(sorted(word.lower())) if key in dict.keys(): dict[key].append(word.lower()) else: dict[key] = [] dict[key].append(word.lower()) if line == key: print(line) make_anagram_dict(line) ``` I think I need something which compares the key of each value to the keys of other values, and then prints if they match, but I can't get something to work. At the moment, the best I can do is print out all the keys and values in the file, but ideally, I would be able to print all the anagrams from the file. Output: I don't have a concrete specified output, but something along the lines of: [cat: act, tac] for each anagram. Again, apologies if this is a repetition, but any help would be greatly appreciated.
2019/02/11
[ "https://Stackoverflow.com/questions/54623084", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11041870/" ]
I'm going to make the assumption you're grouping words within a file which are anagrams of each other. If, on the other hand, you're being asked to find all the English-language anagrams for a list of words in a file, you will need a way of determining what is or isn't a word. This means you either need an actual "dictionary" as in a `set(<of all english words>)` or maybe a *very* sophisticated predicate method. Anyhow, here's a relatively straightforward solution which assumes your `words.txt` is small enough to be read into memory completely: ``` with open('words.txt', 'r') as infile: words = infile.read().split() anagram_dict = {word : list() for word in words} for k, v in anagram_dict.items(): k_anagrams = (othr for othr in words if (sorted(k) == sorted(othr)) and (k != othr)) anagram_dict[k].extend(k_anagrams) print(anagram_dict) ``` This isn't the most efficient way to do this, but hopefully it gets across the power of filtering. Arguably, the most important thing here is the `if (sorted(k) == sorted(othr)) and (k != othr)` filter in the `k_anagrams` definition. This is a filter which only allows identical letter-combinations, but weeds out exact matches.
Your code is pretty much there, just needs some tweaks: ``` import re def make_anagram_dict(words): d = {} for word in words: word = word.lower() # call lower() only once key = ''.join(sorted(word)) # make the key if key in d: # check if it's in dictionary already if word not in d[key]: # avoid duplicates d[key].append(word) else: d[key] = [word] # initialize list with the initial value return d # return the entire dictionary if __name__ == '__main__': filename = 'words.txt' with open(filename) as file: # Use regex to extract words. You can adjust to include/exclude # characters, numbers, punctuation... # This returns a list of words words = re.findall(r"([a-zA-Z\-]+)", file.read()) # Now process them d = make_anagram_dict(words) # Now print them for words in d.values(): if len(words) > 1: # we found anagrams print('Anagram group: {}'.format(', '.join(words))) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. Census Bureau website and store them in a CSV file. For the most part I've figured out how to do that, but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally. I would expect the data to be arranged vertically, meaning row 1 has the first item in the list, row 2 has the second item and so on. I have tried several approaches but the data always ends up as a horizontal representation. I am new to Python and obviously don't have a firm enough grasp on the language to figure this out. Any help would be greatly appreciated. I am parsing the website using BeautifulSoup4 and the requests library. Pulling all the 'a' tags from the website was easy enough, and getting the URLs from those 'a' tags into a list was pretty clear as well. But when I append the list to my CSV file with a writerow function, all the data ends up in one row as opposed to one separate row for each URL. ``` import requests import csv requests.get from bs4 import BeautifulSoup from pprint import pprint page = requests.get('https://www.census.gov/programs-surveys/popest.html') soup = BeautifulSoup(page.text, 'html.parser') ## Create Link to append web data to links = [] # Pull text from all instances of <a> tag within BodyText div AllLinks = soup.find_all('a') for link in AllLinks: links.append(link.get('href')) with open("htmlTable.csv", "w") as f: writer = csv.writer(f) writer.writerow(links) pprint(links) ```
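A minimal generic `csv` sketch (in-memory buffer, made-up URLs) showing why everything lands in one row, and the one-URL-per-row alternative: `writerow` writes its whole argument as a single row, while `writerows` with one-element lists writes each URL on its own row:

```python
import csv
import io

# Made-up URLs standing in for the scraped hrefs
links = ['https://a.example', 'https://b.example', 'https://c.example']

# writerow treats the whole list as ONE row -> horizontal layout
row_buf = io.StringIO()
csv.writer(row_buf).writerow(links)
print(row_buf.getvalue())  # one line, URLs separated by commas

# writerows with one-element rows writes one URL per row -> vertical layout
col_buf = io.StringIO()
csv.writer(col_buf).writerows([link] for link in links)
print(col_buf.getvalue())  # one URL per line
```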
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
One brute-force solution is with the `apply` function: ``` apply(df1[ ,2:ncol(df1)], 2, function(x){sum(x != 0)}) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. Census Bureau website and store them in a CSV file. For the most part I've figured out how to do that, but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally. I would expect the data to be arranged vertically, meaning row 1 has the first item in the list, row 2 has the second item and so on. I have tried several approaches but the data always ends up as a horizontal representation. I am new to Python and obviously don't have a firm enough grasp on the language to figure this out. Any help would be greatly appreciated. I am parsing the website using BeautifulSoup4 and the requests library. Pulling all the 'a' tags from the website was easy enough, and getting the URLs from those 'a' tags into a list was pretty clear as well. But when I append the list to my CSV file with a writerow function, all the data ends up in one row as opposed to one separate row for each URL. ``` import requests import csv requests.get from bs4 import BeautifulSoup from pprint import pprint page = requests.get('https://www.census.gov/programs-surveys/popest.html') soup = BeautifulSoup(page.text, 'html.parser') ## Create Link to append web data to links = [] # Pull text from all instances of <a> tag within BodyText div AllLinks = soup.find_all('a') for link in AllLinks: links.append(link.get('href')) with open("htmlTable.csv", "w") as f: writer = csv.writer(f) writer.writerow(links) pprint(links) ```
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
One brute-force solution is with the `apply` function: ``` apply(df1[ ,2:ncol(df1)], 2, function(x){sum(x != 0)}) ```
Assuming negative occurrences are impossible, summing the signs also works: ``` colSums(sign(df1[names(df1) != "ID"])) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. Census Bureau website and store them in a CSV file. For the most part I've figured out how to do that, but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally. I would expect the data to be arranged vertically, meaning row 1 has the first item in the list, row 2 has the second item and so on. I have tried several approaches but the data always ends up as a horizontal representation. I am new to Python and obviously don't have a firm enough grasp on the language to figure this out. Any help would be greatly appreciated. I am parsing the website using BeautifulSoup4 and the requests library. Pulling all the 'a' tags from the website was easy enough, and getting the URLs from those 'a' tags into a list was pretty clear as well. But when I append the list to my CSV file with a writerow function, all the data ends up in one row as opposed to one separate row for each URL. ``` import requests import csv requests.get from bs4 import BeautifulSoup from pprint import pprint page = requests.get('https://www.census.gov/programs-surveys/popest.html') soup = BeautifulSoup(page.text, 'html.parser') ## Create Link to append web data to links = [] # Pull text from all instances of <a> tag within BodyText div AllLinks = soup.find_all('a') for link in AllLinks: links.append(link.get('href')) with open("htmlTable.csv", "w") as f: writer = csv.writer(f) writer.writerow(links) pprint(links) ```
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
A `dplyr` variant could be: ``` df %>% summarise_at(-1, ~ sum(. != 0)) var1 var2 var3 1 4 4 5 ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. Census Bureau website and store them in a CSV file. For the most part I've figured out how to do that, but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally. I would expect the data to be arranged vertically, meaning row 1 has the first item in the list, row 2 has the second item and so on. I have tried several approaches but the data always ends up as a horizontal representation. I am new to Python and obviously don't have a firm enough grasp on the language to figure this out. Any help would be greatly appreciated. I am parsing the website using BeautifulSoup4 and the requests library. Pulling all the 'a' tags from the website was easy enough, and getting the URLs from those 'a' tags into a list was pretty clear as well. But when I append the list to my CSV file with a writerow function, all the data ends up in one row as opposed to one separate row for each URL. ``` import requests import csv requests.get from bs4 import BeautifulSoup from pprint import pprint page = requests.get('https://www.census.gov/programs-surveys/popest.html') soup = BeautifulSoup(page.text, 'html.parser') ## Create Link to append web data to links = [] # Pull text from all instances of <a> tag within BodyText div AllLinks = soup.find_all('a') for link in AllLinks: links.append(link.get('href')) with open("htmlTable.csv", "w") as f: writer = csv.writer(f) writer.writerow(links) pprint(links) ```
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
You can try: ``` colSums(df1[,2:4]>0) ``` Output: ``` var1 var2 var3 4 4 5 ```
Assuming negative occurrences are impossible, summing the signs also works: ``` colSums(sign(df1[names(df1) != "ID"])) ```
57,876,971
I have a project for one of my college classes that requires me to pull all URLs from a page on the U.S. Census Bureau website and store them in a CSV file. For the most part I've figured out how to do that, but for some reason when the data gets appended to the CSV file, all the entries are being inserted horizontally. I would expect the data to be arranged vertically, meaning row 1 has the first item in the list, row 2 has the second item and so on. I have tried several approaches but the data always ends up as a horizontal representation. I am new to Python and obviously don't have a firm enough grasp on the language to figure this out. Any help would be greatly appreciated. I am parsing the website using BeautifulSoup4 and the requests library. Pulling all the 'a' tags from the website was easy enough, and getting the URLs from those 'a' tags into a list was pretty clear as well. But when I append the list to my CSV file with a writerow function, all the data ends up in one row as opposed to one separate row for each URL. ``` import requests import csv requests.get from bs4 import BeautifulSoup from pprint import pprint page = requests.get('https://www.census.gov/programs-surveys/popest.html') soup = BeautifulSoup(page.text, 'html.parser') ## Create Link to append web data to links = [] # Pull text from all instances of <a> tag within BodyText div AllLinks = soup.find_all('a') for link in AllLinks: links.append(link.get('href')) with open("htmlTable.csv", "w") as f: writer = csv.writer(f) writer.writerow(links) pprint(links) ```
2019/09/10
[ "https://Stackoverflow.com/questions/57876971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10912876/" ]
A `dplyr` variant could be: ``` df %>% summarise_at(-1, ~ sum(. != 0)) var1 var2 var3 1 4 4 5 ```
48,402,276
I am taking a Udemy course. The problem I am working on is to take two strings and determine if they are 'one edit away' from each other. That means you can make a single change -- change one letter, add one letter, delete one letter -- from one string and have it become identical to the other. Examples: ``` s1a = "abcde" s1b = "abfde" s2a = "abcde" s2b = "abde" s3a = "xyz" s3b = "xyaz" ``` * `s1a` changes the `'c'` to an `'f'`. * `s2a` deletes `'c'`. * `s3a` adds `'a'`. The instructor's solution (and test suite): ``` def is_one_away(s1, s2): if len(s1) - len(s2) >= 2 or len(s2) - len(s1) >= 2: return False elif len(s1) == len(s2): return is_one_away_same_length(s1, s2) elif len(s1) > len(s2): return is_one_away_diff_lengths(s1, s2) else: return is_one_away_diff_lengths(s2, s1) def is_one_away_same_length(s1, s2): count_diff = 0 for i in range(len(s1)): if not s1[i] == s2[i]: count_diff += 1 if count_diff > 1: return False return True # Assumption: len(s1) == len(s2) + 1 def is_one_away_diff_lengths(s1, s2): i = 0 count_diff = 0 while i < len(s2): if s1[i + count_diff] == s2[i]: i += 1 else: count_diff += 1 if count_diff > 1: return False return True # NOTE: The following input values will be used for testing your solution. print(is_one_away("abcde", "abcd")) # should return True print(is_one_away("abde", "abcde")) # should return True print(is_one_away("a", "a")) # should return True print(is_one_away("abcdef", "abqdef")) # should return True print(is_one_away("abcdef", "abccef")) # should return True print(is_one_away("abcdef", "abcde")) # should return True print(is_one_away("aaa", "abc")) # should return False print(is_one_away("abcde", "abc")) # should return False print(is_one_away("abc", "abcde")) # should return False print(is_one_away("abc", "bcc")) # should return False ``` When I saw the problem I decided to tackle it using `set()`. 
I found this very informative: [Opposite of set.intersection in python?](https://stackoverflow.com/questions/29947844/opposite-of-set-intersection-in-python) This is my attempted solution: ``` def is_one_away(s1, s2): if len(set(s1).symmetric_difference(s2)) <= 1: return True if len(set(s1).symmetric_difference(s2)) == 2: if len(set(s1).difference(s2)) == len(set(s2).difference(s1)): return True return False return False ``` When I run my solution online (you can test within the course itself) I am failing on the last test suite item: ``` False != True : Input 1: abc Input 2: bcc Expected Result: False Actual Result: True ``` I have tried and tried but I can't get the last test item to work (at least not without breaking a bunch of other stuff). There is no guarantee that I can solve the full test suite with a `set()` based solution, but since I am one item away I was really wanting to see if I could get it done.
2018/01/23
[ "https://Stackoverflow.com/questions/48402276", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7535419/" ]
This fails to pass this test, because you only look at *unique characters*: ``` >>> s1 = 'abc' >>> s2 = 'bcc' >>> set(s1).symmetric_difference(s2) {'a'} ``` That's a set of length 1, but there are **two** characters changed. By converting to a set, you only see that there is at least one `'c'` character in the `s2` input, not that there are now two. Another way your approach would fail is if the *order* of the characters changed. `'abc'` is two changes away from `'cba'`, but your approach would fail to detect those changes too. You can't solve this problem using sets, because sets remove two important pieces of information: how many times a character appears, and in what order the characters are listed.
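To make that loss of information concrete, a small standalone illustration using the failing pair from the question (variable names are just for demonstration):

```python
# Multiplicity is discarded: set("bcc") == {'b', 'c'}, the repeated 'c' vanishes
s1, s2 = "abc", "bcc"
diff = set(s1).symmetric_difference(s2)
print(diff)  # {'a'} -- a single element, even though two positions differ

# Order is discarded too: two edits apart, yet the sets are identical
assert set("abc") == set("cba")
```

Any predicate built only on these sets therefore cannot tell a one-edit pair apart from a two-edit pair like this one.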
Here's a solution using differences found by list comprehension. ``` def one_away(s1, s2): diff1 = [el for el in s1 if el not in s2] diff2 = [el for el in s2 if el not in s1] return len(diff1) < 2 and len(diff2) < 2 ``` Unlike a set-based solution, the list comprehensions keep repeated characters on each side, so a pair like `"aaa"` vs `"abc"` is handled correctly. Note, though, that the `in` membership test still ignores counts and positions within the other string, so the position-dependent pair `"abc"` vs `"bcc"` still comes back `True`.
Here is a solution to one-away in which a set is used only to find the unique character in the two given strings; the rest is done with lists used as stacks. Characters are popped from both stacks and compared. When the popped character matches the unique character and the strings differ in length, we pop the first string again. For example, with "pales" and "pale": when "s" is popped we pop string1 again, because "s" is the unique character, so "pales" vs "pale" is True. If more than one popped pair fails to match, we return False. ```py def one_away(string1: str, string2: str) -> bool: string1, string2 = [ char for char in string1.replace(" ","").lower() ], [ char for char in string2.replace(" ","").lower() ] if len(string2) > len(string1): string1, string2 = string2, string1 len_s1, len_s2, unmatch_count = len(string1), len(string2), 0 if len_s1 == len_s2 or len_s1 - len_s2 == 1: unique_char = list(set(string1) - set(string2)) or ["None"] if unique_char[0] == "None" and len_s1 - len_s2 == 1: return True # this is for "abcc" "abc" while len(string1) > 0: pop_1, pop_2 = string1.pop(), string2.pop() if pop_1 == unique_char[0] and len_s1 - len_s2 == 1: pop_1 = string1.pop() if pop_1 != pop_2: unmatch_count += 1 if unmatch_count > 1: return False return True return False ``` For example test: ```py strings = [("abc","ab"), ("pale", "bale"), ("ples", "pale"), ("palee", "pale"), ("pales", "pale"), ("pale", "ple"), ("abc", "cab"), ("pale", "bake")] for string in strings: print(f"{string} one_away: {one_away(string[0], string[1])}") ``` ``` ('abc', 'ab') one_away: True ('pale', 'bale') one_away: True ('ples', 'pale') one_away: False ('palee', 'pale') one_away: True ('pales', 'pale') one_away: True ('pale', 'ple') one_away: True ('abc', 'cab') one_away: False ('pale', 'bake') one_away: False ```