| qid | question | date | metadata | response_j | response_k |
|---|---|---|---|---|---|
49,835,559
|
After installing Ubuntu as WSL (Windows Subsystem for Linux) I've run:
```
root@teclast:~# python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 ...
```
and tried to access this web server from my Windows machine at `http://0.0.0.0:8000` or `http://192.168.1.178:8000`, but with no success; the web server is only available at `http://127.0.0.1:8000` or `http://localhost:8000`, which means I can't connect to it from another PC on my network. Is it possible to access WSL from outside?
|
2018/04/14
|
[
"https://Stackoverflow.com/questions/49835559",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1441863/"
] |
Similar to @countach's answer:
If using Ubuntu, type `ip address` in the WSL terminal. Look for the entry that says `#: eth0 ...`, where `#` is some small number. There is an IP address there; use that.
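If you prefer to grab that address programmatically, here is a small sketch (not part of the original answer; it assumes the interface is called `eth0` and that the `ip` tool is available inside WSL):
```py
# Run inside WSL: prints the IPv4 address of eth0, i.e. the address to reach
# the WSL server from Windows (or from other machines once a port proxy is set up).
import re
import subprocess

out = subprocess.run(['ip', 'address', 'show', 'eth0'],
                     capture_output=True, text=True).stdout
match = re.search(r'inet (\d+\.\d+\.\d+\.\d+)', out)
print(match.group(1) if match else 'no IPv4 address found on eth0')
```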
|
**Avoid the firewall rules used in some answers on the web: I saw some of them create a kind of allow-all rule (allow any traffic from any IP and any port), which can cause security problems.**
Simply use this single line from [this answer](https://stackoverflow.com/a/70566842/11190169), which worked perfectly for me (it just creates a port proxy):
```
netsh interface portproxy add v4tov4 listenport=<windows_port> listenaddress=0.0.0.0 connectport=<wsl_port> connectaddress=<WSL_IP>
```
where `<WSL_IP>` is the IP of WSL, which you can get by running `ifconfig` inside WSL. `<windows_port>` is the port Windows will listen on, and `<wsl_port>` is the port the server is running on inside WSL.
You can list current port proxies using:
`netsh interface portproxy show all`
|
39,878,262
|
I have a very large log file, which contains service restart messages. After I initiate a service restart with an external command, I need to tail this log file from the last occurrence of the reboot message and check the following messages to confirm a correct restart procedure. I'm analysing the messages in Python, so I only need to find the last occurrence and follow the file from there; I then check the output line by line and simply close the connection when I have read everything I need.
```
.... # lots of previous data
[timestamp] previous message
[timestamp] Rebooting... # from this point I need to track messages
[timestamp] doing thing
[timestamp] doing other thing
[timestamp] doing final thing # final point, reboot successful
[timestamp] service activity message #
```
How can I perform such tailing?
```
tail -f <from last Rebooting... message>
```
|
2016/10/05
|
[
"https://Stackoverflow.com/questions/39878262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1777415/"
] |
give a generous buffer value, reverse, extract, reverse
```
$ tail -1000 file | tac | awk '1,/Rebooting/' | tac
```
or, replace `awk` script with `!p; /Rebooting/{p=1}`
|
Perhaps something like:
```
tail -fn +$(awk '/Rebooting/ { line = NR } END { print(line) }' log) log
```
which uses `awk` to find the line number of the last occurrence of the pattern and then tails starting at that line.
This still scans the entire file, though.
If you're really doing it from python, you can probably do better by searching the file in reverse directly in python.
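For the Python route mentioned above, here is a rough sketch (an illustration, not code from the original answer): it finds the last `Rebooting` line already in the file and then keeps following the file, `tail -f` style. The file name `service.log` and the 0.5 s poll interval are assumptions.
```py
import time

def follow_from_last_reboot(path, marker='Rebooting'):
    f = open(path)
    lines = f.readlines()                    # everything written so far
    # index of the last line containing the marker (raises ValueError if it never occurs)
    start = max(i for i, line in enumerate(lines) if marker in line)
    for line in lines[start:]:
        yield line
    while True:                              # then follow new lines as they are appended
        line = f.readline()
        if line:
            yield line
        else:
            time.sleep(0.5)

for line in follow_from_last_reboot('service.log'):
    print(line, end='')
    if 'doing final thing' in line:          # the "final point" from the question's example
        break
```
Like the `awk` approaches, this still reads the whole file once; scanning the file in reverse would avoid that.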
|
18,520,203
|
I just installed the 'eve demo' but I can't get it to work.
The error is:
>
> eve.io.base.ConnectionException: Error initializing the driver. Make sure the database server is running. Driver exception: OperationFailure(u"command SON([('authenticate', 1), ('user', u'user'), ('nonce', u'cec66353cb35b6f5'), ('key', u'14817e596653376514b76248055e1d4f')]) failed: auth fails",)
>
>
>
I have MongoDB running, and I have installed [Eve](http://python-eve.org) and Python 2.7.
I created the [run.py](http://pastebin.com/HVJykT43) and the [settings.py](http://pastebin.com/jNukQAvW "settings.py") required.
What is not working? Am I missing something?
|
2013/08/29
|
[
"https://Stackoverflow.com/questions/18520203",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2615737/"
] |
It looks like the MongoDB user/pw combo you configured in your `settings.py` has not been set at the db level. From the mongo shell type `use <dbname>`, then `db.system.users.find()` to get a list of authorized users for `<dbname>`. It is probably empty; add the user as needed (see the [MongoDB docs](http://docs.mongodb.org/manual/reference/method/db.addUser/)).
|
1. Get your MongoDB dbname, username and password from settings.py, e.g.:
```
MONGO_USERNAME = 'username'
MONGO_PASSWORD = 'password'
MONGO_DBNAME = 'apitest'
```
2. Log in to the mongod server with mongo, and make sure your username is in the dbname's system.users collection. You can query the authenticated users in that database with the following operation:
```
use apitest
db.system.users.find()
```
3. If the username doesn't exist in system.users, you can use the db.addUser command to add a user to the system.users collection, e.g.:
```
use apitest
db.addUser('username', 'password')
```
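Not part of the original answer, but a quick way to confirm the credentials outside Eve is to authenticate with pymongo directly (assuming pymongo is installed and MongoDB listens on the default port 27017):
```py
# Hypothetical check: connect with the same credentials Eve is configured with.
# An authentication failure here means the user still has to be added as shown above.
from pymongo import MongoClient

client = MongoClient('mongodb://username:password@localhost:27017/apitest')
print(client.apitest.command('ping'))   # {'ok': 1.0} once auth works
```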
|
74,618,712
|
I want to read the data in an Excel file on the F drive. I am using Python in Visual Studio Code to try to achieve this, however I am getting an error as seen in the pictures below. I installed pandas but I still get an error. How can I fix this issue?
[Coding Error](https://i.stack.imgur.com/XFyH4.png)
[Installed Pandas Library](https://i.stack.imgur.com/p1aiN.png)
I tried closing and reopening Visual Studio Code. I tried uninstalling and reinstalling pandas.
[Python on Computer](https://i.stack.imgur.com/d3xEp.png)
|
2022/11/29
|
[
"https://Stackoverflow.com/questions/74618712",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20636206/"
] |
You should open the terminal in VS Code and run `pip freeze` (and `pip3 freeze`). Check whether pandas appears in the results; it probably won't. That is usually because you have multiple installations of Python on your system. You can do any one of the following:
1. Get rid of all but one Python installation.
2. Install pandas on the Python instance that VS Code uses.
3. Configure VS Code to use the same installation of Python that is referenced by your command prompt (choose the correct Python interpreter from the VS Code Command Palette); the snippet below helps verify which interpreter is active.
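A minimal check (an addition, not part of the original answer) you can run with the same VS Code "Run" button that produced the error:
```py
# Prints which interpreter is executing the script and whether pandas is importable
# there; that is the installation that needs `pip install pandas`.
import importlib.util
import sys

print('Interpreter:', sys.executable)
print('pandas available here:', importlib.util.find_spec('pandas') is not None)
```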
|
To read an Excel file with Python, you need to install the pandas library. To install pandas, open the command line or terminal and type:
```
pip install pandas
```
Once pandas is installed, you can read an Excel file like this:
```
import pandas as pd
df = pd.read_excel('file_name.xlsx')
print(df)
```
You should also make sure that the file path is correct and that you have the correct permissions to access the file.
If you are still having issues, try using the absolute file path instead of a relative file path.
|
23,516,150
|
I created a thread for a keylogger that logs in parallel to another thread that produces some sounds (I want to catch reaction times).
Unfortunately, the thread never finishes although I invoke killKey() and "invoked killkey()" is printed.
I always get thread.isActive() = True from this thread.
```
class KeyHandler(threading.Thread):
    hm = pyHook.HookManager()

    def __init__(self):
        threading.Thread.__init__(self)

    def OnKeyboardCharEvent(self, event):
        print 'Key:', event.Key
        if event.Key == 'E':
            ...
        return True

    def killKey(self):
        KeyHandler.hm.UnhookKeyboard()
        ctypes.windll.user32.PostQuitMessage(0)
        print "invoked killkey()"

    def run(self):
        print "keyHandlerstartetrunning"
        KeyHandler.hm.KeyDown = self.OnKeyboardCharEvent
        KeyHandler.hm.HookKeyboard()
        #print "keyboardhooked"
        pythoncom.PumpMessages()
```
To be more precise,
ctypes.windll.user32.PostQuitMessage(0) does nothing.
I would favor an external timeout that invokes killKey(), or respectively ctypes.windll.user32.PostQuitMessage(0), in this thread.
|
2014/05/07
|
[
"https://Stackoverflow.com/questions/23516150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2720827/"
] |
PostQuitMessage has to be posted from the same thread. To do so you need to introduce a global variable `STOP_KEY_HANDLER`. If you want to quit then just set global `STOP_KEY_HANDLER = True` from any thread you want and it will quit with the next keystroke. Your key handler has to run on the main thread.
```
STOP_KEY_HANDLER = False


def main():
    pass  # here do all you want
    # bla bla
    global STOP_KEY_HANDLER
    STOP_KEY_HANDLER = True  # This will kill KeyHandler


class KeyHandler:
    hm = pyHook.HookManager()

    def OnKeyboardCharEvent(self, event):
        if STOP_KEY_HANDLER:
            self.killKey()
        print 'Key:', event.Key
        if event.Key == 'E':
            pass
        return True

    def killKey(self):
        global STOP_KEY_HANDLER
        if not STOP_KEY_HANDLER:
            STOP_KEY_HANDLER = True
            return None
        KeyHandler.hm.UnhookKeyboard()
        ctypes.windll.user32.PostQuitMessage(0)
        print "invoked killkey()"

    def _timeout(self):
        if self.timeout:
            time.sleep(self.timeout)
            self.killKey()

    def run(self, timeout=False):
        print "keyHandlerstartetrunning"
        self.timeout = timeout
        threading.Thread(target=self._timeout).start()
        KeyHandler.hm.KeyDown = self.OnKeyboardCharEvent
        KeyHandler.hm.HookKeyboard()
        #print "keyboardhooked"
        pythoncom.PumpMessages()


k = KeyHandler()
threading.Thread(target=main).start()
k.run(timeout=100)  # You can specify the timeout in seconds or you can kill it directly by setting STOP_KEY_HANDLER to True.
```
|
I guess pbackup's solution is fine. Just to conclude, I found a solution by simply sending a key myself instead of waiting for user input. It's probably not the best, but it was the fastest and runs in parallel in my timing thread with the other timing routines.
```
STOP_KEY_HANDLER = True
# send key to kill handler - not pretty but works
for hwnd in get_hwnds_for_pid(GUIWINDOW_to_send_key_to.pid):
    win32gui.PostMessage(hwnd, win32con.WM_KEYDOWN, win32con.VK_F5, 0)
# sleep to make sure processing is done
time.sleep(0.1)
# kill window
finished()
```
|
60,961,248
|
```py
bigger_list_of_names = ['Jim', 'Bob', 'Fred', 'Cam', 'Reagan','Alejandro','Dee','Rana','Denisha','Nicolasa','Annett','Catrina','Louvenia','Emmanuel','Dina','Jasmine','Shirl','Jene','Leona','Lise','Dodie','Kanesha','Carmela','Yuette',]
name_list = ['Jim', 'Bob', 'Fred', 'Cam']
search_people = re.compile(r'\b({})\b'.format(r'|'.join(name_list)), re.IGNORECASE)
print(search_people)
for names in bigger_list_of_names:
    found_them = search_people.search(names, re.IGNORECASE | re.X)
    print(names)
    if found_them:
        print('I found this person: {}'.format(found_them.group()))
    else:
        print('Did not find them')
```
The issue I am having is the regex does not find the names at all and keeps hitting the `else:`
I have tried `re.search`, `re.findall`, `re.find`, `re.match`, `re.fullmatch`, etc. They all return `None`. The only way for it to find anything is if I use `re.finditer` but that would not allow me to use `.group()`.
The output of the `re.compile` is `re.compile('\\b(Jim|Bob|Fred|Cam)\\b', re.IGNORECASE)`
I tested it on <https://regex101.com/> ([](https://i.stack.imgur.com/lL3mh.png)) and it looks like it's working there, but not in Python.
Here is my console output:
[](https://i.stack.imgur.com/kxdcc.png)
Am I missing anything?
|
2020/03/31
|
[
"https://Stackoverflow.com/questions/60961248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13176034/"
] |
The second argument to a compiled regular expression is the position in the string to start searching, not flags to use with the regex (the third, also optional argument, is the ending position to search). See the docs for [Regular expression objects](https://docs.python.org/3/library/re.html?highlight=re#re.Pattern.search) for details.
If you want to specify a case-insensitive search, pass `re.IGNORECASE` to `re.compile`. For this regex, `re.X` isn't needed.
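A minimal sketch of the fix described above, reusing the names from the question (the test string is just an example):
```py
import re

name_list = ['Jim', 'Bob', 'Fred', 'Cam']
# flags belong to re.compile, not to pattern.search
search_people = re.compile(r'\b({})\b'.format('|'.join(name_list)), re.IGNORECASE)

match = search_people.search('jim halpert')               # no pos/endpos arguments needed
print(match.group() if match else 'Did not find them')    # -> jim
```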
|
What you are trying to do does not require a regex search. You can achieve the same as follows.
```py
search_result = []
targets = set(name_list)
for name in set(bigger_list_of_names):
    if name in targets:
        search_result.append(name)
        print(f'Found name: {name}')
    else:
        print(f'Did not find name: {name}')
print(search_result)
```
**Shorter version** using list-comprehension
```py
search_result = [name for name in set(bigger_list_of_names) if name in targets]
```
|
60,961,248
|
```py
bigger_list_of_names = ['Jim', 'Bob', 'Fred', 'Cam', 'Reagan','Alejandro','Dee','Rana','Denisha','Nicolasa','Annett','Catrina','Louvenia','Emmanuel','Dina','Jasmine','Shirl','Jene','Leona','Lise','Dodie','Kanesha','Carmela','Yuette',]
name_list = ['Jim', 'Bob', 'Fred', 'Cam']
search_people = re.compile(r'\b({})\b'.format(r'|'.join(name_list)), re.IGNORECASE)
print(search_people)
for names in bigger_list_of_names:
    found_them = search_people.search(names, re.IGNORECASE | re.X)
    print(names)
    if found_them:
        print('I found this person: {}'.format(found_them.group()))
    else:
        print('Did not find them')
```
The issue I am having is the regex does not find the names at all and keeps hitting the `else:`
I have tried `re.search`, `re.findall`, `re.find`, `re.match`, `re.fullmatch`, etc. They all return `None`. The only way for it to find anything is if I use `re.finditer` but that would not allow me to use `.group()`.
The output of the `re.compile` is `re.compile('\\b(Jim|Bob|Fred|Cam)\\b', re.IGNORECASE)`
I tested it on <https://regex101.com/> ([](https://i.stack.imgur.com/lL3mh.png)) and it looks like it's working there, but not in Python.
Here is my console output:
[](https://i.stack.imgur.com/kxdcc.png)
Am I missing anything?
|
2020/03/31
|
[
"https://Stackoverflow.com/questions/60961248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13176034/"
] |
I'm a little late but I had the same issue before. It looks like you are using PyCharm; if you check the autocomplete hint (if you don't have it turned off) it would say:
`pattern.search(self, string, pos, endpos)`
Instead of adding flags in the `.search()` call, you need to add the flags in the `re.compile()` call, since `re.compile()` actually accepts flags.
What it looks like in pycharm auto complete:
`re.compile(pattern, flags)`
So it would look a bit like this:
```py
search_people = re.compile(r'\b({})\b'.format(r'|'.join(name_list)), re.IGNORECASE | re.X)
for names in bigger_list_of_names:
    found_them = search_people.search(names)
    print(names)
    if found_them:
        print('I found this person: {}'.format(found_them.group()))
    else:
        print('Did not find them')
```
|
What you are trying to do does not require a regex search. You can achieve the same as follows.
```py
search_result = []
targets = set(name_list)
for name in set(bigger_list_of_names):
    if name in targets:
        search_result.append(name)
        print(f'Found name: {name}')
    else:
        print(f'Did not find name: {name}')
print(search_result)
```
**Shorter version** using list-comprehension
```py
search_result = [name for name in set(bigger_list_of_names) if name in targets]
```
|
41,965,187
|
To test my tensorflow installation I am using the mnist example provided in tensorflow repository, but when I execute the convolutional.py script I have this output:
```
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.2405
pciBusID 0000:03:00.0
Total memory: 5.93GiB
Free memory: 5.83GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x29020c0
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
Traceback (most recent call last):
File "convolutional.py", line 339, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "convolutional.py", line 284, in main
with tf.Session() as sess:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1187, in __init__
super(Session, self).__init__(target, graph, config=config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 552, in __init__
self._session = tf_session.TF_NewDeprecatedSession(opts, status)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
```
My first idea was that maybe I had problems with the CUDA installation, but I tested it using one of the examples provided by NVIDIA. In this case I used this example:
>
> NVIDIA\_CUDA-8.0\_Samples/6\_Advanced/c++11\_cuda
>
>
>
And the output is this:
```
GPU Device 0: "GeForce GTX 980 Ti" with compute capability 5.2
Read 3223503 byte corpus from ./warandpeace.txt
counted 107310 instances of 'x', 'y', 'z', or 'w' in "./warandpeace.txt"
```
So my conclusion is that CUDA is installed correctly, but I do not have any idea what is happening here. If someone can help me I would appreciate it.
For more information, this is my GPU configuration:
```
Tue Jan 31 19:42:10 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57 Driver Version: 367.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 560 Ti Off | 0000:01:00.0 N/A | N/A |
| 25% 45C P0 N/A / N/A | 463MiB / 958MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 980 Ti Off | 0000:03:00.0 Off | N/A |
| 0% 31C P8 13W / 280W | 1MiB / 6077MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
```
**EDIT:**
Is it normal that the two NVIDIA cards have the same physical id?
```
sudo lshw -C "display"
*-display
description: VGA compatible controller
product: GM200 [GeForce GTX 980 Ti]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:03:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:50 memory:f9000000-f9ffffff memory:b0000000-bfffffff memory:c0000000-c1ffffff ioport:d000(size=128) memory:fa000000-fa07ffff
*-display
description: VGA compatible controller
product: GF114 [GeForce GTX 560 Ti]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:45 memory:f6000000-f7ffffff memory:c8000000-cfffffff memory:d0000000-d3ffffff ioport:e000(size=128) memory:f8000000-f807ffff
```
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41965187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3094625/"
] |
The important points in the output you have shown are these:
```
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.2405
pciBusID 0000:03:00.0
Total memory: 5.93GiB
Free memory: 5.83GiB
```
i.e. the compute device you want is enumerated as device 0 and
```
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
```
i.e. the compute device generating the error is enumerated as device 1. Device 1 is your display GPU, which can't be used for computation in Tensorflow. If you either mark that device as compute prohibited with `nvidia-smi`, or use the `CUDA_VISIBLE_DEVICES` environment variable to only make your compute device visible to CUDA, the error should probably disappear.
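One way to apply that suggestion from Python (an illustration, not code from the original answer) is to hide the display GPU before TensorFlow initializes CUDA:
```py
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'   # expose only device 0 (the GTX 980 Ti) to CUDA

import tensorflow as tf                     # TF 1.x API, as in the question
with tf.Session() as sess:                  # should no longer hit CUDA_ERROR_INVALID_DEVICE
    pass
```
Equivalently, running the script as `CUDA_VISIBLE_DEVICES=0 python convolutional.py` sets the variable from the shell.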
|
I encountered a similar error when I attempted to run the `classify_image.py` script that is part of [the image recognition tutorial](https://www.tensorflow.org/tutorials/image_recognition). Since I already had a running Python session (elpy) in which I had run some TensorFlow code, the GPUs were allocated there and thus were not available for the script I was attempting to run from shell.
Quitting the existing Python session resolved the error.
|
39,775,489
|
I'm trying to push a new git repo upstream using the gitpython module. Below are the steps that I'm doing; I get an error 128.
```
# Initialize a local git repo
init_repo = Repo.init(gitlocalrepodir+"%s" %(gitinitrepo))
# Add a file to this new local git repo
init_repo.index.add([filename])
# Initial commit
init_repo.index.commit('Initial Commit - %s' %(timestr))
# Create remote
init_repo.create_remote('origin', giturl+gitinitrepo+'.git')
# Push upstream (Origin)
init_repo.remotes.origin.push()
```
While executing the push(), gitpython throws an exception:
```
'git push --porcelain origin' returned with exit code 128
```
Access to github is via SSH.
Do you see anything wrong that I'm doing?
|
2016/09/29
|
[
"https://Stackoverflow.com/questions/39775489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2170456/"
] |
I tracked it down with a similar approach:
```
class ProgressPrinter(git.RemoteProgress):
    def line_dropped(self, line):
        print("line dropped : " + str(line))
```
Which you can then call in your code:
```
init_repo.remotes.origin.push(progress=ProgressPrinter())
```
|
You need to capture the output from the git command.
Given this Progress class:
```
class Progress(git.RemoteProgress):
    def __init__( self ):
        super().__init__()
        self.__all_dropped_lines = []

    def update( self, op_code, cur_count, max_count=None, message='' ):
        pass

    def line_dropped( self, line ):
        if line.startswith( 'POST git-upload-pack' ):
            return
        self.__all_dropped_lines.append( line )

    def allErrorLines( self ):
        return self.error_lines() + self.__all_dropped_lines

    def allDroppedLines( self ):
        return self.__all_dropped_lines
```
You can write code like this:
```
progress = Progress()
try:
    for info in remote.push( progress=progress ):
        # call info_callback with the push commands info
        info_callback( info )
    for line in progress.allDroppedLines():
        log.info( line )
except GitCommandError:
    for line in progress.allErrorLines():
        log.error( line )
    raise
```
When you run with this you will still get the 128 error, but you will also have the output of git to explain the problem.
|
19,546,631
|
I'm trying to extract values from numerous text files in python. The numbers I require are in the scientific notation form. My result text files are as follows
```
ADDITIONAL DATA
Tip Rotation (degrees)
Node , UR[x] , UR[y] , UR[z]
21 , 1.0744 , 1.2389 , -4.3271
22 , -1.0744 , -1.2389 , -4.3271
53 , 0.9670 , 1.0307 , -3.8990
54 , -0.0000 , -0.0000 , -3.5232
55 , -0.9670 , -1.0307 , -3.8990
Mean rotation variation along blade
Region , Rotation (degrees)
Partition line 0, 7.499739E-36
Partition line 1, -3.430092E-01
Partition line 2, -1.019287E+00
Partition line 3, -1.499808E+00
Partition line 4, -1.817651E+00
Partition line 5, -2.136372E+00
Partition line 6, -2.448321E+00
Partition line 7, -2.674414E+00
Partition line 8, -2.956737E+00
Partition line 9, -3.457806E+00
Partition line 10, -3.995106E+00
```
I've used regexps successfully in the past, but this one doesn't seem to want to pick up the numbers. The number of node lines changes between results files, so I can't search by line number. My Python script is as follows.
```
import re
from pylab import *
from scipy import *
import matplotlib
from numpy import *
import numpy as np
from matplotlib import pyplot as plt
import csv
########################################
minTheta = -90
maxTheta = 0
thetaIncrements = 10
numberOfPartitions = 10
########################################
numberOfThetas = ((maxTheta - minTheta)/thetaIncrements)+1
print 'Number of thetas = '+str(numberOfThetas)
thetas = linspace(minTheta,maxTheta,numberOfThetas)
print 'Thetas = '+str(thetas)
part = linspace(1,numberOfPartitions,numberOfPartitions)
print 'Parts = '+str(part)
meanRotations = np.zeros((numberOfPartitions+1,numberOfThetas))
#print meanRotations
theta = minTheta
n=0
m=0
while theta <= maxTheta:
    fileName = str(theta)+'.0.txt'
    #print fileName
    regexp = re.compile(r'Partition line 0, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[0,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 1, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[1,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 2, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[2,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 3, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[3,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 4, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[4,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 5, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[5,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 6, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[6,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 7, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[7,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 8, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[8,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 9, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[9,m]=(float((match.group(1))))
    regexp = re.compile(r'Partition line 10, .*?([-+0-9.E]+)')
    with open(fileName) as f:
        for line in f:
            match = regexp.match(line)
            if match:
                print (float((match.group(1))))
                meanRotations[10,m]=(float((match.group(1))))
    m=m+1
    theta = theta+thetaIncrements
print 'Mean rotations on partition lines = '
print meanRotations
```
Any help would be much appreciated!!
|
2013/10/23
|
[
"https://Stackoverflow.com/questions/19546631",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2739143/"
] |
Is this file format a standard one? If so, you can get all your float values with another technique.
Here is the code:
```py
str = """ ADDITIONAL DATA
Tip Rotation (degrees)
Node , UR[x] , UR[y] , UR[z]
21 , 1.0744 , 1.2389 , -4.3271
22 , -1.0744 , -1.2389 , -4.3271
53 , 0.9670 , 1.0307 , -3.8990
54 , -0.0000 , -0.0000 , -3.5232
55 , -0.9670 , -1.0307 , -3.8990
Mean rotation variation along blade
Region , Rotation (degrees)
Partition line 0, 7.499739E-36
Partition line 1, -3.430092E-01
Partition line 2, -1.019287E+00
Partition line 3, -1.499808E+00
Partition line 4, -1.817651E+00
Partition line 5, -2.136372E+00
Partition line 6, -2.448321E+00
Partition line 7, -2.674414E+00
Partition line 8, -2.956737E+00
Partition line 9, -3.457806E+00
Partition line 10, -3.995106E+00
"""
arr = str.split()
for index in enumerate(arr):
    print index # just to see the list
start = 59 # from this position the numbers begin
step = 4 # current number is each fourth
ar = []
for j in range(start, len(arr), step):
    ar.append(arr[j])
floatAr = []
# or you can use this expression instead of the following loop
# floatAr = [float(x) for x in ar]
for n in range(len(ar)):
    floatAr.append(float(ar[n]))
print floatAr
```
At the end you will receive a list called **floatAr** with all your float values. You can add a *try-except* block for better robustness.
Or, alternatively, if you want to use regex, here is the code:
```
str = """ ADDITIONAL DATA
Tip Rotation (degrees)
Node , UR[x] , UR[y] , UR[z]
21 , 1.0744 , 1.2389 , -4.3271
22 , -1.0744 , -1.2389 , -4.3271
53 , 0.9670 , 1.0307 , -3.8990
54 , -0.0000 , -0.0000 , -3.5232
55 , -0.9670 , -1.0307 , -3.8990
Mean rotation variation along blade
Region , Rotation (degrees)
Partition line 0, 7.499739E-36
Partition line 1, -3.430092E-01
Partition line 2, -1.019287E+00
Partition line 3, -1.499808E+00
Partition line 4, -1.817651E+00
Partition line 5, -2.136372E+00
Partition line 6, -2.448321E+00
Partition line 7, -2.674414E+00
Partition line 8, -2.956737E+00
Partition line 9, -3.457806E+00
Partition line 10, -3.995106E+00"""
regex = '\s-?[1-9]+[0-9]*.?[0-9]*E-?\+?[0-9]+\s?'
import re
values = re.findall(regex, str)
floatAr = [float(x) for x in values]
print floatAr
```
By the way, here is a good on-line regex checker for python [pythex](https://pythex.org/)
|
I don't get the need for regex, to be honest. Something like this should do what you need:
```
with open(fileName) as f:
    for line in f:
        if line.startswith('Partition line'):
            number = float(line.split(',')[1])
            print number # or do whatever you want with it
        # read other file contents with different if clauses
```
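Building on both answers (an illustrative sketch, not code from either answer), a single pattern can capture the partition index and the value at once, which replaces the eleven near-identical blocks in the question's script; the file name is a placeholder following the question's naming scheme:
```py
import re

# group 1: partition index, group 2: value in scientific notation
pattern = re.compile(r'Partition line (\d+)\s*,\s*([-+0-9.E]+)')

rotations = {}
with open('-90.0.txt') as f:
    for line in f:
        m = pattern.search(line)        # search(), so leading whitespace does not matter
        if m:
            rotations[int(m.group(1))] = float(m.group(2))

print(rotations)    # e.g. {0: 7.499739e-36, 1: -0.3430092, ..., 10: -3.995106}
```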
|
58,033,457
|
Hey, I'm trying to create a PostgreSQL DB container. I'm running it using the command:
```
docker-compose up
```
on the following compose file:
```
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_USERNAME: admin
      POSTGRES_PASSWORD: admin
      POSTGRES_DB: default_db
    ports:
      - 54320:5432
```
However, when I try to connect to it using the following Python code:
```
import sqlalchemy
engine = sqlalchemy.create_engine('postgres://admin:admin@localhost:54320/default_db')
engine.connect()
```
I get the following error:
```
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "admin"
```
Does anyone know why this happens?
|
2019/09/20
|
[
"https://Stackoverflow.com/questions/58033457",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5273907/"
] |
Using `POSTGRES_USER` instead of `POSTGRES_USERNAME` solved this for me.
|
You should use `POSTGRES_USER` instead of `POSTGRES_USERNAME`.
Here is my postgres docker-compose configuration for your reference.
```
version: '3'
services:
  postgres:
    image: 'mdillon/postgis:latest'
    environment:
      - TZ=Asia/Shanghai
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - '15432:5432'
```
|
46,708,708
|
I'm looking at the best way to compare strings in a Python function compiled using numba jit (nopython mode, Python 3).
The use case is the following:
```
import numba as nb
@nb.jit(nopython = True, cache = True)
def foo(a, t = 'default'):
    if t == 'awesome':
        return(a**2)
    elif t == 'default':
        return(a**3)
    else:
        ...
```
However, the following error is returned:
```
Invalid usage of == with parameters (str, const('awesome'))
```
I tried using bytes but couldn't succeed.
Thanks !
---
Maurice pointed out the question [Python: can numba work with arrays of strings in nopython mode?](https://stackoverflow.com/questions/32056337/python-can-numba-work-with-arrays-of-strings-in-nopython-mode) but I'm looking at native python and not the numpy subset supported in numba.
|
2017/10/12
|
[
"https://Stackoverflow.com/questions/46708708",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3640767/"
] |
For newer numba versions (0.41.0 and later)
===========================================
Numba (since version 0.41.0) supports [`str` in nopython mode](http://numba.pydata.org/numba-doc/0.42.0/reference/pysupported.html#str) and the code as written in the question will "just work". However, for your example, comparing the strings is **much** slower than your operation, so if you want to use strings in numba functions make sure the overhead is worth it.
```
import numba as nb

@nb.njit
def foo_string(a, t):
    if t == 'awesome':
        return(a**2)
    elif t == 'default':
        return(a**3)
    else:
        return a

@nb.njit
def foo_int(a, t):
    if t == 1:
        return(a**2)
    elif t == 0:
        return(a**3)
    else:
        return a

assert foo_string(100, 'default') == foo_int(100, 0)

%timeit foo_string(100, 'default')
# 2.82 µs ± 45.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit foo_int(100, 0)
# 213 ns ± 10.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
In your case the code is more than 10 times slower using strings.
Since your function doesn't do much it could be better and faster to do the string comparison in Python instead of numba:
```
def foo_string2(a, t):
    if t == 'awesome':
        sec = 1
    elif t == 'default':
        sec = 0
    else:
        sec = -1
    return foo_int(a, sec)

assert foo_string2(100, 'default') == foo_string(100, 'default')

%timeit foo_string2(100, 'default')
# 323 ns ± 10.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
This is still a bit slower than the pure integer version but it's almost 10 times faster than using the string in the numba function.
But if you do a lot of numerical work in the numba function the string comparison overhead won't matter. But simply putting `numba.njit` on a function, especially if it doesn't do many array operations or number crunching, won't make it automatically faster!
For older numba versions (before 0.41.0):
=========================================
Numba doesn't support strings in `nopython` mode.
From the [documentation](http://numba.pydata.org/numba-doc/0.40.0/reference/pysupported.html#built-in-types):
>
> 2.6.2. Built-in types
> ---------------------
>
>
> ### 2.6.2.1. int, bool [...]
>
>
> ### 2.6.2.2. float, complex [...]
>
>
> ### 2.6.2.3. tuple [...]
>
>
> ### 2.6.2.4. list [...]
>
>
> ### 2.6.2.5. set [...]
>
>
> ### 2.6.2.7. bytes, bytearray, memoryview
>
>
> The `bytearray` type and, on Python 3, the `bytes` type support indexing, iteration and retrieving the `len()`.
>
>
> [...]
>
>
>
So strings aren't supported at all and bytes don't support equality checks.
However you can pass in `bytes` and iterate over them. That makes it possible to write your own comparison function:
```
import numba as nb

@nb.njit
def bytes_equal(a, b):
    if len(a) != len(b):
        return False
    for char1, char2 in zip(a, b):
        if char1 != char2:
            return False
    return True
```
Unfortunately the next problem is that numba cannot "lower" bytes, so you cannot hardcode the bytes in the function directly. But bytes are basically just integers, and the `bytes_equal` function works for all types that numba supports, that have a length and can be iterated over. So you could simply store them as lists:
```
import numba as nb

@nb.njit
def foo(a, t):
    if bytes_equal(t, [97, 119, 101, 115, 111, 109, 101]):
        return a**2
    elif bytes_equal(t, [100, 101, 102, 97, 117, 108, 116]):
        return a**3
    else:
        return a
```
or as global arrays (thanks @chrisb - see comments):
```
import numba as nb
import numpy as np

AWESOME = np.frombuffer(b'awesome', dtype='uint8')
DEFAULT = np.frombuffer(b'default', dtype='uint8')

@nb.njit
def foo(a, t):
    if bytes_equal(t, AWESOME):
        return a**2
    elif bytes_equal(t, DEFAULT):
        return a**3
    else:
        return a
```
Both will work correctly:
```
>>> foo(10, b'default')
1000
>>> foo(10, b'awesome')
100
>>> foo(10, b'awe')
10
```
However, you cannot specify a bytes array as default, so you need to explicitly provide the `t` variable. Also it feels hacky to do it that way.
My opinion: Just do the `if t == ...` checks in a normal function and call specialized numba functions inside the `if`s. String comparisons are really fast in Python, just wrap the math/array-intensive stuff in a numba function:
```
import numba as nb

@nb.njit
def awesome_func(a):
    return a**2

@nb.njit
def default_func(a):
    return a**3

@nb.njit
def other_func(a):
    return a

def foo(a, t='default'):
    if t == 'awesome':
        return awesome_func(a)
    elif t == 'default':
        return default_func(a)
    else:
        return other_func(a)
```
But make sure you actually need numba for the functions. Sometimes normal Python/NumPy will be fast enough. Just profile the numba solution and a Python/NumPy solution and see if numba makes it significantly faster. :)
|
I'd suggest accepting @MSeifert's answer, but as another option for these types of problems, consider using an `enum`.
In Python, strings are often used as a sort of enum, and `numba` has builtin support for enums, so they can be used directly.
```
import enum

class FooOptions(enum.Enum):
    AWESOME = 1
    DEFAULT = 2

import numba

@numba.njit
def foo(a, t=FooOptions.DEFAULT):
    if t == FooOptions.AWESOME:
        return a**2
    elif t == FooOptions.DEFAULT:
        return a**3
    else:
        return a

foo(10, FooOptions.AWESOME)
Out[5]: 100
```
|
33,009,295
|
Got kinda surprised with:
```
$ node -p 'process.argv' $SHELL '$SHELL' \t '\t' '\\t'
[ 'node', '/bin/bash', '$SHELL', 't', '\\t', '\\\\t' ]
$ python -c 'import sys; print sys.argv' $SHELL '$SHELL' \t '\t' '\\t'
['-c', '/bin/bash', '$SHELL', 't', '\\t', '\\\\t']
```
Expected the same behavior as with:
```
$ echo $SHELL '$SHELL' \t '\t' '\\t'
/bin/bash $SHELL t \t \\t
```
Which is how I need the stuff to be passed in.
Why the extra escape with `'\t'`, `'\\t'` in process argv? Why handled differently than `'$SHELL'`? Where's this actually coming from? Why different from the `echo` behavior?
At first I thought this was some *extra* processing on the [minimist](https://github.com/substack/minimist) side, but then I got the same with both bare Node.js and Python. I might be missing something obvious here.
|
2015/10/08
|
[
"https://Stackoverflow.com/questions/33009295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681785/"
] |
Use `$'...'` form to pass escape sequences like `\t`, `\n`, `\r`, `\0` etc in BASH:
```
python -c 'import sys; print sys.argv' $SHELL '$SHELL' \t $'\t' $'\\t'
['-c', '/bin/bash', '$SHELL', 't', '\t', '\\t']
```
As per `man bash`:
>
> Words of the form `$'string'` are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard. Backslash escape sequences, if present, are decoded as follows:
>
>
>
```
\a alert (bell)
\b backspace
\e
\E an escape character
\f form feed
\n new line
\r carriage return
\t horizontal tab
\v vertical tab
\\ backslash
\' single quote
\" double quote
\nnn the eight-bit character whose value is the octal value nnn (one to three digits)
\xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits)
\uHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHH (one to four hex digits)
\UHHHHHHHH the Unicode (ISO/IEC 10646) character whose value is the hexadecimal value HHHHHHHH (one to eight hex digits)
\cx a control-x character
```
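As a quick illustration (not part of the original answer), a tiny script shows what actually arrives in `sys.argv`:
```py
# save as show_args.py and run:  python show_args.py $'\t' '\t'
# expected: the first argument is a single tab character, the second is the
# two characters backslash and t
import sys

for arg in sys.argv[1:]:
    print(repr(arg), len(arg))
```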
|
In both python and node.js, there is a difference between the way `print` works with scalar strings and the way it works with collections.
Strings are printed simply as a sequence of characters. The resulting output is generally what the user expects to see, but it cannot be used as the representation of the string in the language. But when a list/array is printed out, what you get is a valid list/array literal, which can be used in a program.
For example, in python:
```
>>> print("x")
x
>>> print(["x"])
['x']
```
When printing the string, you just see the characters. But when printing the list containing the string, python adds quote characters, so that the output is a valid list literal. Similarly, it would add backslashes, if necessary:
```
>>> print("\\")
\
>>> print(["\\"])
['\\']
```
node.js works in exactly the same way:
```
$ node -p '"\\"'
\
$ node -p '["\\"]'
[ '\\' ]
```
When you print the string containing a single backslash, you just get a single backslash. But when you print a list/array containing a string consisting of a single backslash, you get a quoted string in which the backslash is escaped with a backslash, allowing it to be used as a literal in a program.
As with the printing of strings in node and python, the standard `echo` shell utility just prints the actual characters in the string. In a standard shell, there is no mechanism similar to node and python printing of arrays. Bash, however, does provide a mechanism for printing out the value of a variable in a format which could be used as part of a bash program:
```
$ quote=\"
# $quote is a single character:
$ echo "${#quote}"
1
# $quote prints out as a single quote, as you would expect
$ echo "$quote"
"
# If you needed a representation, use the 'declare' builtin:
$ declare -p quote
declare -- quote="\""
# You can also use the "%q" printf format (a bash extension)
$ printf "%q\n" "$quote"
\"
```
(References: bash manual on [`declare`](http://www.gnu.org/software/bash/manual/bash.html#index-declare) and [`printf`](http://www.gnu.org/software/bash/manual/bash.html#index-printf). Or type `help declare` and `help printf` in a bash session.)
---
That's not the full story, though. It is also important to understand how the shell interprets what you type. In other words, when you write
```
some_utility \" "\"" '\"'
```
What does `some_utility` actually see in the argv array?
In most contexts in a standard shell (including bash), C-style escape sequences like `\t` are not interpreted as such. (The standard shell utility `printf` does interpret these sequences when they appear in a format string, and some other standard utilities also interpret the sequences, but the shell itself does not.) The handling of backslash by a standard shell depends on the context (a worked example of the `some_utility` call follows the list):
* Unquoted strings: the backslash quotes the following character, whatever it is (unless it is a newline, in which case both the backslash and the newline are removed from the input).
* Double-quoted strings: backslash can be used to escape the characters `$`, `\`, `"`, and the backtick; also, a backslash followed by a newline is removed from the input, as in an unquoted string. In bash, if history expansion is enabled (as it is by default in *interactive* shells), backslash can also be used to avoid history expansion of `!`, but the backslash is retained in the final string.
* Single-quoted strings: backslash is treated as a normal character. (As a result, there is no way to include a single quote in a single-quoted string.)
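To make the earlier `some_utility` example concrete, here is a hedged illustration (assuming bash and substituting a small Python script for `some_utility`):
```py
# save as argv_demo.py and run:  python argv_demo.py \" "\"" '\"'
# expected output: ['argv_demo.py', '"', '"', '\\"']
# i.e. the unquoted \" and the double-quoted "\"" both arrive as a single
# double-quote character, while the single-quoted '\"' keeps its backslash.
import sys

print(sys.argv)
```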
Bash adds two more quoting mechanisms:
* C-style quoting, `$'...'`. If a single-quoted string is preceded by a dollar sign, then C-style escape sequences inside the string *are* interpreted in roughly the same way a C compiler would. This includes the standard whitespace characters such as newline (`\n`), octal, hexadecimal and unicode escapes (`\010`, `\x0a`, `\u000A`, `\U0000000A`), plus a few non-C sequences including "control" characters (`\cJ`) and the ESC character `\e` or `\E` (the same as `\x1b`). Backslashes can also be used to escape `\`, `'` and `"`. (Note that this is a different list from the list of backslashable characters in double-quoted strings; here, a backslash before a dollar sign or a backtic is *not* special, while a backslash before a single quote is special; moreover, the backslash-newline sequence is not interpreted.)
* Locale-specific Translation: `$"..."`. If a double-quoted string is preceded by a dollar sign, backslashes (and variable expansions and command substitutions) are interpreted as with a normal double-quoted strings, and then the string is looked up in a message catalog determined by the current locale.
(References: [Posix standard](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_02), [Bash manual](http://www.gnu.org/software/bash/manual/bash.html#Quoting).)
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
[itertools.groupby()](https://docs.python.org/3/library/itertools.html#itertools.groupby) is your solution.
```
newlst = [k for k, g in itertools.groupby(lst)]
```
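For completeness (not part of the original answer), the one-liner applied to the list from the question:
```py
import itertools

lst = [1, 2, 2, 4, 4, 4, 4, 1, 3, 3, 3, 5, 5, 5, 5, 5]
newlst = [k for k, g in itertools.groupby(lst)]
print(newlst)   # [1, 2, 4, 1, 3, 5]
```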
---
If you wish to group and limit the group size by the item's value, meaning 8 4's will be [4,4] and 9 3's will be [3,3,3], here are 2 options that do it:
```
import itertools
def special_groupby(iterable):
    last_element = 0
    count = 0
    state = False

    def key_func(x):
        nonlocal last_element
        nonlocal count
        nonlocal state
        # start a new group when the value changes or the current group already holds x items
        if last_element != x or count >= x:
            last_element = x
            count = 1
            state = not state
        else:
            count += 1
        return state

    return [next(g) for k, g in itertools.groupby(iterable, key=key_func)]
special_groupby(lst)
```
OR
```
def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)
newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst)))
```
Choose whichever you deem appropriate. Both methods are for numbers > 0.
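As a quick check (an addition, not in the original answer), the first option applied to the second list from the question, assuming `special_groupby` is defined as above:
```py
print(special_groupby([4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]))   # -> [4, 2, 3, 3]
```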
|
You'd probably want something like this.
```
lst = [1, 1, 2, 2, 2, 2, 3, 3, 4, 1, 2]
prev_value = None
for number in lst[:]: # the : means we're slicing it, making a copy in other words
    if number == prev_value:
        lst.remove(number)
    else:
        prev_value = number
```
So, we're going through the list, and if it's the same as the previous number, we remove it from the list, otherwise, we update the previous number.
There may be a more succinct way, but this is the way that looked most apparent to me.
HTH.
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
```
list1 = ['a', 'a', 'a', 'b', 'b' , 'a', 'f', 'c', 'a','a']
temp_list = []
for item in list1:
    if len(temp_list) == 0:
        temp_list.append(item)
    elif len(temp_list) > 0:
        if temp_list[-1] != item:
            temp_list.append(item)
print(temp_list)
```
1. Fetch each item from the main list (list1).
2. If the 'temp_list' is empty, add that item.
3. If not, check whether the last item in the 'temp_list' is the same as the item we fetched from 'list1'.
4. If the items are different, append the item to 'temp_list'.
|
You'd probably want something like this.
```
lst = [1, 1, 2, 2, 2, 2, 3, 3, 4, 1, 2]
prev_value = None
for number in lst[:]: # the : means we're slicing it, making a copy in other words
    if number == prev_value:
        lst.remove(number)
    else:
        prev_value = number
```
So, we're going through the list, and if it's the same as the previous number, we remove it from the list, otherwise, we update the previous number.
There may be a more succinct way, but this is the way that looked most apparent to me.
HTH.
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
```
list1 = ['a', 'a', 'a', 'b', 'b' , 'a', 'f', 'c', 'a','a']
temp_list = []
for item in list1:
    if len(temp_list) == 0:
        temp_list.append(item)
    elif len(temp_list) > 0:
        if temp_list[-1] != item:
            temp_list.append(item)
print(temp_list)
```
1. Fetch each item from the main list (list1).
2. If the 'temp_list' is empty, add that item.
3. If not, check whether the last item in the 'temp_list' is the same as the item we fetched from 'list1'.
4. If the items are different, append the item to 'temp_list'.
|
```
newlist = []
prev = lst[0]
newlist.append(prev)
for each in lst[1:]:  # skip the 1st element, lst[0]
    if each != prev:
        newlist.append(each)
        prev = each
```
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
[itertools.groupby()](https://docs.python.org/3/library/itertools.html#itertools.groupby) is your solution.
```
newlst = [k for k, g in itertools.groupby(lst)]
```
---
If you wish to group and limit the group size by the item's value, meaning 8 4's will be [4,4] and 9 3's will be [3,3,3], here are 2 options that do it:
```
import itertools
def special_groupby(iterable):
    last_element = 0
    count = 0
    state = False

    def key_func(x):
        nonlocal last_element
        nonlocal count
        nonlocal state
        # start a new group when the value changes or the current group already holds x items
        if last_element != x or count >= x:
            last_element = x
            count = 1
            state = not state
        else:
            count += 1
        return state

    return [next(g) for k, g in itertools.groupby(iterable, key=key_func)]
special_groupby(lst)
```
OR
```
def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)
newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst)))
```
Choose whichever you deem appropriate. Both methods are for numbers > 0.
|
```
newlist = []
prev = lst[0]
newlist.append(prev)
for each in lst[1:]:  # skip the 1st element, lst[0]
    if each != prev:
        newlist.append(each)
        prev = each
```
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
```
list1 = ['a', 'a', 'a', 'b', 'b' , 'a', 'f', 'c', 'a','a']
temp_list = []
for item in list1:
    if len(temp_list) == 0:
        temp_list.append(item)
    elif len(temp_list) > 0:
        if temp_list[-1] != item:
            temp_list.append(item)
print(temp_list)
```
1. Fetch each item from the main list (list1).
2. If the 'temp_list' is empty, add that item.
3. If not, check whether the last item in the 'temp_list' is the same as the item we fetched from 'list1'.
4. If the items are different, append the item to 'temp_list'.
|
```
st = ['']
[st.append(a) for a in [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5] if a != st[-1]]
print(st[1:])
```
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
If you want to use the `itertools` method @MaxU suggested, a possible code implementation is:
```
import itertools as it
lst=[1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
unique_lst = [i[0] for i in it.groupby(lst)]
print(unique_lst)
```
|
Check whether the next element is not equal to the current item; if so, append it.
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
new_item = lst[0]
new_list = [lst[0]]
for l in lst:
    if new_item != l:
        new_list.append(l)
        new_item = l
print new_list
print lst
```
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
```
list1 = ['a', 'a', 'a', 'b', 'b' , 'a', 'f', 'c', 'a','a']
temp_list = []
for item in list1:
    if len(temp_list) == 0:
        temp_list.append(item)
    elif len(temp_list) > 0:
        if temp_list[-1] != item:
            temp_list.append(item)
print(temp_list)
```
1. Fetch each item from the main list (list1).
2. If the 'temp_list' is empty, add that item.
3. If not, check whether the last item in the 'temp_list' is the same as the item we fetched from 'list1'.
4. If the items are different, append the item to 'temp_list'.
|
Check whether the next element is not equal to the current item; if so, append it.
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
new_item = lst[0]
new_list = [lst[0]]
for l in lst:
    if new_item != l:
        new_list.append(l)
        new_item = l
print new_list
print lst
```
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
If you want to use the `itertools` method @MaxU suggested, a possible code implementation is:
```
import itertools as it
lst=[1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
unique_lst = [i[0] for i in it.groupby(lst)]
print(unique_lst)
```
|
You'd probably want something like this.
```
lst = [1, 1, 2, 2, 2, 2, 3, 3, 4, 1, 2]
prev_value = None
for number in lst[:]: # the : means we're slicing it, making a copy in other words
if number == prev_value:
lst.remove(number)
else:
prev_value = number
```
So, we're going through the list, and if it's the same as the previous number, we remove it from the list, otherwise, we update the previous number.
There may be a more succinct way, but this is the way that looked most apparent to me.
HTH.
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
[itertools.groupby()](https://docs.python.org/3/library/itertools.html#itertools.groupby) is your solution.
```
import itertools

newlst = [k for k, g in itertools.groupby(lst)]
```
---
If you wish to group and limit the group size by the item's value (meaning 8 4's become [4,4] and 9 3's become [3,3,3]), here are 2 options that do it:
```
import itertools
def special_groupby(iterable):
last_element = 0
count = 0
state = False
def key_func(x):
nonlocal last_element
nonlocal count
nonlocal state
        if last_element != x or count >= x:
last_element = x
count = 1
state = not state
else:
count += 1
return state
return [next(g) for k, g in itertools.groupby(iterable, key=key_func)]
special_groupby(lst)
```
OR
```
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst)))
```
Choose whichever you deem appropriate. Both methods are for numbers > 0.
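As a quick sanity check on the two example lists from the question, the first helper gives (a sketch of the expected output, given the `count >= x` comparison in `key_func` above):
```
print(special_groupby([1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]))  # [1, 2, 4, 1, 3, 5]
print(special_groupby([4,4,4,4,2,2,3,3,3,3,3,3]))          # [4, 2, 3, 3]
```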
|
Check whether each element differs from the last item appended. If so, append it.
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
new_item = lst[0]
new_list = [lst[0]]
for l in lst:
if new_item != l:
new_list.append(l)
new_item = l
print new_list
print lst
```
|
39,237,350
|
How do I remove consecutive duplicates from a list like this in python?
```
lst = [1,2,2,4,4,4,4,1,3,3,3,5,5,5,5,5]
```
Having a unique list or set wouldn't solve the problem as there are some repeated values like 1,...,1 in the previous list.
I want the result to be like this:
```
newlst = [1,2,4,1,3,5]
```
Would you also please consider the case when I have a list like this
`[4, 4, 4, 4, 2, 2, 3, 3, 3, 3, 3, 3]`
and I want the result to be `[4,2,3,3]`
rather than `[4,2,3]` .
|
2016/08/30
|
[
"https://Stackoverflow.com/questions/39237350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5758484/"
] |
[itertools.groupby()](https://docs.python.org/3/library/itertools.html#itertools.groupby) is your solution.
```
import itertools

newlst = [k for k, g in itertools.groupby(lst)]
```
---
If you wish to group and limit the group size by the item's value (meaning 8 4's become [4,4] and 9 3's become [3,3,3]), here are 2 options that do it:
```
import itertools
def special_groupby(iterable):
last_element = 0
count = 0
state = False
def key_func(x):
nonlocal last_element
nonlocal count
nonlocal state
        if last_element != x or count >= x:
last_element = x
count = 1
state = not state
else:
count += 1
return state
return [next(g) for k, g in itertools.groupby(iterable, key=key_func)]
special_groupby(lst)
```
OR
```
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return itertools.zip_longest(*args, fillvalue=fillvalue)
newlst = list(itertools.chain.from_iterable(next(zip(*grouper(g, k))) for k, g in itertools.groupby(lst)))
```
Choose whichever you deem appropriate. Both methods are for numbers > 0.
|
```
list1 = ['a', 'a', 'a', 'b', 'b' , 'a', 'f', 'c', 'a','a']
temp_list = []
for item in list1:
if len(temp_list) == 0:
temp_list.append(item)
elif len(temp_list) > 0:
if temp_list[-1] != item:
temp_list.append(item)
print(temp_list)
```
1. Fetch each item from the main list (list1).
2. If 'temp\_list' is empty, add that item.
3. If not, check whether the last item in the temp\_list is the same as the item we fetched from 'list1'.
4. If the items are different, append to temp\_list.
|
44,859,860
|
I want to implement the following function in python:
[](https://i.stack.imgur.com/MJfQu.png)
I will write the code using 2-loops:
```
for i in range(5):
for j in range(5):
sum += f(i, j)
```
But the issue is that I have 20 such sigmas, so I would have to write 20 nested for loops, which makes the code unreadable. In my case, all the i and j variables take the same range (0 to 4). Is there some better way of coding this?
|
2017/07/01
|
[
"https://Stackoverflow.com/questions/44859860",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7737948/"
] |
You can use [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product) to get cartesian product (of indexes for your cases):
```
>>> import itertools
>>> for i, j, k in itertools.product(range(1, 3), repeat=3):
... print(i, j, k)
...
1 1 1
1 1 2
1 2 1
1 2 2
2 1 1
2 1 2
2 2 1
2 2 2
```
---
```
import itertools
total = 0
for indexes in itertools.product(range(5), repeat=20):
total += f(*indexes)
```
* You should use `range(1,6)` instead of `range(5)` to mean `1` to `5`. (unless you meant indexes)
* Do not use `sum` as a variable name, it shadows builtin function [`sum`](https://docs.python.org/3/library/functions.html#sum).
|
Create arrays by using NumPy.
```
import numpy as np

# Build the full (i, j) grid so every combination is summed, not just the pairs where i == j.
i, j = np.meshgrid(np.arange(5), np.arange(5))
res = np.sum(f(i, j))
```
so you can avoid all loops. It is important to note that the function f needs to be able to work with arrays (a so-called ufunc). If your f is more complicated and doesn't allow arrays, you can use numpy's vectorize function. It is not as fast as a ufunc, but better than nested loops:
```
from numpy import vectorize
f_vec = vectorize(f)
```
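Continuing the sketch above, `res = np.sum(f_vec(i, j))` then works on the meshgrid arrays even when `f` itself cannot handle arrays.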
If you want to stay with plain Python because you want lists rather than arrays, or the types don't match for an array, there is always a list comprehension, which speeds up the loop. Say I and J are the iterables for i and j respectively; then:
```
ij = [f(i,j) for i in I for j in J ]
res = sum(ij)
```
|
11,706,505
|
I just started learning python and I am hoping you guys can help me comprehend things a little better. If you have ever played a pokemon game for the gameboy you'll understand more as to what I am trying to do. I started off with a text adventure where you do simple stuff, but now I am at the point where pokemon battle each other. So this is what I am trying to achieve.
* Pokemon battle starts
* You attack target
* Target loses HP and attacks back
* First one to 0 hp loses
Of course all of this is printed out.
This is what I have for the battle so far, I am not sure how accurate I am right now. Just really looking to see how close I am to doing this correctly.
```
class Pokemon(object):
sName = "pidgy"
nAttack = 5
nHealth = 10
nEvasion = 1
def __init__(self, name, atk, hp, evd):
self.sName = name
self.nAttack = atk
self.nHealth = hp
self.nEvasion = evd
def fight(target, self):
target.nHealth - self.nAttack
def battle():
print "A wild appeared"
#pikachu = Pokemon("Pikafaggot", 18, 80, 21)
pidgy = Pokemon("Pidgy", 18, 80, 21)
pidgy.fight(pikachu)
#pikachu.fight(pidgy)
```
**Full code here:** <http://pastebin.com/ikmRuE5z>
I am also looking for advice on how to manage variables; I seem to be having a grocery list of variables at the top and I assume that is not good practice, where should they go?
|
2012/07/29
|
[
"https://Stackoverflow.com/questions/11706505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1364915/"
] |
If I was to have `fight` as an instance method (which I'm not sure I would), I would probably code it up something like this:
```
class Pokemon(object):
def __init__(self,name,hp,damage):
self.name = name #pokemon name
self.hp = hp #hit-points of this particular pokemon
self.damage = damage #amount of damage this pokemon does every attack
def fight(self,other):
if(self.hp > 0):
print("%s did %d damage to %s"%(self.name,self.damage,other.name))
print("%s has %d hp left"%(other.name,other.hp))
other.hp -= self.damage
return other.fight(self) #Now the other pokemon fights back!
else:
print("%s wins! (%d hp left)"%(other.name,other.hp))
return other,self #return a tuple (winner,loser)
pikachu=Pokemon('pikachu', 100, 10)
pidgy=Pokemon('pidgy', 200, 12)
winner,loser = pidgy.fight(pikachu)
```
Of course, this is somewhat boring since the amount of damage does not depend on type of pokemon and isn't randomized in any way ... but hopefully it illustrates the point.
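If you wanted to address that, a small, purely illustrative tweak (not part of the code above) is to randomize the damage dealt on each attack, e.g. as a drop-in replacement for `fight`:
```
import random

def fight(self, other):
    if self.hp > 0:
        dealt = random.randint(1, self.damage)  # anywhere from 1 up to full damage
        print("%s did %d damage to %s" % (self.name, dealt, other.name))
        other.hp -= dealt
        print("%s has %d hp left" % (other.name, other.hp))
        return other.fight(self)  # the other pokemon fights back
    else:
        print("%s wins! (%d hp left)" % (other.name, other.hp))
        return other, self
```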
As for your class structure:
```
class Foo(object):
attr1=1
attr2=2
def __init__(self,attr1,attr2):
self.attr1 = attr1
self.attr2 = attr2
```
It doesn't really make sense (to me) to declare the class attributes if you're guaranteed to overwrite them in `__init__`. Just use instance attributes and you should be fine (i.e.):
```
class Foo(object):
def __init__(self,attr1,attr2):
self.attr1 = attr1
        self.attr2 = attr2
```
|
1. You don't need the class-level variables up the top. You just need the assignments in the `__init__()` method.
2. The fight method should return a value:
```
def fight(self, target):
target.nHealth -= self.nAttack
return target
```
3. You probably want to also check if someone has lost the battle:
```
def checkWin(myPoke, target):
# Return 1 if myPoke wins, 0 if target wins, -1 if no winner yet.
winner = -1
    if myPoke.nHealth <= 0:
winner = 0
    elif target.nHealth <= 0:
winner = 1
return winner
```
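Putting the pieces together (assuming the question's `Pokemon` class plus the tweaks above), a minimal battle loop might look like this; the second Pokemon's stats are just placeholders:
```
pidgy = Pokemon("Pidgy", 18, 80, 21)
pikachu = Pokemon("Pikachu", 16, 90, 15)

winner = -1
while winner == -1:
    pikachu = pidgy.fight(pikachu)     # Pidgy attacks
    winner = checkWin(pidgy, pikachu)  # 1 -> Pidgy wins, 0 -> Pikachu wins
    if winner == -1:
        pidgy = pikachu.fight(pidgy)   # Pikachu attacks back
        winner = checkWin(pidgy, pikachu)
print("Pidgy wins!" if winner == 1 else "Pikachu wins!")
```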
Hope I helped.
|
11,706,505
|
I just started learning python and I am hoping you guys can help me comprehend things a little better. If you have ever played a pokemon game for the gameboy you'll understand more as to what I am trying to do. I started off with a text adventure where you do simple stuff, but now I am at the point where pokemon battle each other. So this is what I am trying to achieve.
* Pokemon battle starts
* You attack target
* Target loses HP and attacks back
* First one to 0 hp loses
Of course all of this is printed out.
This is what I have for the battle so far, I am not sure how accurate I am right now. Just really looking to see how close I am to doing this correctly.
```
class Pokemon(object):
sName = "pidgy"
nAttack = 5
nHealth = 10
nEvasion = 1
def __init__(self, name, atk, hp, evd):
self.sName = name
self.nAttack = atk
self.nHealth = hp
self.nEvasion = evd
def fight(target, self):
target.nHealth - self.nAttack
def battle():
print "A wild appeared"
#pikachu = Pokemon("Pikafaggot", 18, 80, 21)
pidgy = Pokemon("Pidgy", 18, 80, 21)
pidgy.fight(pikachu)
#pikachu.fight(pidgy)
```
**Full code here:** <http://pastebin.com/ikmRuE5z>
I am also looking for advice on how to manage variables; I seem to be having a grocery list of variables at the top and I assume that is not good practice, where should they go?
|
2012/07/29
|
[
"https://Stackoverflow.com/questions/11706505",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1364915/"
] |
If I was to have `fight` as an instance method (which I'm not sure I would), I would probably code it up something like this:
```
class Pokemon(object):
def __init__(self,name,hp,damage):
self.name = name #pokemon name
self.hp = hp #hit-points of this particular pokemon
self.damage = damage #amount of damage this pokemon does every attack
def fight(self,other):
if(self.hp > 0):
print("%s did %d damage to %s"%(self.name,self.damage,other.name))
print("%s has %d hp left"%(other.name,other.hp))
other.hp -= self.damage
return other.fight(self) #Now the other pokemon fights back!
else:
print("%s wins! (%d hp left)"%(other.name,other.hp))
return other,self #return a tuple (winner,loser)
pikachu=Pokemon('pikachu', 100, 10)
pidgy=Pokemon('pidgy', 200, 12)
winner,loser = pidgy.fight(pikachu)
```
Of course, this is somewhat boring since the amount of damage does not depend on type of pokemon and isn't randomized in any way ... but hopefully it illustrates the point.
As for your class structure:
```
class Foo(object):
attr1=1
attr2=2
def __init__(self,attr1,attr2):
self.attr1 = attr1
self.attr2 = attr2
```
It doesn't really make sense (to me) to declare the class attributes if you're guaranteed to overwrite them in `__init__`. Just use instance attributes and you should be fine (i.e.):
```
class Foo(object):
def __init__(self,attr1,attr2):
self.attr1 = attr1
        self.attr2 = attr2
```
|
*I am only going to comment on a few obvious aspects, because a complete code review is beyond the scope of this site (try [codereview.stackexchange.com](https://codereview.stackexchange.com/))*
Your `fight()` method isn't saving the results of the subtraction, so nothing is changed. You would need to do something like this:
```
def fight(self, target):  # note: self comes first, then the target
    target.nHealth -= self.nAttack
    # check if target is dead now?
```
I might even recommend not imposing a modification on your target directly. It may be better if you can call an `attack(power)` on your target, and let it determine how much damage is done. You can then check if the target is dead yet. Ultimately I would think you would have some "dice" object that would determine the outcomes for you.
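For example, a rough sketch of that idea (the evasion-based damage reduction here is just an illustration, not something from the original code):
```
import random

class Pokemon(object):
    def __init__(self, name, atk, hp, evd):
        self.sName = name
        self.nAttack = atk
        self.nHealth = hp
        self.nEvasion = evd

    def attack(self, power):
        # The target decides how much damage it actually takes.
        damage = max(power - random.randint(0, self.nEvasion), 0)
        self.nHealth -= damage
        return self.nHealth <= 0  # True if this pokemon fainted

    def fight(self, target):
        return target.attack(self.nAttack)
```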
As for globals... just stop using them. It is a bad habit to have them unless you really have a good reason. Have functions that return results to the caller, which you then make use of:
```
def func(foo):
return 'bar'
```
You can however have a module of constants. These are a bunch of values that don't change for the life of the application. They are merely variables that provide common values. You might create a `constants.py` and have stuff like:
```
UP = "up"
DOWN = "down"
DEAD = 0
...
```
... And in your other modules you do:
```
from constants import *
```
|
4,414,767
|
I'm trying to modify an existing [Django Mezzanine](http://mezzanine.jupo.org/) setup to allow me to blog in Markdown. Mezzanine has a "Core" model that has content as an HtmlField which is defined like so:
```
from django.db.models import TextField
class HtmlField(TextField):
"""
TextField that stores HTML.
"""
def formfield(self, **kwargs):
"""
Apply the class to the widget that will render the field as a
TincyMCE Editor.
"""
formfield = super(HtmlField, self).formfield(**kwargs)
formfield.widget.attrs["class"] = "mceEditor"
return formfield
```
The problem comes from the widget.attrs["class"] of mceEditor. My thoughts were to monkey patch the Content field on the Blog object
```
class BlogPost(Displayable, Ownable, Content):
def __init__(self, *args, **kwargs):
super(BlogPost, self).__init__(*args, **kwargs)
self._meta.get_field('content').formfield = XXX
```
My problem is that my Python skills aren't up to the task of replacing a bound method with a lambda that calls `super`.
formfield is called by the admin when it wants to create a field for display on the admin pages, so I need to patch that to make the BlogPost widget objects NOT have the class of mceEditor (I'm trying to leave mceEditor on all the other things)
How do you craft the replacement function? I'm pretty sure I attach it with
```
setattr(self._meta.get_field('content'), 'formfield', method_i_dont_know_how_to_write)
```
|
2010/12/11
|
[
"https://Stackoverflow.com/questions/4414767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/147562/"
] |
If you mean back references, then yes Java has this. You can refer to a capturing group inside a regular expression using the notation `\1` for the first group, `\2` for the second, etc. Note that inside a string literal the backslashes must be escaped.
|
The Java `java.util.regex.Pattern` class supports backreferences using the `\n` syntax.
See [the documentation](http://download.oracle.com/javase/1.5.0/docs/api/java/util/regex/Pattern.html) for more details.
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
Well here're some workarounds
**1.** In your particular case you could do it with one extra:
```
if use_date_due:
sum_qs = sum_qs.extra(select={
'year': 'EXTRACT(year FROM coalesce(date_due, date))',
'month': 'EXTRACT(month FROM coalesce(date_due, date))',
'is_paid':'date_paid IS NOT NULL'
})
```
**2.** It's also possible to use plain python to get data you need:
```
for x in sum_qs:
chosen_date = x.date_due if use_date_due and x.date_due else x.date
print chosen_date.year, chosen_date.month
```
or
```
[(y.year, y.month) for y in (x.date_due if use_date_due and x.date_due else x.date for x in sum_qs)]
```
**3.** In the SQL world this type of calculated field is usually handled with a subquery or a [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html). I like CTEs more because of their readability. It could look like:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1
```
you can also chain as many cte as you want:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
), cte2 as (
select
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
    from cte1
)
select
year, month, sum(is_paid) as paid_count
from cte2
group by year, month
```
so in django you can use [raw query](https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries) like:
```
Invoice.objects.raw('''
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1''')
```
and you will have Invoice objects with some additional properties.
**4.** Or you can simply substitute fields in your query with plain python
```
if use_date_due:
chosen_date = 'coalesce(date_due, date)'
else:
chosen_date = 'date'
year = 'extract(year from {})'.format(chosen_date)
month = 'extract(month from {})'.format(chosen_date)
fields = {'year': year, 'month': month, 'is_paid': 'date_paid is not null', 'chosen_date': chosen_date}
sum_qs = sum_qs.extra(select=fields)
```
|
Would this work?:
```
from django.db import connection, transaction
cursor = connection.cursor()
sql = """
SELECT
%s AS year,
%s AS month,
date_paid IS NOT NULL as is_paid
FROM (
SELECT
(CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date, *
FROM
invoice_invoice
) as t1;
""" % (connection.ops.date_extract_sql('year', 'chosen_date'),
connection.ops.date_extract_sql('month', 'chosen_date'))
# Data retrieval operation - no commit required
cursor.execute(sql)
rows = cursor.fetchall()
```
I think it's pretty safe; both CASE WHEN and IS NOT NULL are fairly DB-agnostic, at least I assume they are, since they are used in Django's own tests in raw form.
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
Well here're some workarounds
**1.** In your particular case you could do it with one extra:
```
if use_date_due:
sum_qs = sum_qs.extra(select={
'year': 'EXTRACT(year FROM coalesce(date_due, date))',
'month': 'EXTRACT(month FROM coalesce(date_due, date))',
'is_paid':'date_paid IS NOT NULL'
})
```
**2.** It's also possible to use plain python to get data you need:
```
for x in sum_qs:
chosen_date = x.date_due if use_date_due and x.date_due else x.date
print chosen_date.year, chosen_date.month
```
or
```
[(y.year, y.month) for y in (x.date_due if use_date_due and x.date_due else x.date for x in sum_qs)]
```
**3.** In the SQL world this type of calculated field is usually handled with a subquery or a [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html). I like CTEs more because of their readability. It could look like:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1
```
you can also chain as many cte as you want:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
), cte2 as (
select
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
    from cte1
)
select
year, month, sum(is_paid) as paid_count
from cte2
group by year, month
```
so in django you can use [raw query](https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries) like:
```
Invoice.objects.raw('''
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1''')
```
and you will have Invoice objects with some additional properties.
**4.** Or you can simply substitute fields in your query with plain python
```
if use_date_due:
chosen_date = 'coalesce(date_due, date)'
else:
chosen_date = 'date'
year = 'extract(year from {})'.format(chosen_date)
month = 'extract(month from {})'.format(chosen_date)
fields = {'year': year, 'month': month, 'is_paid': 'date_paid is not null', 'chosen_date': chosen_date}
sum_qs = sum_qs.extra(select=fields)
```
|
You could add a property to your model definition and then do :
```
@property
def chosen_date(self):
return self.due_date if self.due_date else self.date
```
This assumes you can always fallback to date.If you prefer you can catch a DoesNotExist exception on due\_date and then check for the second one.
You access the property as you would anything else.
As for the other query, I wouldn't use SQL to extract the y/m/d from the date, just use
```
model_instance.chosen_date.year
```
chosen\_date should be a python date object (if you're using DateField in the ORM and this field is in a model)
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
Well here're some workarounds
**1.** In your particular case you could do it with one extra:
```
if use_date_due:
sum_qs = sum_qs.extra(select={
'year': 'EXTRACT(year FROM coalesce(date_due, date))',
'month': 'EXTRACT(month FROM coalesce(date_due, date))',
'is_paid':'date_paid IS NOT NULL'
})
```
**2.** It's also possible to use plain python to get data you need:
```
for x in sum_qs:
chosen_date = x.date_due if use_date_due and x.date_due else x.date
print chosen_date.year, chosen_date.month
```
or
```
[(y.year, y.month) for y in (x.date_due if use_date_due and x.date_due else x.date for x in sum_qs)]
```
**3.** In the SQL world this type of calculated field is usually handled with a subquery or a [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html). I like CTEs more because of their readability. It could look like:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1
```
you can also chain as many cte as you want:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
), cte2 as (
select
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
    from cte1
)
select
year, month, sum(is_paid) as paid_count
from cte2
group by year, month
```
so in django you can use [raw query](https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries) like:
```
Invoice.objects.raw('''
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1''')
```
and you will have Invoice objects with some additional properties.
**4.** Or you can simply substitute fields in your query with plain python
```
if use_date_due:
chosen_date = 'coalesce(date_due, date)'
else:
chosen_date = 'date'
year = 'extract(year from {})'.format(chosen_date)
month = 'extract(month from {})'.format(chosen_date)
fields = {'year': year, 'month': month, 'is_paid':'date_paid is not null'}, 'chosen_date':chosen_date)
sum_qs = sum_qs.extra(select = fields)
```
|
Just use the raw sql. The raw() manager method can be used to perform raw SQL queries that return model instances.
<https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries>
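For example, something along these lines (a sketch; the `Invoice` model name and the column choices are assumed from the question):
```
invoices = Invoice.objects.raw("""
    SELECT *,
           COALESCE(date_due, date) AS chosen_date,
           (date_paid IS NOT NULL) AS is_paid
    FROM invoice_invoice
""")
for inv in invoices:
    print(inv.chosen_date.year, inv.chosen_date.month, inv.is_paid)
```
The extra selected columns show up as attributes on the returned `Invoice` instances.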
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
**Short answer:**
If you create an aliased (or computed) column using `extra(select=...)`
then you cannot use the aliased column in a subsequent call to `filter()`.
Also, as you've discovered, you can't use the aliased column in later calls to
`extra(select=...)` or `extra(where=...)`.
**An attempt to explain why:**
For example:
```
qs = MyModel.objects.extra(select={'alias_col': 'title'})
#FieldError: Cannot resolve keyword 'alias_col' into field...
filter_qs = qs.filter(alias_col='Camembert')
#DatabaseError: column "alias_col" does not exist
extra_qs = qs.extra(select={'another_alias': 'alias_col'})
```
`filter_qs` will try to produce a query like:
```
SELECT (title) AS "alias_col", "myapp_mymodel"."title"
FROM "myapp_mymodel"
WHERE alias_col = "Camembert";
```
And `extra_qs` tries something like:
```
SELECT (title) AS "alias_col", (alias_col) AS "another_alias",
"myapp_mymodel"."title"
FROM "myapp_mymodel";
```
Neither of those is valid SQL. In general, if you want to use a computed column's alias multiple times in the SELECT or WHERE clauses of query you actually need to compute it each time. This is why Roman Pekar's answer solves your specific problem - instead of trying to compute `chosen_date` once and then use it again later he computes it each time it's needed.
---
You mention Annotation/Aggregation in your question. You can use `filter()` on the aliases created by `annotate()` (so I'd be interested in seeing the similar errors you're talking about, it's been fairly robust in my experience). That's because when you try to filter on an alias created by annotate, the ORM recognizes what you're doing and replaces the alias with the computation that created it.
So as an example:
```
qs = MyModel.objects.annotate(alias_col=Max('id'))
qs = qs.filter(alias_col__gt=0)
```
Produces something like:
```
SELECT "myapp_mymodel"."id", "myapp_mymodel"."title",
MAX("myapp_mymodel"."id") AS "alias_col"
FROM "myapp_mymodel"
GROUP BY "myapp_mymodel"."id", "myapp_mymodel"."title"
HAVING MAX("myapp_mymodel"."id") > 0;
```
Using "HAVING MAX alias\_col > 0" wouldn't work.
---
I hope that's helpful. If there's anything I've explained badly let me know and I'll see if I can improve it.
|
Well here're some workarounds
**1.** In your particular case you could do it with one extra:
```
if use_date_due:
sum_qs = sum_qs.extra(select={
'year': 'EXTRACT(year FROM coalesce(date_due, date))',
'month': 'EXTRACT(month FROM coalesce(date_due, date))',
'is_paid':'date_paid IS NOT NULL'
})
```
**2.** It's also possible to use plain python to get data you need:
```
for x in sum_qs:
chosen_date = x.date_due if use_date_due and x.date_due else x.date
print chosen_date.year, chosen_date.month
```
or
```
[(y.year, y.month) for y in (x.date_due if use_date_due and x.date_due else x.date for x in sum_qs)]
```
**3.** In the SQL world this type of calculated field is usually handled with a subquery or a [common table expression](http://www.postgresql.org/docs/current/static/queries-with.html). I like CTEs more because of their readability. It could look like:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1
```
you can also chain as many cte as you want:
```
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
), cte2 as (
select
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
    from cte1
)
select
year, month, sum(is_paid) as paid_count
from cte2
group by year, month
```
so in django you can use [raw query](https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries) like:
```
Invoice.objects.raw('''
with cte1 as (
select
*, coalesce(date_due, date) as chosen_date
from polls_invoice
)
select
*,
extract(year from chosen_date) as year,
extract(month from chosen_date) as month,
case when date_paid is not null then 1 else 0 end as is_paid
from cte1''')
```
and you will have Invoice objects with some additional properties.
**4.** Or you can simply substitute fields in your query with plain python
```
if use_date_due:
chosen_date = 'coalesce(date_due, date)'
else:
chosen_date = 'date'
year = 'extract(year from {})'.format(chosen_date)
month = 'extract(month from {})'.format(chosen_date)
fields = {'year': year, 'month': month, 'is_paid':'date_paid is not null'}, 'chosen_date':chosen_date)
sum_qs = sum_qs.extra(select = fields)
```
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
**Short answer:**
If you create an aliased (or computed) column using `extra(select=...)`
then you cannot use the aliased column in a subsequent call to `filter()`.
Also, as you've discovered, you can't use the aliased column in later calls to
`extra(select=...)` or `extra(where=...)`.
**An attempt to explain why:**
For example:
```
qs = MyModel.objects.extra(select={'alias_col': 'title'})
#FieldError: Cannot resolve keyword 'alias_col' into field...
filter_qs = qs.filter(alias_col='Camembert')
#DatabaseError: column "alias_col" does not exist
extra_qs = qs.extra(select={'another_alias': 'alias_col'})
```
`filter_qs` will try to produce a query like:
```
SELECT (title) AS "alias_col", "myapp_mymodel"."title"
FROM "myapp_mymodel"
WHERE alias_col = "Camembert";
```
And `extra_qs` tries something like:
```
SELECT (title) AS "alias_col", (alias_col) AS "another_alias",
"myapp_mymodel"."title"
FROM "myapp_mymodel";
```
Neither of those is valid SQL. In general, if you want to use a computed column's alias multiple times in the SELECT or WHERE clauses of query you actually need to compute it each time. This is why Roman Pekar's answer solves your specific problem - instead of trying to compute `chosen_date` once and then use it again later he computes it each time it's needed.
---
You mention Annotation/Aggregation in your question. You can use `filter()` on the aliases created by `annotate()` (so I'd be interested in seeing the similar errors you're talking about, it's been fairly robust in my experience). That's because when you try to filter on an alias created by annotate, the ORM recognizes what you're doing and replaces the alias with the computation that created it.
So as an example:
```
qs = MyModel.objects.annotate(alias_col=Max('id'))
qs = qs.filter(alias_col__gt=0)
```
Produces something like:
```
SELECT "myapp_mymodel"."id", "myapp_mymodel"."title",
MAX("myapp_mymodel"."id") AS "alias_col"
FROM "myapp_mymodel"
GROUP BY "myapp_mymodel"."id", "myapp_mymodel"."title"
HAVING MAX("myapp_mymodel"."id") > 0;
```
Using "HAVING MAX alias\_col > 0" wouldn't work.
---
I hope that's helpful. If there's anything I've explained badly let me know and I'll see if I can improve it.
|
Would this work?:
```
from django.db import connection, transaction
cursor = connection.cursor()
sql = """
SELECT
%s AS year,
%s AS month,
date_paid IS NOT NULL as is_paid
FROM (
SELECT
(CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date, *
FROM
invoice_invoice
) as t1;
""" % (connection.ops.date_extract_sql('year', 'chosen_date'),
connection.ops.date_extract_sql('month', 'chosen_date'))
# Data retrieval operation - no commit required
cursor.execute(sql)
rows = cursor.fetchall()
```
I think it's pretty safe; both CASE WHEN and IS NOT NULL are fairly DB-agnostic, at least I assume they are, since they are used in Django's own tests in raw form.
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
**Short answer:**
If you create an aliased (or computed) column using `extra(select=...)`
then you cannot use the aliased column in a subsequent call to `filter()`.
Also, as you've discovered, you can't use the aliased column in later calls to
`extra(select=...)` or `extra(where=...)`.
**An attempt to explain why:**
For example:
```
qs = MyModel.objects.extra(select={'alias_col': 'title'})
#FieldError: Cannot resolve keyword 'alias_col' into field...
filter_qs = qs.filter(alias_col='Camembert')
#DatabaseError: column "alias_col" does not exist
extra_qs = qs.extra(select={'another_alias': 'alias_col'})
```
`filter_qs` will try to produce a query like:
```
SELECT (title) AS "alias_col", "myapp_mymodel"."title"
FROM "myapp_mymodel"
WHERE alias_col = "Camembert";
```
And `extra_qs` tries something like:
```
SELECT (title) AS "alias_col", (alias_col) AS "another_alias",
"myapp_mymodel"."title"
FROM "myapp_mymodel";
```
Neither of those is valid SQL. In general, if you want to use a computed column's alias multiple times in the SELECT or WHERE clauses of query you actually need to compute it each time. This is why Roman Pekar's answer solves your specific problem - instead of trying to compute `chosen_date` once and then use it again later he computes it each time it's needed.
---
You mention Annotation/Aggregation in your question. You can use `filter()` on the aliases created by `annotate()` (so I'd be interested in seeing the similar errors you're talking about, it's been fairly robust in my experience). That's because when you try to filter on an alias created by annotate, the ORM recognizes what you're doing and replaces the alias with the computation that created it.
So as an example:
```
qs = MyModel.objects.annotate(alias_col=Max('id'))
qs = qs.filter(alias_col__gt=0)
```
Produces something like:
```
SELECT "myapp_mymodel"."id", "myapp_mymodel"."title",
MAX("myapp_mymodel"."id") AS "alias_col"
FROM "myapp_mymodel"
GROUP BY "myapp_mymodel"."id", "myapp_mymodel"."title"
HAVING MAX("myapp_mymodel"."id") > 0;
```
Using "HAVING MAX alias\_col > 0" wouldn't work.
---
I hope that's helpful. If there's anything I've explained badly let me know and I'll see if I can improve it.
|
You could add a property to your model definition and then do :
```
@property
def chosen_date(self):
return self.due_date if self.due_date else self.date
```
This assumes you can always fallback to date.If you prefer you can catch a DoesNotExist exception on due\_date and then check for the second one.
You access the property as you would anything else.
As for the other query, I wouldn't use SQL to extract the y/m/d from the date, just use
```
model_instance.chosen_date.year
```
chosen\_date should be a python date object (if you're using DateField in the ORM and this field is in a model)
|
18,269,218
|
I'm trying to use django's queryset API to emulate the following query:
```
SELECT EXTRACT(year FROM chosen_date) AS year,
EXTRACT(month FROM chosen_date) AS month,
date_paid IS NOT NULL as is_paid FROM
(SELECT (CASE WHEN date_due IS NULL THEN date_due ELSE date END) AS chosen_date,* FROM invoice_invoice) as t1;
```
The idea is mainly that in certain situations I'd rather use the `date_due` column than the `date` column, but since `date_due` is optional I sometimes have to use `date` as a fallback anyway, and I create a computed column `chosen_date` so I don't have to change the rest of the queries.
Here was a first stab I did at emulating this; I was unable to really see how to properly do the null test with the base API, so I went with `extra`:
```
if(use_date_due):
sum_qs = sum_qs.extra(select={'chosen_date': 'CASE WHEN date_due IS NULL THEN date ELSE date_due END'})
else:
sum_qs = sum_qs.extra(select={'chosen_date':'date'})
sum_qs = sum_qs.extra(select={'year': 'EXTRACT(year FROM chosen_date)',
'month': 'EXTRACT(month FROM chosen_date)',
'is_paid':'date_paid IS NOT NULL'})
```
But the issue I'm having is when I run the second query, I get an error on how the `chosen_date` column doesn't exist. I've had similar errors later on when trying to use computed columns (like from within `annotate()` calls), but haven't found anything in the documentation about how computed columns differ from "base" ones. Does anyone have any insight on this?
(edited python code because previous version had an obvious logic flaw (forgot the else branch). still doesn't work)
|
2013/08/16
|
[
"https://Stackoverflow.com/questions/18269218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/122757/"
] |
**Short answer:**
If you create an aliased (or computed) column using `extra(select=...)`
then you cannot use the aliased column in a subsequent call to `filter()`.
Also, as you've discovered, you can't use the aliased column in later calls to
`extra(select=...)` or `extra(where=...)`.
**An attempt to explain why:**
For example:
```
qs = MyModel.objects.extra(select={'alias_col': 'title'})
#FieldError: Cannot resolve keyword 'alias_col' into field...
filter_qs = qs.filter(alias_col='Camembert')
#DatabaseError: column "alias_col" does not exist
extra_qs = qs.extra(select={'another_alias': 'alias_col'})
```
`filter_qs` will try to produce a query like:
```
SELECT (title) AS "alias_col", "myapp_mymodel"."title"
FROM "myapp_mymodel"
WHERE alias_col = "Camembert";
```
And `extra_qs` tries something like:
```
SELECT (title) AS "alias_col", (alias_col) AS "another_alias",
"myapp_mymodel"."title"
FROM "myapp_mymodel";
```
Neither of those is valid SQL. In general, if you want to use a computed column's alias multiple times in the SELECT or WHERE clauses of query you actually need to compute it each time. This is why Roman Pekar's answer solves your specific problem - instead of trying to compute `chosen_date` once and then use it again later he computes it each time it's needed.
---
You mention Annotation/Aggregation in your question. You can use `filter()` on the aliases created by `annotate()` (so I'd be interested in seeing the similar errors you're talking about, it's been fairly robust in my experience). That's because when you try to filter on an alias created by annotate, the ORM recognizes what you're doing and replaces the alias with the computation that created it.
So as an example:
```
qs = MyModel.objects.annotate(alias_col=Max('id'))
qs = qs.filter(alias_col__gt=0)
```
Produces something like:
```
SELECT "myapp_mymodel"."id", "myapp_mymodel"."title",
MAX("myapp_mymodel"."id") AS "alias_col"
FROM "myapp_mymodel"
GROUP BY "myapp_mymodel"."id", "myapp_mymodel"."title"
HAVING MAX("myapp_mymodel"."id") > 0;
```
Using "HAVING MAX alias\_col > 0" wouldn't work.
---
I hope that's helpful. If there's anything I've explained badly let me know and I'll see if I can improve it.
|
Just use the raw sql. The raw() manager method can be used to perform raw SQL queries that return model instances.
<https://docs.djangoproject.com/en/1.5/topics/db/sql/#performing-raw-sql-queries>
|
67,800,225
|
I have an Elasticsearch cluster.
I am currently designing a Python service for a client to read from and write to my Elasticsearch. The Python service will not be maintained by me; only the Python service will call our Elasticsearch internally for fetching and writing.
Is there any way to configure Elasticsearch so that we can tell which requests are coming from the Python service, or any way to pass some extra fields while querying so that, based on those fields, we can find them in the logs?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67800225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8989219/"
] |
```
SELECT id
FROM books EXCEPT (
SELECT bookid FROM books_authors WHERE authorId='A2'
)
```
|
```
SELECT * FROM books
WHERE id NOT IN
(SELECT bookid FROM books_authors WHERE authorid = 'A2')
```
|
67,800,225
|
I have an Elasticsearch cluster.
I am currently designing a Python service for a client to read from and write to my Elasticsearch. The Python service will not be maintained by me; only the Python service will call our Elasticsearch internally for fetching and writing.
Is there any way to configure Elasticsearch so that we can tell which requests are coming from the Python service, or any way to pass some extra fields while querying so that, based on those fields, we can find them in the logs?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67800225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8989219/"
] |
```
SELECT * FROM books
WHERE id NOT IN
(SELECT bookid FROM books_authors WHERE authorid = 'A2')
```
|
Query using JOIN:
```
SELECT DISTINCT books.*
FROM books
LEFT JOIN books_authors ON books_authors.bookId = books.id AND authorId <> 'A2'
WHERE authorId IS NOT NULL;
```
[PostgreSQL fiddle](https://sqlize.online/?phpses=null&sqlses=5d017a400d41f0abd33c3f0e1d83e69a&php_version=null&sql_version=psql12)
Result:
```
+====+==================+
| id | name |
+====+==================+
| B4 | how to be a book |
+----+------------------+
| B1 | the book |
+----+------------------+
```
|
67,800,225
|
I have an Elasticsearch cluster.
I am currently designing a Python service for a client to read from and write to my Elasticsearch. The Python service will not be maintained by me; only the Python service will call our Elasticsearch internally for fetching and writing.
Is there any way to configure Elasticsearch so that we can tell which requests are coming from the Python service, or any way to pass some extra fields while querying so that, based on those fields, we can find them in the logs?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67800225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8989219/"
] |
```
SELECT * FROM books
WHERE id NOT IN
(SELECT bookid FROM books_authors WHERE authorid = 'A2')
```
|
I would suggest `not exists`:
```
select b.*
from books b
where not exists (select 1
from book_authors ba join
authors a
on ba.authorid = a.id
where ba.bookid = b.id and
a.name = 'Dewey'
);
```
This assumes that you want the query to refer to the author by *name* and not by the id.
|
67,800,225
|
I have an Elasticsearch cluster.
I am currently designing a Python service for a client to read from and write to my Elasticsearch. The Python service will not be maintained by me; only the Python service will call our Elasticsearch internally for fetching and writing.
Is there any way to configure Elasticsearch so that we can tell which requests are coming from the Python service, or any way to pass some extra fields while querying so that, based on those fields, we can find them in the logs?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67800225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8989219/"
] |
```
SELECT id
FROM books EXCEPT (
SELECT bookid FROM books_authors WHERE authorId='A2'
)
```
|
Query using JOIN:
```
SELECT DISTINCT books.*
FROM books
LEFT JOIN books_authors ON books_authors.bookId = books.id AND authorId <> 'A2'
WHERE authorId IS NOT NULL;
```
[PostgreSQL fiddle](https://sqlize.online/?phpses=null&sqlses=5d017a400d41f0abd33c3f0e1d83e69a&php_version=null&sql_version=psql12)
Result:
```
+====+==================+
| id | name |
+====+==================+
| B4 | how to be a book |
+----+------------------+
| B1 | the book |
+----+------------------+
```
|
67,800,225
|
I have an Elasticsearch cluster.
I am currently designing a Python service for a client to read from and write to my Elasticsearch. The Python service will not be maintained by me; only the Python service will call our Elasticsearch internally for fetching and writing.
Is there any way to configure Elasticsearch so that we can tell which requests are coming from the Python service, or any way to pass some extra fields while querying so that, based on those fields, we can find them in the logs?
|
2021/06/02
|
[
"https://Stackoverflow.com/questions/67800225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8989219/"
] |
```
SELECT id
FROM books EXCEPT (
SELECT bookid FROM books_authors WHERE authorId='A2'
)
```
|
I would suggest `not exists`:
```
select b.*
from books b
where not exists (select 1
from book_authors ba join
authors a
on ba.authorid = a.id
where ba.bookid = b.id and
a.name = 'Dewey'
);
```
This assumes that you want the query to refer to the author by *name* and not by the id.
|
63,483,417
|
Say I have a dataframe
```
id category
1 A
2 A
3 B
4 C
5 A
```
And I want to create a new column with incremental values where `category == 'A'`. So it should be something like.
```
id category value
1 A 1
2 A 2
3 B NaN
4 C NaN
5 A 3
```
Currently I am able to do this with
```
df['value'] = np.nan
df.loc[df.category == "A", ['value']] = range(1, len(df[df.category == "A"]) + 1)
```
Is there a better/pythonic way to do this (i.e. one where I don't have to initialize the value column with nan)? Also, this method currently assigns a float type instead of the integer type I want.
|
2020/08/19
|
[
"https://Stackoverflow.com/questions/63483417",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1676881/"
] |
The following code for GNU sed:
```
sed 's/EndHere/&\n/g; s/\(StartHere\)[^\n]*\(EndHere\|$\)/\1\2/g; s/\n//g' <<EOF
StartHere Word1 EndHere
StartHere Word2
StartHere Word2 EndHere something else
something else StartHere Word2 EndHere something else
EOF
```
outputs:
```
StartHereEndHere
StartHere
StartHereEndHere something else
something else StartHereEndHere something else
```
>
> I am sure here that the word that i am deleting after is there only once per line
>
>
>
Then you could:
```
sed 's/\(StartHere\).*\(EndHere\)/\1\2/; t; s/\(StartHere\).*$/\1/'
```
The `t` command will end processing of the current line if the last `s` command was successful. So... it will work.
|
Instead of using `sed`, you could do it with Perl, which supports [negative lookahead](http://www.regular-expressions.info/lookaround.html).
Using the example you gave in your comment:
```
$ echo "oooo StartHere=Yo9897 EndHereYo" \
| perl -pe 's/(StartHere) (?: .*(EndHere) | .*(?!EndHere) )/$1$2/x'
```
would output "oooo StartHereEndHereYo".
`(?!...)` is a "negative lookahead"
The Perl's `x` regex option allows using spaces in the regex to make it (slightly) more readable
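The same idea ports to Python's `re` module, which also supports lookarounds; a rough sketch of the two-step variant (collapse up to `EndHere` if present, otherwise delete to the end of the line):
```
import re

def strip_between(line):
    # First try to collapse StartHere ... EndHere; if nothing matched,
    # fall back to deleting from StartHere to the end of the line.
    new, n = re.subn(r"(StartHere).*(EndHere)", r"\1\2", line)
    if n == 0:
        new = re.sub(r"(StartHere).*$", r"\1", new)
    return new

print(strip_between("oooo StartHere=Yo9897 EndHereYo"))  # oooo StartHereEndHereYo
```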
|
42,357,563
|
I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
Why is this?
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] |
Think about what `reduce` does. It takes the output of one call and uses it as the first argument when calling the same function again. So imagine n is 4. Suppose you call your lambda `f`. Then you are doing `f(f(1, 2), 3)`. That is equivalent to:
```
(1**2 + 2**2)**2 + 3**2
```
Because the first argument to your lambda is squared, your first sum of squares will be squared again on the next call, and then that sum will be squared again on the next, and so on.
|
You're only supposed to square each successive element. `x**2 + y**2` squares the running total (`x`) as well as each successive element (`y`). Change that to `x + y**2` and you'll get the correct result. Note that, as mentioned in comments, this requires a proper initial value as well, so you should pass `0` as the optional third argument.
```
>>> sum(x ** 2 for x in range(5,15))
985
>>> reduce(lambda x, y: x + y**2, range(5,15))
965
>>> reduce(lambda x, y: x + y**2, range(5,15), 0)
985
```
|
42,357,563
|
I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
Why is this?
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] |
You're only supposed to square each successive element. `x**2 + y**2` squares the running total (`x`) as well as each successive element (`y`). Change that to `x + y**2` and you'll get the correct result. Note that, as mentioned in comments, this requires a proper initial value as well, so you should pass `0` as the optional third argument.
```
>>> sum(x ** 2 for x in range(5,15))
985
>>> reduce(lambda x, y: x + y**2, range(5,15))
965
>>> reduce(lambda x, y: x + y**2, range(5,15), 0)
985
```
|
(for python 3 users)
```
from functools import reduce
```
To be able to use `reduce`.
Second, your function `sum_of_squares_lambda` won't work the way you expect it to, because `reduce` feeds each intermediate result back in as the first argument of the lambda, so every partial sum gets squared again on the next call.
|
42,357,563
|
I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
Why is this?
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] |
You're only supposed to square each successive element. `x**2 + y**2` squares the running total (`x`) as well as each successive element (`y`). Change that to `x + y**2` and you'll get the correct result. Note that, as mentioned in comments, this requires a proper initial value as well, so you should pass `0` as the optional third argument.
```
>>> sum(x ** 2 for x in range(5,15))
985
>>> reduce(lambda x, y: x + y**2, range(5,15))
965
>>> reduce(lambda x, y: x + y**2, range(5,15), 0)
985
```
|
Suppose you plug in 4:
```
1^2 + 2^2 = 5 (you then take this result and put it back in the lambda)
5^2 + 3^2 = 34 (for reference the correct answer is 14)
```
It squares the "previous" sum as well in the `lambda`.
(You also have an off by one issue, it should be `n+1` in your range calls.)
|
42,357,563
|
I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
Why is this?
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] |
Think about what `reduce` does. It takes the output of one call and uses it as the first argument when calling the same function again. So imagine n is 4. Suppose you call your lambda `f`. Then you are doing `f(f(1, 2), 3)`. That is equivalent to:
```
(1**2 + 2**2)**2 + 3**2
```
Because the first argument to your lambda is squared, your first sum of squares will be squared again on the next call, and then that sum will be squared again on the next, and so on.
|
(for python 3 users)
```
from functools import reduce
```
To be able to use `reduce`.
Second, your function `sum_of_squares_lambda` won't work the way you expect it to, because `reduce` feeds each intermediate result back in as the first argument of the lambda, so every partial sum gets squared again on the next call.
|
42,357,563
|
I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
Why is this?
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] |
Think about what `reduce` does. It takes the output of one call and uses it as the first argument when calling the same function again. So imagine n is 4. Suppose you call your lambda `f`. Then you are doing `f(f(1, 2), 3)`. That is equivalent to:
```
(1**2 + 2**2)**2 + 3**2
```
Because the first argument to your lambda is squared, your first sum of squares will be squared again on the next call, and then that sum will be squared again on the next, and so on.
|
Suppose you plug in 4:
```
1^2 + 2^2 = 5 (you then take this result and put it back in the lambda)
5^2 + 3^2 = 34 (for reference the correct answer is 14)
```
It squares the "previous" sum as well in the `lambda`.
(You also have an off by one issue, it should be `n+1` in your range calls.)
|
42,357,563
|
I am thinking of different ways to take the sum of squares in python. I have found that the following works using list comprehensions:
```
def sum_of_squares(n):
return sum(x ** 2 for x in range(1, n))
```
But, when using lambda functions, the following does not compute:
```
def sum_of_squares_lambda(n):
return reduce(lambda x, y: x**2 + y**2, range(1, n))
```
Why is this?
|
2017/02/21
|
[
"https://Stackoverflow.com/questions/42357563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4382972/"
] |
Suppose you plug in 4:
```
1^2 + 2^2 = 5 (you then take this result and put it back in the lambda)
5^2 + 3^2 = 34 (for reference the correct answer is 14)
```
It squares the "previous" sum as well in the `lambda`.
(You also have an off by one issue, it should be `n+1` in your range calls.)
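Putting both fixes together, a minimal sketch of a working `reduce` version (the accumulator is left unsquared, `0` is the initial value, and `n + 1` includes `n` itself):
```
from functools import reduce  # needed on Python 3

def sum_of_squares_reduce(n):
    # acc is the running total; only the new element y gets squared.
    return reduce(lambda acc, y: acc + y**2, range(1, n + 1), 0)

print(sum_of_squares_reduce(4))  # 30 (= 1 + 4 + 9 + 16)
```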
|
(for python 3 users)
```
from functools import reduce
```
To be able to use `reduce`.
Second, your function `sum_of_squares_lambda` won't work the way you expect it to, because `reduce` feeds each intermediate result back in as the first argument of the lambda, so every partial sum gets squared again on the next call.
|
36,663,727
|
I have two UI windows created with QT Designer. I have two separate python scripts for each UI. What I'm trying to do is the first script opens a window, creates a thread that looks for a certain condition, then when found, opens the second UI. Then the second UI creates a thread, and when done, opens the first UI.
This seems to work fine, here's the partial code that is fired when the signal is called:
```
def run_fuel(self):
self.st_window = L79Fuel.FuelWindow(self)
self.thread.exit()
self.st_window.show()
self.destroy()
```
So that appears to work fine. I am still unsure of the proper way to kill the thread, the docs seem to state exit() or quit(). But...the new window from the other script (L79Fuel.py) is shown and the old window destroyed.
Then the new window does some things, and again when a signal is called, it triggers an similar function that I'd like to close that window, and reopen the first window.
```
def start_first(self):
self.r_window = L79Tools.FirstWindow(self)
self.thread.exit()
self.r_window.show()
self.destroy()
```
And this just exits with a code 0. I stepped through it with a debugger, and what seems to be happening is it runs through `start_first`, does everything in the function, and then goes back to the first window's `sys.exit(app.exec_())`, does that line, and then loops back to the `start_first` function (the second window) and executes that code again, in a loop, over and over.
I'm stumped. I've read as much as I could find, but nothing seems to address this. I'm guessing there's something I'm doing wrong with the threading (both windows have a thread going) and I'm not killing the threads correctly, or something along those lines.
|
2016/04/16
|
[
"https://Stackoverflow.com/questions/36663727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5897068/"
] |
Checkout the docs here:
<https://code.visualstudio.com/Docs/customization/colorizer>
You basically either get one from the marketplace or generate a basic editable file with yeoman.
You can also add themes even from color sublime as described here:
<https://code.visualstudio.com/docs/customization/themes>
|
Install the theme you wish to start from via Extensions.
Then find where the theme got installed. On Windows it would be `%USERPROFILE%\.vscode\extensions`; see details in [Installing extensions](https://code.visualstudio.com/docs/extensions/install-extension).
There you'll find the folder with the theme; inside it is a `themes` folder and a `<something>.tmTheme` file, which is actually an XML file. Open it inside VSCode and start editing :)
You'll find items and colors; the syntax is described elsewhere, but common sense will help you.
To test a change, open the desired .cs file in the same editor. Changes are applied after a restart, so it's also good to make a [key shortcut](https://code.visualstudio.com/docs/customization/keybindings) to restart the editor:
keybindings.json
```
...
{
"key": "ctrl+shift+alt+r",
"command": "workbench.action.reloadWindow"
}
...
```
Then try color, restart, see result, continue...
|
26,259,870
|
I am new to java (well I played with it a few times), and I am wondering:
=> How do I do *fast* independent prototypes? Something like one-file projects.
For the last few years, I have worked with Python. Each time I had to develop some new functionality or algorithm, I would make a simple Python module (i.e. file) just for it. I could then integrate all or part of it into my projects. So, how should I translate such a "modular-development" workflow into a Java context?
Now I am working on a relatively complex Java DB+web project using Spring, Maven and IntelliJ, and I can't see how to easily develop and run independent code in this context.
**Edit:**
I think my question is unclear because I confused two things:
1. fast developement and test of code snippets
2. incremental development
In my experience (with python, no web), I could pass from the first to the second seemlessly.
For the sake of consistency with the title, **the 1st has priority**. However it is good only for exploration purpose. In practice, the 2nd is more important.
|
2014/10/08
|
[
"https://Stackoverflow.com/questions/26259870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1206998/"
] |
Definitely take a look at [Spring Boot](http://docs.spring.io/spring-boot/docs/1.2.2.RELEASE/reference/htmlsingle/#getting-started-introducing-spring-boot). It is a relatively new project whose aim is to remove the initial configuration phase and spin up Spring apps quickly. You can think of it as a convention-over-configuration wrapper on top of the Spring Framework.
It's also considered a good fit for a micro-services architecture.
It has an embedded servlet container (Jetty/Tomcat), so you don't need to configure one.
It also ships various bulk dependencies for different technology combinations/stacks, so you can pick a good fit for your needs.
|
What does "develop and run independent code in this context" mean?
Do you mean "small standalone example code snippets?"
* Use the Maven exec plugin
* Write unit/integration tests
* Bring your Maven dependencies into something like a JRuby REPL
|
32,934,653
|
I am trying to load a CSV file into HDFS and read the same into Spark as RDDs. I am using Hortonworks Sandbox and trying these through the command line. I loaded the data as follows:
```
hadoop fs -put data.csv /
```
The data seems to have loaded properly as seen by the following command:
```
[root@sandbox temp]# hadoop fs -ls /data.csv
-rw-r--r-- 1 hdfs hdfs 70085496 2015-10-04 14:17 /data.csv
```
In pyspark, I tried reading this file as follows:
```
data = sc.textFile('/data.csv')
```
However, the following take command throws an error:
```
data.take(5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.3.0.0-2557/spark/python/pyspark/rdd.py", line 1194, in take
totalParts = self._jrdd.partitions().size()
File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1- src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o35.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data.csv
```
Can someone help me with this error?
|
2015/10/04
|
[
"https://Stackoverflow.com/questions/32934653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002903/"
] |
I figured the answer out. I had to enter the complete path name of the HDFS file as follows:
```
data = sc.textFile('hdfs://sandbox.hortonworks.com:8020/data.csv')
```
The full path name is obtained from conf/core-site.xml
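If you would rather look the value up programmatically than open the file by hand, a small sketch with the standard library's `xml.etree` can pull `fs.defaultFS` out of core-site.xml (the path below is an assumption; adjust it to your installation):
```
import xml.etree.ElementTree as ET

# Hypothetical location; Hortonworks sandboxes usually keep it under /etc/hadoop/conf.
CORE_SITE = "/etc/hadoop/conf/core-site.xml"

root = ET.parse(CORE_SITE).getroot()
default_fs = next(
    (prop.findtext("value")
     for prop in root.iter("property")
     if prop.findtext("name") == "fs.defaultFS"),  # older configs use fs.default.name
    None,
)
print(f"{default_fs}/data.csv")  # e.g. hdfs://sandbox.hortonworks.com:8020/data.csv
```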
|
Error `org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data.csv`
It is reading from your local file system instead of HDFS.
Try providing file path like below,
```
data = sc.textFile("hdfs://data.csv")
```
|
32,934,653
|
I am trying to load a CSV file into HDFS and read the same into Spark as RDDs. I am using Hortonworks Sandbox and trying these through the command line. I loaded the data as follows:
```
hadoop fs -put data.csv /
```
The data seems to have loaded properly as seen by the following command:
```
[root@sandbox temp]# hadoop fs -ls /data.csv
-rw-r--r-- 1 hdfs hdfs 70085496 2015-10-04 14:17 /data.csv
```
In pyspark, I tried reading this file as follows:
```
data = sc.textFile('/data.csv')
```
However, the following take command throws an error:
```
data.take(5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/hdp/2.3.0.0-2557/spark/python/pyspark/rdd.py", line 1194, in take
totalParts = self._jrdd.partitions().size()
File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1- src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o35.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/data.csv
```
Can someone help me with this error?
|
2015/10/04
|
[
"https://Stackoverflow.com/questions/32934653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1002903/"
] |
I figured the answer out. I had to enter the complete path name of the HDFS file as follows:
```
data = sc.textFile('hdfs://sandbox.hortonworks.com:8020/data.csv')
```
The full path name is obtained from conf/core-site.xml
|
In case you want to create an rdd for any text or csv file available in the local system, use
`rdd = sc.textFile("file://path/to/csv or text file")`
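Note that local paths are usually written with three slashes (`file:///` plus an absolute path). A short sketch showing both forms, meant for the pyspark shell where `sc` is already defined (the paths and host are placeholders):
```
# Local file system (note the triple slash: file:// + absolute path)
local_rdd = sc.textFile("file:///tmp/data.csv")

# HDFS, with the namenode taken from fs.defaultFS in core-site.xml
hdfs_rdd = sc.textFile("hdfs://sandbox.hortonworks.com:8020/data.csv")

print(local_rdd.take(2), hdfs_rdd.take(2))
```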
|
69,395,204
|
I have a list of dict I want to group by multiple keys.
I have used sort by default in python dict
```
data = [
[],
[{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}, {'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 126, 'bot': 'DB', 'month':8, 'year': 2021}],
[],
[{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}, {'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}, {'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}],
[{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}],
[{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}, {'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}, {'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}, {'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}, {'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}]
]
output_dict = {}
for i in data:
if not i:
pass
for j in i:
for key,val in sorted(j.items()):
output_dict.setdefault(val, []).append(key)
print(output_dict)
{'DB': ['bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot'], 9: ['month', 'month', 'month'], 8: ['value'], 2020: ['year', 'year', 'year', 'year', 'year'], 10: ['month', 'month'], 79: ['value'], 126: ['value'], 2021: ['year', 'year', 'year', 'year', 'year', 'year', 'year', 'year'], 'GEMBOT': ['bot', 'bot', 'bot', 'bot'], 11: ['month', 'month'], 222: ['value'], 4: ['month', 'month', 'month'], 623: ['value'], 628: ['value'], 0: ['value'], 703: ['value'], 3: ['month'], 1081: ['value'], 1335: ['value'], 1920: ['value'], 1: ['month'], 2132: ['value'], 2: ['month'], 2383: ['value']}
```
But I want the output like this.
```
[{ "bot": "DB",
"date": "Sept 20",
"value": 134
},{"bot": "DB",
"date": "Oct 20",
"value": 79
}.. So on ]
```
Is there an efficient way to flatten this list ?
Thanks in advance
|
2021/09/30
|
[
"https://Stackoverflow.com/questions/69395204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9538877/"
] |
Maybe try:
```
from pprint import pprint
import datetime
output_dict = []
temp = {}
for i in data:
if i:
for j in i:
for key, val in sorted(j.items()):
if key == "bot":
temp["bot"] = val
elif key == "value":
temp["value"] = val
elif key == "month":
month = datetime.datetime.strptime(str(val), "%m")
temp["date"] = month.strftime("%b")
elif key == "year":
temp["date"] = str(temp["date"]) + " " + str(val)
output_dict.append(temp)
temp = {}
pprint(output_dict)
```
The final results are shown as follows:
```
[{'bot': 'DB', 'date': 'Sep 2020', 'value': 8},
{'bot': 'DB', 'date': 'Oct 2020', 'value': 79},
{'bot': 'DB', 'date': 'Aug 2021', 'value': 126},
{'bot': 'GEMBOT', 'date': 'Nov 2020', 'value': 222},
{'bot': 'GEMBOT', 'date': 'Apr 2021', 'value': 623},
{'bot': 'GEMBOT', 'date': 'Sep 2021', 'value': 628},
{'bot': 'GEMBOT', 'date': 'Apr 2021', 'value': 0},
{'bot': 'DB', 'date': 'Nov 2020', 'value': 703},
{'bot': 'DB', 'date': 'Mar 2021', 'value': 1081},
{'bot': 'DB', 'date': 'Oct 2020', 'value': 1335},
{'bot': 'DB', 'date': 'Apr 2021', 'value': 1920},
{'bot': 'DB', 'date': 'Jan 2021', 'value': 2132},
{'bot': 'DB', 'date': 'Feb 2021', 'value': 2383}]
```
|
Maybe try:
```
output = []
for i in data:
if not i:
pass
for j in i:
output.append(j)
```
And then if you want to sort it, you can use `sorted_output = sorted(output, key=lambda k: k['bot'])` to sort it by `bot`, for example. If you want to sort it by date, you can build a key from the year and month and sort on that, as in the sketch below.
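A small self-contained sketch of that date sort, using a `(year, month)` tuple as the key (the sample rows are made up):
```
output = [
    {"value": 79, "bot": "DB", "month": 10, "year": 2020},
    {"value": 8, "bot": "DB", "month": 9, "year": 2020},
    {"value": 126, "bot": "DB", "month": 8, "year": 2021},
]

# A (year, month) tuple sorts chronologically without converting to real dates.
sorted_by_date = sorted(output, key=lambda d: (d["year"], d["month"]))
print([f'{d["month"]}/{d["year"]}' for d in sorted_by_date])  # ['9/2020', '10/2020', '8/2021']
```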
|
69,395,204
|
I have a list of dict I want to group by multiple keys.
I have used sort by default in python dict
```
data = [
[],
[{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}, {'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 126, 'bot': 'DB', 'month':8, 'year': 2021}],
[],
[{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}, {'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}, {'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}],
[{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}],
[{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}, {'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}, {'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}, {'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}, {'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}]
]
output_dict = {}
for i in data:
if not i:
pass
for j in i:
for key,val in sorted(j.items()):
output_dict.setdefault(val, []).append(key)
print(output_dict)
{'DB': ['bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot'], 9: ['month', 'month', 'month'], 8: ['value'], 2020: ['year', 'year', 'year', 'year', 'year'], 10: ['month', 'month'], 79: ['value'], 126: ['value'], 2021: ['year', 'year', 'year', 'year', 'year', 'year', 'year', 'year'], 'GEMBOT': ['bot', 'bot', 'bot', 'bot'], 11: ['month', 'month'], 222: ['value'], 4: ['month', 'month', 'month'], 623: ['value'], 628: ['value'], 0: ['value'], 703: ['value'], 3: ['month'], 1081: ['value'], 1335: ['value'], 1920: ['value'], 1: ['month'], 2132: ['value'], 2: ['month'], 2383: ['value']}
```
But I want the output like this.
```
[{ "bot": "DB",
"date": "Sept 20",
"value": 134
},{"bot": "DB",
"date": "Oct 20",
"value": 79
}.. So on ]
```
Is there an efficient way to flatten this list ?
Thanks in advance
|
2021/09/30
|
[
"https://Stackoverflow.com/questions/69395204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9538877/"
] |
Two things will make this easier to answer. The first is a list comprehension that will promote sub-items:
```
data_reshaped = [cell for row in data for cell in row]
```
this will take your original `data` and flatten it a bit to:
```
[
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020},
{'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 126, 'bot': 'DB', 'month': 8, 'year': 2021},
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020},
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021},
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021},
{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021},
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020},
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021},
{'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021},
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021},
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
]
```
Now we can iterate over that using a compound key and `setdefault()` to aggregate the results. Note: if you would rather use `collections.defaultdict()` (as I usually do), you can swap it in for `setdefault()`.
```
results = {}
for cell in data_reshaped:
key = f"{cell['bot']}_{cell['year']}_{cell['month']}"
value = cell["value"] # save the value so we can reset cell next
cell["value"] = 0 # setting this to 0 cleans up the next line.
results.setdefault(key, cell)["value"] += value
```
This should allow you to:
```
for result in results.values():
print(result)
```
**Giving:**
```
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}
{'value': 1414, 'bot': 'DB', 'month': 10, 'year': 2020}
{'value': 126, 'bot': 'DB', 'month': 8, 'year': 2021}
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
```
**Full solution:**
```
data = [
[],
[
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020},
{'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 126, 'bot': 'DB', 'month':8, 'year': 2021}
],
[],
[
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020},
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021},
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}
],
[
{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}
],
[
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020},
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021},
{'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021},
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021},
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
]
]
data_reshaped = [cell for row in data for cell in row]
results = {}
for cell in data_reshaped:
key = f"{cell['bot']}_{cell['year']}_{cell['month']}"
value = cell["value"]
cell["value"] = 0
results.setdefault(key, cell)["value"] += value
for result in results.values():
print(result)
```
**Again Giving:**
```
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}
{'value': 1414, 'bot': 'DB', 'month': 10, 'year': 2020}
{'value': 126, 'bot': 'DB', 'month': 8, 'year': 2021}
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
```
I will leave it to you to figure out casting the two date fields to some other presentation as that seems out of context with the question at hand.
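For what it's worth, one possible sketch of that casting step uses the standard `calendar` module; it produces "Sep 2020"-style strings rather than the exact "Sept 20" format shown in the question:
```
import calendar

def format_result(row):
    # row is one of the aggregated dicts, e.g. {'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}
    return {
        "bot": row["bot"],
        "date": f"{calendar.month_abbr[row['month']]} {row['year']}",
        "value": row["value"],
    }

print(format_result({"value": 8, "bot": "DB", "month": 9, "year": 2020}))
# {'bot': 'DB', 'date': 'Sep 2020', 'value': 8}
```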
|
Maybe try:
```
output = []
for i in data:
if not i:
pass
for j in i:
output.append(j)
```
And then if you want to sort it, you can use `sorted_output = sorted(output, key=lambda k: k['bot'])` to sort it by `bot`, for example. If you want to sort it by date, you can build a key from the year and month (or convert the date to a count of months) and sort on that.
|
69,395,204
|
I have a list of dict I want to group by multiple keys.
I have used sort by default in python dict
```
data = [
[],
[{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}, {'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 126, 'bot': 'DB', 'month':8, 'year': 2021}],
[],
[{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}, {'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}, {'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}],
[{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}],
[{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}, {'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}, {'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020}, {'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}, {'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}, {'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}]
]
output_dict = {}
for i in data:
if not i:
pass
for j in i:
for key,val in sorted(j.items()):
output_dict.setdefault(val, []).append(key)
print(output_dict)
{'DB': ['bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot', 'bot'], 9: ['month', 'month', 'month'], 8: ['value'], 2020: ['year', 'year', 'year', 'year', 'year'], 10: ['month', 'month'], 79: ['value'], 126: ['value'], 2021: ['year', 'year', 'year', 'year', 'year', 'year', 'year', 'year'], 'GEMBOT': ['bot', 'bot', 'bot', 'bot'], 11: ['month', 'month'], 222: ['value'], 4: ['month', 'month', 'month'], 623: ['value'], 628: ['value'], 0: ['value'], 703: ['value'], 3: ['month'], 1081: ['value'], 1335: ['value'], 1920: ['value'], 1: ['month'], 2132: ['value'], 2: ['month'], 2383: ['value']}
```
But I want the output like this.
```
[{ "bot": "DB",
"date": "Sept 20",
"value": 134
},{"bot": "DB",
"date": "Oct 20",
"value": 79
}.. So on ]
```
Is there an efficient way to flatten this list ?
Thanks in advance
|
2021/09/30
|
[
"https://Stackoverflow.com/questions/69395204",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9538877/"
] |
Two things will make this easier to answer. The first is a list comprehension that will promote sub-items:
```
data_reshaped = [cell for row in data for cell in row]
```
this will take your original `data` and flatten it a bit to:
```
[
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020},
{'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 126, 'bot': 'DB', 'month': 8, 'year': 2021},
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020},
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021},
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021},
{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021},
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020},
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021},
{'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021},
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021},
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
]
```
Now we can iterate over that using a compound key and `setdefault()` to aggregate the results. Note: if you would rather use `collections.defaultdict()` (as I usually do), you can swap it in for `setdefault()`.
```
results = {}
for cell in data_reshaped:
key = f"{cell['bot']}_{cell['year']}_{cell['month']}"
value = cell["value"] # save the value so we can reset cell next
cell["value"] = 0 # setting this to 0 cleans up the next line.
results.setdefault(key, cell)["value"] += value
```
This should allow you to:
```
for result in results.values():
print(result)
```
**Giving:**
```
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}
{'value': 1414, 'bot': 'DB', 'month': 10, 'year': 2020}
{'value': 126, 'bot': 'DB', 'month': 8, 'year': 2021}
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
```
**Full solution:**
```
data = [
[],
[
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020},
{'value': 79, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 126, 'bot': 'DB', 'month':8, 'year': 2021}
],
[],
[
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020},
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021},
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}
],
[
{'value': 0, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}
],
[
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020},
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021},
{'value': 1335, 'bot': 'DB', 'month': 10, 'year': 2020},
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021},
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021},
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
]
]
data_reshaped = [cell for row in data for cell in row]
results = {}
for cell in data_reshaped:
key = f"{cell['bot']}_{cell['year']}_{cell['month']}"
value = cell["value"]
cell["value"] = 0
results.setdefault(key, cell)["value"] += value
for result in results.values():
print(result)
```
**Again Giving:**
```
{'value': 8, 'bot': 'DB', 'month': 9, 'year': 2020}
{'value': 1414, 'bot': 'DB', 'month': 10, 'year': 2020}
{'value': 126, 'bot': 'DB', 'month': 8, 'year': 2021}
{'value': 222, 'bot': 'GEMBOT', 'month': 11, 'year': 2020}
{'value': 623, 'bot': 'GEMBOT', 'month': 4, 'year': 2021}
{'value': 628, 'bot': 'GEMBOT', 'month': 9, 'year': 2021}
{'value': 703, 'bot': 'DB', 'month': 11, 'year': 2020}
{'value': 1081, 'bot': 'DB', 'month': 3, 'year': 2021}
{'value': 1920, 'bot': 'DB', 'month': 4, 'year': 2021}
{'value': 2132, 'bot': 'DB', 'month': 1, 'year': 2021}
{'value': 2383, 'bot': 'DB', 'month': 2, 'year': 2021}
```
I will leave it to you to figure out casting the two date fields to some other presentation as that seems out of context with the question at hand.
|
Maybe try:
```
from pprint import pprint
import datetime
output_dict = []
temp = {}
for i in data:
if i:
for j in i:
for key, val in sorted(j.items()):
if key == "bot":
temp["bot"] = val
elif key == "value":
temp["value"] = val
elif key == "month":
month = datetime.datetime.strptime(str(val), "%m")
temp["date"] = month.strftime("%b")
elif key == "year":
temp["date"] = str(temp["date"]) + " " + str(val)
output_dict.append(temp)
temp = {}
pprint(output_dict)
```
The final results are shown as follows:
```
[{'bot': 'DB', 'date': 'Sep 2020', 'value': 8},
{'bot': 'DB', 'date': 'Oct 2020', 'value': 79},
{'bot': 'DB', 'date': 'Aug 2021', 'value': 126},
{'bot': 'GEMBOT', 'date': 'Nov 2020', 'value': 222},
{'bot': 'GEMBOT', 'date': 'Apr 2021', 'value': 623},
{'bot': 'GEMBOT', 'date': 'Sep 2021', 'value': 628},
{'bot': 'GEMBOT', 'date': 'Apr 2021', 'value': 0},
{'bot': 'DB', 'date': 'Nov 2020', 'value': 703},
{'bot': 'DB', 'date': 'Mar 2021', 'value': 1081},
{'bot': 'DB', 'date': 'Oct 2020', 'value': 1335},
{'bot': 'DB', 'date': 'Apr 2021', 'value': 1920},
{'bot': 'DB', 'date': 'Jan 2021', 'value': 2132},
{'bot': 'DB', 'date': 'Feb 2021', 'value': 2383}]
```
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010
The python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Times under 0.05 s can be simple noise. Repeat the main program enough times to actually get ~1 s of execution time in C. (I mean repeating it in a loop in the program itself, not by running it again.)
Did you compile your code with optimisations turned on? Did you try reducing the number of branches? (and comparisons)
```
if (i % 3 == 0) {
if (i % 5 == 0) {
printf("Hop\n");
continue;
}
printf("Hoppity\n");
} else if (i % 5 == 0){
printf("Hophop\n");
}
```
Did you try looking at the assembler output?
Also printf is pretty slow. Try `puts("Hop")` instead, since you don't use the formatting anyway.
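As a complement to looping inside the program, a small driver can at least average away run-to-run noise; a sketch in Python (the binary names and input file are assumptions, and each run still pays the process start-up cost):
```
import subprocess
import time

PROGRAMS = ["./hop_c", "./hop_ocaml"]   # hypothetical binary names
INPUT_FILE = "file3"                    # file containing the stop number
RUNS = 50

for prog in PROGRAMS:
    start = time.perf_counter()
    for _ in range(RUNS):
        # Discard output so printing to the terminal doesn't dominate the timing.
        subprocess.run([prog, INPUT_FILE], stdout=subprocess.DEVNULL, check=True)
    elapsed = time.perf_counter() - start
    print(f"{prog}: {elapsed / RUNS:.4f} s per run (averaged over {RUNS} runs)")
```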
|
I would be interested to see how much time is spent in get\_count().
I'm not sure how much it would matter, but you're reading in a long as a string, which means the string cannot be larger than 20 bytes, or 10 bytes (2^64 = some 20 character long decimal number, or 2^32 = some 10 character long decimal number), so you don't need your while loop in get\_count. Also, you could allocate file\_text on the stack, rather than calling calloc - but I guess you'd still need to zero it out, or otherwise find the length and set the last byte to null.
```
file_length = lseek(fileno(fileptr), 0, SEEK_END);  /* lseek wants a file descriptor, not a FILE* */
```
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010
The python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Times under 0.05 s can be simple noise. Repeat the main program enough times to actually get ~1 s of execution time in C. (I mean repeating it in a loop in the program itself, not by running it again.)
Did you compile your code with optimisations turned on? Did you try reducing the number of branches? (and comparisons)
```
if (i % 3 == 0) {
if (i % 5 == 0) {
printf("Hop\n");
continue;
}
printf("Hoppity\n");
} else if (i % 5 == 0){
printf("Hophop\n");
}
```
Did you try looking at the assembler output?
Also printf is pretty slow. Try `puts("Hop")` instead, since you don't use the formatting anyway.
|
Any program which mostly involves opening a file and reading it is limited by the speed of opening a file and reading it. The C computations you are doing here will take between 1 millionth and one thousandth of the time of opening the file and reading it.
I thought this site was useful: <http://norvig.com/21-days.html#answers>
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010
The python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Your C code isn't the equivalent of the OCaml code - you used 'else if' in the OCaml to avoid having to recompute moduli quite so much.
There's an awful lot of code in that 'read the long integer'. Why not just use `fscanf()`; it skips blanks and all that automatically, and avoids you doing the `malloc()` etc. I don't often recommend using `fscanf()`, but this looks like a setup for it - a single line, possibly with spaces either side, no funny stuff.
---
Curiosity killed the cat - but in this case, not the Leopard.
I downloaded OCaml 3.11.1 for MacOS X Intel and copied the OCaml code from the question into xxx.ml (OCaml), and compiled that into an object file xxx (using "ocamlc -o xxx xxx.ml"); I copied the C code verbatim into yyy.c and created a variant zzz.c using `fscanf()` and `fclose()`, and compiled them using "gcc -O -o yyy yyy.c" and "gcc -O -o zzz zzz.c". I created a file 'file3' containing: "`987654`" plus a newline. I created a shell script runthem.sh as shown. Note that 'time' is a pesky command that thinks its output must go to stderr even when you'd rather it didn't - you have to work quite hard to get the output where you want it to go. (The range command generates numbers in the given range, inclusive - hence 11 values per program.)
```
Osiris JL: cat runthem.sh
for prog in "ocaml xxx.ml" ./xxx ./yyy ./zzz
do
for iter in $(range 0 10)
do
r=$(sh -c "time $prog file3 >/dev/null" 2>&1)
echo $prog: $r
done
done
Osiris JL:
```
I ran all this on a modern MacBook Pro (3 GHz Core 2 Duo etc, 4GB RAM) running Leopard (10.5.8). The timings I got are shown by:
```
Osiris JL: sh runthem.sh
ocaml xxx.ml: real 0m0.961s user 0m0.524s sys 0m0.432s
ocaml xxx.ml: real 0m0.953s user 0m0.516s sys 0m0.430s
ocaml xxx.ml: real 0m0.959s user 0m0.517s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.517s sys 0m0.430s
ocaml xxx.ml: real 0m0.952s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.959s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.950s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.956s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.432s
./xxx: real 0m0.928s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.938s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.933s user 0m0.497s sys 0m0.428s
./xxx: real 0m0.926s user 0m0.494s sys 0m0.429s
./xxx: real 0m0.921s user 0m0.492s sys 0m0.428s
./xxx: real 0m0.925s user 0m0.494s sys 0m0.428s
./yyy: real 0m0.027s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.030s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
Osiris JL:
```
I don't see the OCaml code running faster than the C code. I ran the tests with smaller numbers in the file that was read, and the results were similarly in favour of the C code:
Stop number: 345
```
ocaml xxx.ml: real 0m0.027s user 0m0.020s sys 0m0.005s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.005s
ocaml xxx.ml: real 0m0.025s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.022s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.019s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.002s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.005s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.001s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.001s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
```
Stop number: 87654
```
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.060s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.105s user 0m0.059s sys 0m0.041s
./xxx: real 0m0.092s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.087s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.086s user 0m0.045s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.083s user 0m0.044s sys 0m0.038s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.006s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.001s
```
Obviously, YMMV - but it appears that OCaml is slower than C by a considerable margin, and that if the number in the given file is small enough, start-up and file reading dominate the process time.
The C timings, especially at the smaller numbers, are so fast that they are not all that reliable.
|
Times under 0.05 s can be simple noise. Repeat the main program enough times to actually get ~1 s of execution time in C. (I mean repeating it in a loop in the program itself, not by running it again.)
Did you compile your code with optimisations turned on? Did you try reducing the number of branches? (and comparisons)
```
if (i % 3 == 0) {
if (i % 5 == 0) {
printf("Hop\n");
continue;
}
printf("Hoppity\n");
} else if (i % 5 == 0){
printf("Hophop\n");
}
```
Did you try looking at the assembler output?
Also printf is pretty slow. Try `puts("Hop")` instead, since you don't use the formatting anyway.
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010
The python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
A time under 0.05 s can be simple noise. Repeat the main program enough times to get ~1 s of execution time in C (I mean repeating it in a loop in the program itself, not by running it again).
Did you compile your code with optimisations turned on? Did you try reducing the number of branches (and comparisons)?
```
if (i % 3 == 0) {
if (i % 5 == 0) {
printf("Hop\n");
continue;
}
printf("Hoppity\n");
} else if (i % 5 == 0){
printf("Hophop\n");
}
```
Did you try looking at the assembler output?
Also, printf is pretty slow. Try `puts("Hop")` instead, since you don't use the formatting anyway.
|
In a tiny program like this, it's often hard to guess why things work out the way they do. I think if I was doing it, I'd write the code like this (leaving out error checking for the moment):
```
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
static char buffer[20];
int limit, i;
freopen(argv[1], "r", stdin);
fgets(buffer, sizeof(buffer), stdin);
limit = atoi(buffer);
for (i=1; i<=limit; i++) {
int div3=i%3==0;
int div5=i%5==0;
if (div3 && div5)
puts("Hop");
else if (div3)
puts("Hoppity");
else if (div5)
puts("HopHop");
}
return 0;
}
```
Using `freopen` avoids creating another file stream, and instead just connects standard input to the specified file. No guarantee that it's faster, but it's unlikely to be any slower anyway.
Likewise, a good compiler may note that `i` is constant throughout the loop body, and factor out the two remainder operations so it only does each once. Here I've done that by hand, which might not be any faster, but almost certainly won't be any slower.
Using `puts` instead of `printf` is fairly similar -- it might not be any faster, but it almost certainly won't be any slower. Using `printf` the way you have, it has to scan through the entire string looking for a '%', in case you're asking for a conversion, but since `puts` doesn't do any conversions, it doesn't have to do that.
With such a tiny program as this, there's another factor that could be considerably more significant: `puts` will usually be considerably smaller than `printf`. You haven't said how you're doing the timing, but if it includes the time to load the code, really small code may well make more difference than the execution time.
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010 seconds
The Python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Your C code isn't the equivalent of the OCaml code - you used 'else if' in the OCaml to avoid having to recompute moduli quite so much.
There's an awful lot of code in that 'read the long integer'. Why not just use `fscanf()`; it skips blanks and all that automatically, and avoids you doing the `malloc()` etc. I don't often recommend using `fscanf()`, but this looks like a setup for it - a single line, possibly with spaces either side, no funny stuff.
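For example, a minimal sketch of `get_count()` rewritten around `fscanf()` (error handling kept to roughly the same level as the original):
```
#include <stdio.h>
#include <stdlib.h>

long get_count(const char *name)
{
    FILE *fileptr = fopen(name, "r");
    if (!fileptr) {
        fprintf(stderr, "Unable to open file %s.\n", name);
        exit(EXIT_FAILURE);
    }
    long count = 0;
    if (fscanf(fileptr, "%ld", &count) != 1) {  /* skips leading whitespace for free */
        fprintf(stderr, "No number found in %s.\n", name);
        exit(EXIT_FAILURE);
    }
    fclose(fileptr);
    return count;
}
```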
---
Curiosity killed the cat - but in this case, not the Leopard.
I downloaded OCaml 3.11.1 for MacOS X Intel and copied the OCaml code from the question into xxx.ml (OCaml), and compiled that into an object file xxx (using "ocamlc -o xxx xxx.ml"); I copied the C code verbatim into yyy.c and created a variant zzz.c using `fscanf()` and `fclose()`, and compiled them using "gcc -O -o yyy yyy.c" and "gcc -O -o zzz zzz.c". I created a file 'file3' containing: "`987654`" plus a newline. I created a shell script runthem.sh as shown. Note that 'time' is a pesky command that thinks its output must go to stderr even when you'd rather it didn't - you have to work quite hard to get the output where you want it to go. (The range command generates numbers in the given range, inclusive - hence 11 values per program.)
```
Osiris JL: cat runthem.sh
for prog in "ocaml xxx.ml" ./xxx ./yyy ./zzz
do
for iter in $(range 0 10)
do
r=$(sh -c "time $prog file3 >/dev/null" 2>&1)
echo $prog: $r
done
done
Osiris JL:
```
I ran all this on a modern MacBook Pro (3 GHz Core 2 Duo etc, 4GB RAM) running Leopard (10.5.8). The timings I got are shown by:
```
Osiris JL: sh runthem.sh
ocaml xxx.ml: real 0m0.961s user 0m0.524s sys 0m0.432s
ocaml xxx.ml: real 0m0.953s user 0m0.516s sys 0m0.430s
ocaml xxx.ml: real 0m0.959s user 0m0.517s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.517s sys 0m0.430s
ocaml xxx.ml: real 0m0.952s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.959s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.950s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.956s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.432s
./xxx: real 0m0.928s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.938s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.933s user 0m0.497s sys 0m0.428s
./xxx: real 0m0.926s user 0m0.494s sys 0m0.429s
./xxx: real 0m0.921s user 0m0.492s sys 0m0.428s
./xxx: real 0m0.925s user 0m0.494s sys 0m0.428s
./yyy: real 0m0.027s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.030s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
Osiris JL:
```
I don't see the OCaml code running faster than the C code. I ran the tests with smaller numbers in the file that was read, and the results were similarly in favour of the C code:
Stop number: 345
```
ocaml xxx.ml: real 0m0.027s user 0m0.020s sys 0m0.005s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.005s
ocaml xxx.ml: real 0m0.025s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.022s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.019s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.002s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.005s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.001s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.001s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
```
Stop number: 87654
```
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.060s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.105s user 0m0.059s sys 0m0.041s
./xxx: real 0m0.092s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.087s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.086s user 0m0.045s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.083s user 0m0.044s sys 0m0.038s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.006s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.001s
```
Obviously, YMMV - but it appears that OCaml is slower than C by a considerable margin, though if the number in the given file is small enough, start-up and file reading dominate the process time.
The C timings, especially at the smaller numbers, are so fast that they are not all that reliable.
|
I would be interested to see how much time is spent in get\_count().
I'm not sure how much it would matter, but you're reading a long in as a string, which means the string can't be longer than about 20 characters (2^64 is roughly a 20-digit decimal number, 2^32 roughly 10 digits), so you don't need your while loop in get\_count. Also, you could allocate file\_text on the stack rather than calling calloc - but I guess you'd still need to zero it out, or otherwise find the length and set the last byte to null.
```
fseek(fileptr, 0, SEEK_END);
file_length = ftell(fileptr);
```
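A minimal sketch of the stack-buffer idea above (the buffer size is an assumption; one read, explicit terminator, no loop):
```
#include <stdio.h>
#include <stdlib.h>

long get_count(const char *name)
{
    char file_text[32] = {0};  /* stack buffer: a decimal long needs far fewer than 32 chars */
    FILE *fileptr = fopen(name, "r");
    if (!fileptr) {
        fprintf(stderr, "Unable to open file %s.\n", name);
        exit(EXIT_FAILURE);
    }
    size_t n = fread(file_text, 1, sizeof(file_text) - 1, fileptr);
    file_text[n] = '\0';       /* set the last byte to null */
    fclose(fileptr);
    return strtol(file_text, NULL, 10);
}
```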
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010 seconds
The Python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
In a tiny program like this, it's often hard to guess why things work out the way they do. I think if I was doing it, I'd write the code like this (leaving out error checking for the moment):
```
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
static char buffer[20];
int limit, i;
freopen(argv[1], "r", stdin);
fgets(buffer, sizeof(buffer), stdin);
limit = atoi(buffer);
for (i=1; i<=limit; i++) {
int div3=i%3==0;
int div5=i%5==0;
if (div3 && div5)
puts("Hop");
else if (div3)
puts("Hoppity");
else if (div5)
puts("HopHop");
}
return 0;
}
```
Using `freopen` avoids creating another file stream, and instead just connects standard input to the specified file. No guarantee that it's faster, but it's unlikely to be any slower anyway.
Likewise, a good compiler may note that `i` is constant throughout the loop body, and factor out the two remainder operations so it only does each once. Here I've done that by hand, which might not be any faster, but almost certainly won't be any slower.
Using `puts` instead of `printf` is fairly similar -- it might not be any faster, but it almost certainly won't be any slower. Using `printf` the way you have, it has to scan through the entire string looking for a '%', in case you're asking for a conversion, but since `puts` doesn't do any conversions, it doesn't have to do that.
With such a tiny program as this, there's another factor that could be considerably more significant: `puts` will usually be considerably smaller than `printf`. You haven't said how you're doing the timing, but if it includes the time to load the code, really small code may well make more difference than the execution time.
|
I would be interested to see how much time is spent in get\_count().
I'm not sure how much it would matter, but you're reading a long in as a string, which means the string can't be longer than about 20 characters (2^64 is roughly a 20-digit decimal number, 2^32 roughly 10 digits), so you don't need your while loop in get\_count. Also, you could allocate file\_text on the stack rather than calling calloc - but I guess you'd still need to zero it out, or otherwise find the length and set the last byte to null.
```
fseek(fileptr, 0, SEEK_END);
file_length = ftell(fileptr);
```
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010 seconds
The Python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Your C code isn't the equivalent of the OCaml code - you used 'else if' in the OCaml to avoid having to recompute moduli quite so much.
There's an awful lot of code in that 'read the long integer'. Why not just use `fscanf()`; it skips blanks and all that automatically, and avoids you doing the `malloc()` etc. I don't often recommend using `fscanf()`, but this looks like a setup for it - a single line, possibly with spaces either side, no funny stuff.
---
Curiosity killed the cat - but in this case, not the Leopard.
I downloaded OCaml 3.11.1 for MacOS X Intel and copied the OCaml code from the question into xxx.ml (OCaml), and compiled that into an object file xxx (using "ocamlc -o xxx xxx.ml"); I copied the C code verbatim into yyy.c and created a variant zzz.c using `fscanf()` and `fclose()`, and compiled them using "gcc -O -o yyy yyy.c" and "gcc -O -o zzz zzz.c". I created a file 'file3' containing: "`987654`" plus a newline. I created a shell script runthem.sh as shown. Note that 'time' is a pesky command that thinks its output must go to stderr even when you'd rather it didn't - you have to work quite hard to get the output where you want it to go. (The range command generates numbers in the given range, inclusive - hence 11 values per program.)
```
Osiris JL: cat runthem.sh
for prog in "ocaml xxx.ml" ./xxx ./yyy ./zzz
do
for iter in $(range 0 10)
do
r=$(sh -c "time $prog file3 >/dev/null" 2>&1)
echo $prog: $r
done
done
Osiris JL:
```
I ran all this on a modern MacBook Pro (3 GHz Core 2 Duo etc, 4GB RAM) running Leopard (10.5.8). The timings I got are shown by:
```
Osiris JL: sh runthem.sh
ocaml xxx.ml: real 0m0.961s user 0m0.524s sys 0m0.432s
ocaml xxx.ml: real 0m0.953s user 0m0.516s sys 0m0.430s
ocaml xxx.ml: real 0m0.959s user 0m0.517s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.517s sys 0m0.430s
ocaml xxx.ml: real 0m0.952s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.959s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.950s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.956s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.432s
./xxx: real 0m0.928s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.938s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.933s user 0m0.497s sys 0m0.428s
./xxx: real 0m0.926s user 0m0.494s sys 0m0.429s
./xxx: real 0m0.921s user 0m0.492s sys 0m0.428s
./xxx: real 0m0.925s user 0m0.494s sys 0m0.428s
./yyy: real 0m0.027s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.030s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
Osiris JL:
```
I don't see the OCaml code running faster than the C code. I ran the tests with smaller numbers in the file that was read, and the results were similarly in favour of the C code:
Stop number: 345
```
ocaml xxx.ml: real 0m0.027s user 0m0.020s sys 0m0.005s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.005s
ocaml xxx.ml: real 0m0.025s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.022s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.019s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.002s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.005s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.001s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.001s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
```
Stop number: 87654
```
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.060s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.105s user 0m0.059s sys 0m0.041s
./xxx: real 0m0.092s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.087s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.086s user 0m0.045s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.083s user 0m0.044s sys 0m0.038s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.006s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.001s
```
Obviously, YMMV - but it appears that OCaml is slower than C by a considerable margin, though if the number in the given file is small enough, start-up and file reading dominate the process time.
The C timings, especially at the smaller numbers, are so fast that they are not all that reliable.
|
Any program which mostly involves opening a file and reading it is limited by the speed of opening a file and reading it. The C computations you are doing here will take between 1 millionth and one thousandth of the time of opening the file and reading it.
I thought this site was useful: <http://norvig.com/21-days.html#answers>
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010 seconds
The Python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
In a tiny program like this, it's often hard to guess why things work out the way they do. I think if I was doing it, I'd write the code like this (leaving out error checking for the moment):
```
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
static char buffer[20];
int limit, i;
freopen(argv[1], "r", stdin);
fgets(buffer, sizeof(buffer), stdin);
limit = atoi(buffer);
for (i=1; i<=limit; i++) {
int div3=i%3==0;
int div5=i%5==0;
if (div3 && div5)
puts("Hop");
else if (div3)
puts("Hoppity");
else if (div5)
puts("HopHop");
}
return 0;
}
```
Using `freopen` avoids creating another file stream, and instead just connects standard input to the specified file. No guarantee that it's faster, but it's unlikely to be any slower anyway.
Likewise, a good compiler may note that `i` is constant throughout the loop body, and factor out the two remainder operations so it only does each once. Here I've done that by hand, which might not be any faster, but almost certainly won't be any slower.
Using `puts` instead of `printf` is fairly similar -- it might not be any faster, but it almost certainly won't be any slower. Using `printf` the way you have, it has to scan through the entire string looking for a '%', in case you're asking for a conversion, but since `puts` doesn't do any conversions, it doesn't have to do that.
With such a tiny program as this, there's another factor that could be considerably more significant: `puts` will usually be considerably smaller than `printf`. You haven't said how you're doing the timing, but if it includes the time to load the code, really small code may well make more difference than the execution time.
|
Any program which mostly involves opening a file and reading it is limited by the speed of opening a file and reading it. The C computations you are doing here will take between 1 millionth and one thousandth of the time of opening the file and reading it.
I thought this site was useful: <http://norvig.com/21-days.html#answers>
|
1,586,423
|
I wrote a basic [Hippity Hop](http://www.facebook.com/careers/puzzles.php?puzzle_id=7) program in C, Python, and OCaml. Granted, this is probably not a very good benchmark of these three languages. But the results I got were something like this:
* Python: .350 seconds
* C: .050 seconds
* *interpreted* OCaml: .040 seconds
* compiled OCaml: .010 seconds
The Python performance doesn't really surprise me, but I'm rather shocked at how fast the OCaml is (especially the interpreted version). For comparison, I'll post the C version and the OCaml version.
C
=
```
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
long get_count(char *name);
int main(int argc, char *argv[])
{
if (argc != 2){
printf("Filename must be specified as a positional argument.\n");
exit(EXIT_FAILURE);
}
long count_no = get_count(argv[1]);
int i;
for (i = 1; i <= count_no; i++){
if (((i % 3) == 0) && ((i % 5) == 0)){
printf("Hop\n");
continue;
}
if ((i % 3) == 0){
printf("Hoppity\n");
}
if ((i % 5) == 0){
printf("Hophop\n");
}
}
return 0;
}
long get_count(char *name){
FILE *fileptr = fopen(name, "r");
if (!fileptr){
printf("Unable to open file %s.\n", name);
exit(EXIT_FAILURE);
}
size_t text_len = 20;
char *file_text = calloc(text_len, sizeof(char));
while (!feof(fileptr)){
fread(file_text, sizeof(char), text_len, fileptr);
assert(!ferror(fileptr));
text_len += 20;
file_text = realloc(file_text, text_len * sizeof(char));
}
long file_as_int = strtol(file_text, NULL, 10);
free(file_text);
return file_as_int;
}
```
OCaml
=====
```
open String;;
let trim str =
if str = "" then "" else
let search_pos init p next =
let rec search i =
if p i then raise(Failure "empty") else
match str.[i] with
| ' ' | '\n' | '\r' | '\t' -> search (next i)
| _ -> i
in
search init
in
let len = String.length str in
try
let left = search_pos 0 (fun i -> i >= len) (succ)
and right = search_pos (len - 1) (fun i -> i < 0) (pred)
in
String.sub str left (right - left + 1)
with
| Failure "empty" -> ""
;;
let rec iterate_over_numbers curr_num max_num =
(
if curr_num <= max_num then (
if ((curr_num mod 3) == 0) && ((curr_num mod 5) == 0) then
print_endline "Hop"
else if (curr_num mod 3) == 0 then
print_endline "Hoppity"
else if (curr_num mod 5) == 0 then
print_endline "Hophop";
iterate_over_numbers (curr_num + 1) max_num
))
;;
let fname = Sys.argv.(1);;
let infile = open_in fname;;
let file_text = trim (input_line infile);;
close_in infile;;
let input_number = int_of_string file_text;;
iterate_over_numbers 1 input_number;;
```
But I'm curious to know why I'm getting these results. Am I doing something dumb in my C program, or is this just something OCaml is faster at? It seems to me a bit strange that an interpreted program is running a little faster than the C version, and the compiled program is running 5 times as fast.
|
2009/10/19
|
[
"https://Stackoverflow.com/questions/1586423",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Your C code isn't the equivalent of the OCaml code - you used 'else if' in the OCaml to avoid having to recompute moduli quite so much.
There's an awful lot of code in that 'read the long integer'. Why not just use `fscanf()`; it skips blanks and all that automatically, and avoids you doing the `malloc()` etc. I don't often recommend using `fscanf()`, but this looks like a setup for it - a single line, possibly with spaces either side, no funny stuff.
---
Curiosity killed the cat - but in this case, not the Leopard.
I downloaded OCaml 3.11.1 for MacOS X Intel and copied the OCaml code from the question into xxx.ml (OCaml), and compiled that into an object file xxx (using "ocamlc -o xxx xxx.ml"); I copied the C code verbatim into yyy.c and created a variant zzz.c using `fscanf()` and `fclose()`, and compiled them using "gcc -O -o yyy yyy.c" and "gcc -O -o zzz zzz.c". I created a file 'file3' containing: "`987654`" plus a newline. I created a shell script runthem.sh as shown. Note that 'time' is a pesky command that thinks its output must go to stderr even when you'd rather it didn't - you have to work quite hard to get the output where you want it to go. (The range command generates numbers in the given range, inclusive - hence 11 values per program.)
```
Osiris JL: cat runthem.sh
for prog in "ocaml xxx.ml" ./xxx ./yyy ./zzz
do
for iter in $(range 0 10)
do
r=$(sh -c "time $prog file3 >/dev/null" 2>&1)
echo $prog: $r
done
done
Osiris JL:
```
I ran all this on a modern MacBook Pro (3 GHz Core 2 Duo etc, 4GB RAM) running Leopard (10.5.8). The timings I got are shown by:
```
Osiris JL: sh runthem.sh
ocaml xxx.ml: real 0m0.961s user 0m0.524s sys 0m0.432s
ocaml xxx.ml: real 0m0.953s user 0m0.516s sys 0m0.430s
ocaml xxx.ml: real 0m0.959s user 0m0.517s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.517s sys 0m0.430s
ocaml xxx.ml: real 0m0.952s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.431s
ocaml xxx.ml: real 0m0.951s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.959s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.950s user 0m0.515s sys 0m0.431s
ocaml xxx.ml: real 0m0.956s user 0m0.516s sys 0m0.431s
ocaml xxx.ml: real 0m0.952s user 0m0.514s sys 0m0.432s
./xxx: real 0m0.928s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.938s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.494s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.927s user 0m0.493s sys 0m0.430s
./xxx: real 0m0.928s user 0m0.492s sys 0m0.430s
./xxx: real 0m0.933s user 0m0.497s sys 0m0.428s
./xxx: real 0m0.926s user 0m0.494s sys 0m0.429s
./xxx: real 0m0.921s user 0m0.492s sys 0m0.428s
./xxx: real 0m0.925s user 0m0.494s sys 0m0.428s
./yyy: real 0m0.027s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.029s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.031s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./yyy: real 0m0.030s user 0m0.026s sys 0m0.002s
./yyy: real 0m0.028s user 0m0.026s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
./zzz: real 0m0.029s user 0m0.027s sys 0m0.001s
./zzz: real 0m0.030s user 0m0.027s sys 0m0.002s
Osiris JL:
```
I don't see the OCaml code running faster than the C code. I ran the tests with smaller numbers in the file that was read, and the results were similarly in favour of the C code:
Stop number: 345
```
ocaml xxx.ml: real 0m0.027s user 0m0.020s sys 0m0.005s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.005s
ocaml xxx.ml: real 0m0.025s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.022s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.019s user 0m0.015s sys 0m0.003s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
ocaml xxx.ml: real 0m0.020s user 0m0.015s sys 0m0.004s
ocaml xxx.ml: real 0m0.021s user 0m0.016s sys 0m0.004s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.002s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.005s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.002s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./xxx: real 0m0.003s user 0m0.001s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.001s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.002s user 0m0.000s sys 0m0.001s
./yyy: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.001s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.003s user 0m0.000s sys 0m0.002s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
./zzz: real 0m0.002s user 0m0.000s sys 0m0.001s
```
Stop number: 87654
```
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.060s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.041s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.103s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.101s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.102s user 0m0.059s sys 0m0.040s
ocaml xxx.ml: real 0m0.105s user 0m0.059s sys 0m0.041s
./xxx: real 0m0.092s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.087s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.086s user 0m0.045s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.085s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.038s
./xxx: real 0m0.084s user 0m0.044s sys 0m0.039s
./xxx: real 0m0.083s user 0m0.044s sys 0m0.038s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.005s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.004s user 0m0.003s sys 0m0.001s
./yyy: real 0m0.006s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.002s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.004s user 0m0.003s sys 0m0.001s
./zzz: real 0m0.005s user 0m0.003s sys 0m0.001s
```
Obviously, YMMV - but it appears that OCaml is slower than C by a considerable margin, though if the number in the given file is small enough, start-up and file reading dominate the process time.
The C timings, especially at the smaller numbers, are so fast that they are not all that reliable.
|
In a tiny program like this, it's often hard to guess why things work out the way they do. I think if I was doing it, I'd write the code like this (leaving out error checking for the moment):
```
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
static char buffer[20];
int limit, i;
freopen(argv[1], "r", stdin);
fgets(buffer, sizeof(buffer), stdin);
limit = atoi(buffer);
for (i=1; i<=limit; i++) {
int div3=i%3==0;
int div5=i%5==0;
if (div3 && div5)
puts("Hop");
else if (div3)
puts("Hoppity");
else if (div5)
puts("HopHop");
}
return 0;
}
```
Using `freopen` avoids creating another file stream, and instead just connects standard input to the specified file. No guarantee that it's faster, but it's unlikely to be any slower anyway.
Likewise, a good compiler may note that `i` is constant throughout the loop body, and factor out the two remainder operations so it only does each once. Here I've done that by hand, which might not be any faster, but almost certainly won't be any slower.
Using `puts` instead of `printf` is fairly similar -- it might not be any faster, but it almost certainly won't be any slower. Using `printf` the way you have, it has to scan through the entire string looking for a '%', in case you're asking for a conversion, but since `puts` doesn't do any conversions, it doesn't have to do that.
With such a tiny program as this, there's another factor that could be considerably more significant: `puts` will usually be considerably smaller than `printf`. You haven't said how you're doing the timing, but if it includes the time to load the code, really small code may well make more difference than the execution time.
|
47,489,567
|
I've been stuck on this one for hours now.
I'm making a homemade smarthome terminal, and have been tinkering with Kivy for about 2 weeks; it's been great so far. I'm at the point where I want to show the temperature inside a label inside a screen. I've made an ActionBar with 4 buttons that switches between screens when clicked. In the first screen, called "Thermostaat", I want to display a label with a temperature read from another external script I've written. I can't seem to get the temperature inside a label, even with a dummy value.
Here's my main.py:
```
#!/usr/bin/env python3
from kivy.app import App
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.clock import Clock
from kivy.uix.label import Label
from kivy.lang import Builder
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.ids.TempLabel.text = str(thetemp)
print(thetemp)
class ScreenThermo(Screen):
pass
class ScreenLight(Screen):
pass
class ScreenEnergy(Screen):
pass
class ScreenWeather(Screen):
pass
class Manager(ScreenManager):
screen_thermo = ObjectProperty(None)
screen_light = ObjectProperty(None)
screen_energy = ObjectProperty(None)
screen_weather = ObjectProperty(None)
class MenuApp(App):
def thermostaat(self):
print("Thermostaat")
def verlichting(self):
print("Verlichting")
def energie(self):
print("Energie")
def weer(self):
print("Het Weer")
def build(self):
Builder.load_file("test.kv")
return Menu()
if __name__ == '__main__':
MenuApp().run()
```
And here's my .kv file:
```
#:kivy 1.10.0
<Menu>:
manager: screen_manager
orientation: "vertical"
ActionBar:
size_hint_y: 0.05
ActionView:
ActionPrevious:
ActionButton:
text: "Thermostaat"
on_press: root.manager.current= 'thermo'
on_release: app.thermostaat()
ActionButton:
text: "Verlichting"
#I want my screens to switch when clicking on this actionbar button
on_press: root.manager.current= 'light'
on_release: app.verlichting()
ActionButton:
text: "Energieverbruik"
on_press: root.manager.current= 'energy'
on_release: app.energie()
ActionButton:
text: "Het Weer"
on_press: root.manager.current= 'weather'
on_release: app.weer()
Manager:
id: screen_manager
<ScreenThermo>:
Label:
#this is where i want my label that shows the temperature my sensor reads
text: "stuff1"
<ScreenLight>:
Button:
text: "stuff2"
<ScreenEnergy>:
Button:
text: "stuff3"
<ScreenWeather>:
Button:
text: "stuff4"
<Manager>:
id: screen_manager
screen_thermo: screen_thermo
screen_light: screen_light
screen_energy: screen_energy
screen_weather: screen_weather
ScreenThermo:
id: screen_thermo
name: 'thermo'
manager: screen_manager
ScreenLight:
id: screen_light
name: 'light'
manager: screen_manager
ScreenEnergy:
id: screen_energy
name: 'energy'
manager: screen_manager
ScreenWeather:
id: screen_weather
name: 'weather'
manager: screen_manager
```
I'm constantly getting the following error:
```
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
Here's my traceback in case you're wondering:
```
Traceback (most recent call last):
File "main.py", line 57, in <module>
MenuApp().run()
File "/home/default/kivy/kivy/app.py", line 829, in run
runTouchApp()
File "/home/default/kivy/kivy/base.py", line 502, in runTouchApp
EventLoop.window.mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 403, in mainloop
self._mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 289, in _mainloop
EventLoop.idle()
File "/home/default/kivy/kivy/base.py", line 337, in idle
Clock.tick()
File "/home/default/kivy/kivy/clock.py", line 581, in tick
self._process_events()
File "kivy/_clock.pyx", line 367, in kivy._clock.CyClockBase._process_events
cpdef _process_events(self):
File "kivy/_clock.pyx", line 397, in kivy._clock.CyClockBase._process_events
raise
File "kivy/_clock.pyx", line 395, in kivy._clock.CyClockBase._process_events
event.tick(self._last_tick)
File "kivy/_clock.pyx", line 167, in kivy._clock.ClockEvent.tick
ret = callback(self._dt)
File "main.py", line 17, in getTemp
self.ids.TempLabel.text = str(thetemp)
File "kivy/properties.pyx", line 839, in kivy.properties.ObservableDict.__getattr__
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
I hope someone is willing to help me with this, because I want to continue with this project. I've got most of the functionality of my SmartHome equipment working already, so the last part is to make a decent GUI to fit it all in (controlling the lights, controlling the temperature in the house, etc.).
|
2017/11/25
|
[
"https://Stackoverflow.com/questions/47489567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4977709/"
] |
There is no id `TempLabel` in your `Menu`.
First, you must add the `TempLabel` id in your kv:
```
...
<ScreenThermo>:
Label:
id: TempLabel
text: "stuff1"
...
```
Then update the right label:
```
...
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.manager.screen_thermo.ids.TempLabel.text = str(thetemp)
print(thetemp)
...
```
|
Use a `StringProperty`:
```
<ScreenThermo>:
Label:
#this is where i want my label that shows the temperature my sensor reads
text: root.thermo_text
class ScreenThermo(Screen):
thermo_text = StringProperty("stuff")
...
```
Then, any time you want to set the text, just do
```
my_screen.thermo_text = "ASD"
```
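For the layout in the question, a minimal sketch of wiring this into `Menu.getTemp` (it relies on the `manager` and `screen_thermo` ObjectProperties already present in the posted code):
```
from kivy.uix.boxlayout import BoxLayout
from kivy.properties import ObjectProperty
from kivy.clock import Clock


class Menu(BoxLayout):
    manager = ObjectProperty(None)

    def __init__(self, **kwargs):
        super(Menu, self).__init__(**kwargs)
        Clock.schedule_interval(self.getTemp, 1)

    def getTemp(self, dt):
        thetemp = 55  # will be changed to temp.read()
        # go through the manager (bound to screen_manager in kv) to the thermo
        # screen and set its StringProperty; the Label bound to root.thermo_text
        # updates automatically
        self.manager.screen_thermo.thermo_text = str(thetemp)
```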
|
47,489,567
|
I've been stuck on this one for hours now.
I'm making a homemade smarthome terminal, and have been tinkering with Kivy for about 2 weeks; it's been great so far. I'm at the point where I want to show the temperature inside a label inside a screen. I've made an ActionBar with 4 buttons that switches between screens when clicked. In the first screen, called "Thermostaat", I want to display a label with a temperature read from another external script I've written. I can't seem to get the temperature inside a label, even with a dummy value.
Here's my main.py:
```
#!/usr/bin/env python3
from kivy.app import App
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.clock import Clock
from kivy.uix.label import Label
from kivy.lang import Builder
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.ids.TempLabel.text = str(thetemp)
print(thetemp)
class ScreenThermo(Screen):
pass
class ScreenLight(Screen):
pass
class ScreenEnergy(Screen):
pass
class ScreenWeather(Screen):
pass
class Manager(ScreenManager):
screen_thermo = ObjectProperty(None)
screen_light = ObjectProperty(None)
screen_energy = ObjectProperty(None)
screen_weather = ObjectProperty(None)
class MenuApp(App):
def thermostaat(self):
print("Thermostaat")
def verlichting(self):
print("Verlichting")
def energie(self):
print("Energie")
def weer(self):
print("Het Weer")
def build(self):
Builder.load_file("test.kv")
return Menu()
if __name__ == '__main__':
MenuApp().run()
```
And here's my .kv file:
```
#:kivy 1.10.0
<Menu>:
manager: screen_manager
orientation: "vertical"
ActionBar:
size_hint_y: 0.05
ActionView:
ActionPrevious:
ActionButton:
text: "Thermostaat"
on_press: root.manager.current= 'thermo'
on_release: app.thermostaat()
ActionButton:
text: "Verlichting"
#I want my screens to switch when clicking on this actionbar button
on_press: root.manager.current= 'light'
on_release: app.verlichting()
ActionButton:
text: "Energieverbruik"
on_press: root.manager.current= 'energy'
on_release: app.energie()
ActionButton:
text: "Het Weer"
on_press: root.manager.current= 'weather'
on_release: app.weer()
Manager:
id: screen_manager
<ScreenThermo>:
Label:
#this is where i want my label that shows the temperature my sensor reads
text: "stuff1"
<ScreenLight>:
Button:
text: "stuff2"
<ScreenEnergy>:
Button:
text: "stuff3"
<ScreenWeather>:
Button:
text: "stuff4"
<Manager>:
id: screen_manager
screen_thermo: screen_thermo
screen_light: screen_light
screen_energy: screen_energy
screen_weather: screen_weather
ScreenThermo:
id: screen_thermo
name: 'thermo'
manager: screen_manager
ScreenLight:
id: screen_light
name: 'light'
manager: screen_manager
ScreenEnergy:
id: screen_energy
name: 'energy'
manager: screen_manager
ScreenWeather:
id: screen_weather
name: 'weather'
manager: screen_manager
```
I'm constantly getting the following error:
```
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
Here's my traceback in case you're wondering:
```
Traceback (most recent call last):
File "main.py", line 57, in <module>
MenuApp().run()
File "/home/default/kivy/kivy/app.py", line 829, in run
runTouchApp()
File "/home/default/kivy/kivy/base.py", line 502, in runTouchApp
EventLoop.window.mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 403, in mainloop
self._mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 289, in _mainloop
EventLoop.idle()
File "/home/default/kivy/kivy/base.py", line 337, in idle
Clock.tick()
File "/home/default/kivy/kivy/clock.py", line 581, in tick
self._process_events()
File "kivy/_clock.pyx", line 367, in kivy._clock.CyClockBase._process_events
cpdef _process_events(self):
File "kivy/_clock.pyx", line 397, in kivy._clock.CyClockBase._process_events
raise
File "kivy/_clock.pyx", line 395, in kivy._clock.CyClockBase._process_events
event.tick(self._last_tick)
File "kivy/_clock.pyx", line 167, in kivy._clock.ClockEvent.tick
ret = callback(self._dt)
File "main.py", line 17, in getTemp
self.ids.TempLabel.text = str(thetemp)
File "kivy/properties.pyx", line 839, in kivy.properties.ObservableDict.__getattr__
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
I hope someone is willing to help me with this, because I want to continue with this project. I've got most of the functionality of my SmartHome equipment working already, so the last part is to make a decent GUI to fit it all in (controlling the lights, controlling the temperature in the house, etc.).
|
2017/11/25
|
[
"https://Stackoverflow.com/questions/47489567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4977709/"
] |
use a StringProperty
```
<ScreenThermo>:
Label:
#this is where i want my label that shows the temperature my sensor reads
text: root.thermo_text
class ScreenThermo(BoxLayout):
thermo_text = StringProperty("stuff")
...
```
then any time you want to set the text, just do
```
my_screen.thermo_text = "ASD"
```
|
Using ObjectProperty
====================
In the example, I used an ObjectProperty to hook up to the label for temperature because an id is a weakref to the widget. Using an ***ObjectProperty*** creates a direct reference, provides faster access and is more explicit.
main.py
-------
### 1. Import External Script
```
from temperature import read_temperature
```
### 2. Declare ObjectProperty
```
class ScreenThermo(Screen):
temperature = ObjectProperty(None)
```
### 3. Update Temperature Text
```
def getTemp(self, dt):
temp = read_temperature()
print(temp)
self.manager.screen_thermo.temperature.text = str(temp)
```
menu.kv - kv File
-----------------
### 4. Hook up ObjectProperty to id
```
<ScreenThermo>:
temperature: temperature
Label:
id: temperature
#this is where i want my label that shows the temperature my sensor reads
text: "stuff1"
```
temperature.py - Simulator
--------------------------
This is an external script simulating the temperature.
```
import random
def read_temperature():
return random.randint(0, 100)
```
Output
------
[](https://i.stack.imgur.com/6WbRX.png)
|
47,489,567
|
I've been stuck on this one for hours now.
I'm making a homemade smarthome terminal, and I have been tinkering with kivy for about 2 weeks and it's been great so far. I'm at the point where I want to show the temperature inside a label inside a screen. I've made an actionbar with 4 buttons that slides through screens when clicked. In the first screen, called "Thermostaat", I want to display a label with a temperature read from another external script I've written. I can't seem to get the temperature into a label, even with a dummy value.
Here's my main.py:
```
#!/usr/bin/env python3
from kivy.app import App
from kivy.uix.screenmanager import ScreenManager,Screen
from kivy.properties import ObjectProperty, StringProperty
from kivy.uix.boxlayout import BoxLayout
from kivy.clock import Clock
from kivy.uix.label import Label
from kivy.lang import Builder
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.ids.TempLabel.text = str(thetemp)
print(thetemp)
class ScreenThermo(Screen):
pass
class ScreenLight(Screen):
pass
class ScreenEnergy(Screen):
pass
class ScreenWeather(Screen):
pass
class Manager(ScreenManager):
screen_thermo = ObjectProperty(None)
screen_light = ObjectProperty(None)
screen_energy = ObjectProperty(None)
screen_weather = ObjectProperty(None)
class MenuApp(App):
def thermostaat(self):
print("Thermostaat")
def verlichting(self):
print("Verlichting")
def energie(self):
print("Energie")
def weer(self):
print("Het Weer")
def build(self):
Builder.load_file("test.kv")
return Menu()
if __name__ == '__main__':
MenuApp().run()
```
And here's my .kv file:
```
#:kivy 1.10.0
<Menu>:
manager: screen_manager
orientation: "vertical"
ActionBar:
size_hint_y: 0.05
ActionView:
ActionPrevious:
ActionButton:
text: "Thermostaat"
on_press: root.manager.current= 'thermo'
on_release: app.thermostaat()
ActionButton:
text: "Verlichting"
#I want my screens to switch when clicking on this actionbar button
on_press: root.manager.current= 'light'
on_release: app.verlichting()
ActionButton:
text: "Energieverbruik"
on_press: root.manager.current= 'energy'
on_release: app.energie()
ActionButton:
text: "Het Weer"
on_press: root.manager.current= 'weather'
on_release: app.weer()
Manager:
id: screen_manager
<ScreenThermo>:
Label:
#this is where i want my label that shows the temperature my sensor reads
text: "stuff1"
<ScreenLight>:
Button:
text: "stuff2"
<ScreenEnergy>:
Button:
text: "stuff3"
<ScreenWeather>:
Button:
text: "stuff4"
<Manager>:
id: screen_manager
screen_thermo: screen_thermo
screen_light: screen_light
screen_energy: screen_energy
screen_weather: screen_weather
ScreenThermo:
id: screen_thermo
name: 'thermo'
manager: screen_manager
ScreenLight:
id: screen_light
name: 'light'
manager: screen_manager
ScreenEnergy:
id: screen_energy
name: 'energy'
manager: screen_manager
ScreenWeather:
id: screen_weather
name: 'weather'
manager: screen_manager
```
I'm constantly getting the following error:
```
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
Here's my traceback in case you're wondering:
```
Traceback (most recent call last):
File "main.py", line 57, in <module>
MenuApp().run()
File "/home/default/kivy/kivy/app.py", line 829, in run
runTouchApp()
File "/home/default/kivy/kivy/base.py", line 502, in runTouchApp
EventLoop.window.mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 403, in mainloop
self._mainloop()
File "/home/default/kivy/kivy/core/window/window_pygame.py", line 289, in _mainloop
EventLoop.idle()
File "/home/default/kivy/kivy/base.py", line 337, in idle
Clock.tick()
File "/home/default/kivy/kivy/clock.py", line 581, in tick
self._process_events()
File "kivy/_clock.pyx", line 367, in kivy._clock.CyClockBase._process_events
cpdef _process_events(self):
File "kivy/_clock.pyx", line 397, in kivy._clock.CyClockBase._process_events
raise
File "kivy/_clock.pyx", line 395, in kivy._clock.CyClockBase._process_events
event.tick(self._last_tick)
File "kivy/_clock.pyx", line 167, in kivy._clock.ClockEvent.tick
ret = callback(self._dt)
File "main.py", line 17, in getTemp
self.ids.TempLabel.text = str(thetemp)
File "kivy/properties.pyx", line 839, in kivy.properties.ObservableDict.__getattr__
super(ObservableDict, self).__getattr__(attr))
AttributeError: 'super' object has no attribute '__getattr__'
```
I hope someone is willing to help me with this, because I want to continue with this project. I've got most of the functionality of my SmartHome equipment working already, so the last part is to make a decent GUI to fit it all in (controlling the lights, controlling the temperature in the house, etc.).
|
2017/11/25
|
[
"https://Stackoverflow.com/questions/47489567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4977709/"
] |
There is no id TempLabel in your Menu.
First, you must add the TempLabel in your kv:
```
...
<ScreenThermo>:
Label:
id: TempLabel
text: "stuff1"
...
```
then update the right label:
```
...
class Menu(BoxLayout):
manager = ObjectProperty(None)
def __init__(self,**kwargs):
super(Menu, self).__init__(**kwargs)
Clock.schedule_interval(self.getTemp, 1)
def getTemp(self,dt):
thetemp = 55 #will be changed to temp.read()
self.manager.screen_thermo.ids.TempLabel.text = str(thetemp)
print(thetemp)
...
```
|
Using ObjectProperty
====================
In the example, I used an ObjectProperty to hook up to the label for temperature because an id is a weakref to the widget. Using an ***ObjectProperty*** creates a direct reference, provides faster access and is more explicit.
main.py
-------
### 1. Import External Script
```
from temperature import read_temperature
```
### 2. Declare ObjectProperty
```
class ScreenThermo(Screen):
temperature = ObjectProperty(None)
```
### 3. Update Temperature Text
```
def getTemp(self, dt):
temp = read_temperature()
print(temp)
self.manager.screen_thermo.temperature.text = str(temp)
```
menu.kv - kv File
-----------------
### 4. Hook up ObjectProperty to id
```
<ScreenThermo>:
temperature: temperature
Label:
id: temperature
#this is where i want my label that shows the temperature my sensor reads
text: "stuff1"
```
temperature.py - Simulator
--------------------------
This is an external script simulating the temperature.
```
import random
def read_temperature():
return random.randint(0, 100)
```
Output
------
[](https://i.stack.imgur.com/6WbRX.png)
|
60,036,522
|
**I am learning 'Automate the Boring Stuff with Python',
here is the code in the book:**
```
import csv, os
os.makedirs('headerRemoved', exist_ok=True)
#Loop through every file in the current working directory)
for csvFilename in os.listdir('C://Users//Xinxin//Desktop//123'):
if not csvFilename.endswith('.csv'):
continue # skip non-csv files
print('Removing header from ' + csvFilename + '...')
# Read the CSV file in (skipping first row).
csvRows = []
csvFileObj = open(csvFilename)
readerObj = csv.reader(csvFileObj)
for row in readerObj:
if readerObj.line_num == 1:
continue # skip first row
csvRows.append(row)
csvFileObj.close()
# Write out the CSV file.
csvFileObj = open(os.path.join('headerRemoved', csvFilename), 'w', newline='')
csvWriter = csv.writer(csvFileObj)
for row in csvRows:
csvWriter.writerow(row)
csvFileObj.close()
```
According to the book, it says 'Run the above python program in that folder.'
It works, but when I move the python program out of the csv folder and run the code,
it then shows
```
C:\Users\Xinxin\PycharmProjects\PPPP\venv\Scripts\python.exe C:/Users/Xinxin/Desktop/removeheader.py
Removing header from NAICS_data_1048.csv...
Traceback (most recent call last):
File "C:/Users/Xinxin/Desktop/removeheader.py", line 44, in <module>
csvFileObj = open(csvFilename)
FileNotFoundError: [Errno 2] No such file or directory: 'NAICS_data_1048.csv'
Process finished with exit code 1
```
Why can't the csv files be opened? I already wrote the absolute dir in line 4...
**Thank you so much for your help.**
|
2020/02/03
|
[
"https://Stackoverflow.com/questions/60036522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12831792/"
] |
>
> but when I move the python program out of the csv folder, and run the code, then it shows
>
>
>
1) This is the problem. Try adding the directory of the files to your removeheader.py (first line):
```
import sys
sys.path.append(r'C:/Users/Xinxin/Desktop/123')
```
2) Store the files in the same location as the script to make your life easier
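As a third option (my own sketch, not part of the original answer), keep the script wherever you like and join the folder path onto each file name before opening it, so that `open()` no longer depends on the current working directory:
```
import os

folder = 'C:/Users/Xinxin/Desktop/123'   # the folder from the question

for csvFilename in os.listdir(folder):
    if not csvFilename.endswith('.csv'):
        continue   # skip non-csv files
    # build the full path instead of relying on the current working directory
    with open(os.path.join(folder, csvFilename)) as csvFileObj:
        print(csvFilename, 'has', sum(1 for _ in csvFileObj), 'lines')
```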
|
You can need get current directory. Then add current directory with file name.
Example:
```
currentDir = os.getcwd()
currentFileCSV = currentDir +"//" + csvFilename
csvFileObj = open(currentFileCSV)
```
|
31,090,479
|
I think I'm not understanding something basic about python's argparse.
I am trying to use the Google YouTube API for python script, but I am not understanding how to pass values to the script without using the command line.
For example, [here](https://developers.google.com/youtube/v3/docs/videos/insert) is the example for the API. The examples on github and elsewhere show this example as being called from the command line, from where the argparse values are passed when the script is called.
I don't want to use the command line. I am building an app that uses a decorator to obtain login credentials for the user, and when that user wants to upload to their YouTube account, they submit a form which will then call this script and have the argparse values passed to it.
How do I pass values to argparser (see below for portion of code in YouTube upload API script) from another python script?
```
if __name__ == '__main__':
argparser.add_argument("--file", required=True, help="Video file to upload")
argparser.add_argument("--title", help="Video title", default="Test Title")
argparser.add_argument("--description", help="Video description",
default="Test Description")
argparser.add_argument("--category", default="22",
help="Numeric video category. " +
"See https://developers.google.com/youtube/v3/docs/videoCategories/list")
argparser.add_argument("--keywords", help="Video keywords, comma separated",
default="")
argparser.add_argument("--privacyStatus", choices=VALID_PRIVACY_STATUSES,
default=VALID_PRIVACY_STATUSES[0], help="Video privacy status.")
args = argparser.parse_args()
if not os.path.exists(args.file):
exit("Please specify a valid file using the --file= parameter.")
youtube = get_authenticated_service(args)
try:
initialize_upload(youtube, args)
except HttpError, e:
print "An HTTP error %d occurred:\n%s" % (e.resp.status, e.content)
```
---
EDIT: Per request, here is the traceback for the 400 Error I am getting using either the standard method to initialize a dictionary or using argparse to create a dictionary. I thought I was getting this due to badly formed parameters, but perhaps not:
```
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1535, in __call__
rv = self.handle_exception(request, response, e)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1529, in __call__
rv = self.router.dispatch(request, response)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 1102, in __call__
return handler.dispatch()
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "C:\Program Files (x86)\Google\google_appengine\lib\webapp2-2.5.2\webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "C:\Users\...\testapp\oauth2client\appengine.py", line 796, in setup_oauth
resp = method(request_handler, *args, **kwargs)
File "C:\Users\...\testapp\testapp.py", line 116, in get
resumable_upload(insert_request)
File "C:\Users\...\testapp\testapp.py", line 183, in resumable_upload
status, response = insert_request.next_chunk()
File "C:\Users\...\testapp\oauth2client\util.py", line 129, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\Users\...\testapp\apiclient\http.py", line 874, in next_chunk
return self._process_response(resp, content)
File "C:\Users\...\testapp\apiclient\http.py", line 901, in _process_response
raise HttpError(resp, content, uri=self.uri)
HttpError: <HttpError 400 when requesting https://www.googleapis.com/upload/youtube/v3/videos?alt=json&part=status%2Csnippet&uploadType=resumable returned "Bad Request">
```
|
2015/06/27
|
[
"https://Stackoverflow.com/questions/31090479",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3449806/"
] |
Whether it is the best approach or not is really for you to figure out. But using argparse without the command line is **easy**. I do it all the time because I have batches that can be run from the command line or can also be called by other code - which is great for unit testing, as mentioned. argparse is especially good at defaulting parameters, for example.
Starting with your sample.
```
import argparse
argparser = argparse.ArgumentParser()
argparser.add_argument("--file", required=True, help="Video file to upload")
argparser.add_argument("--title", help="Video title", default="Test Title")
argparser.add_argument("--description", help="Video description",
default="Test Description")
argparser.add_argument("--category", default="22",
help="Numeric video category. " +
"See https://developers.google.com/youtube/v3/docs/videoCategories/list")
argparser.add_argument("--keywords", help="Video keywords, comma separated",
default="")
VALID_PRIVACY_STATUSES = ("private","public")
argparser.add_argument("--privacyStatus", choices=VALID_PRIVACY_STATUSES,
default=VALID_PRIVACY_STATUSES[0], help="Video privacy status.")
#pass in any positional or required variables.. as strings in a list
#which corresponds to sys.argv[1:]. Not a string => arcane errors.
args = argparser.parse_args(["--file", "myfile.avi"])
#you can populate other optional parameters, not just positionals/required
#args = argparser.parse_args(["--file", "myfile.avi", "--title", "my title"])
print vars(args)
#modify them as you see fit, but no more validation is taking place
#so best to use parse_args.
args.privacyStatus = "some status not in choices - already parsed"
args.category = 42
print vars(args)
#proceed as before, the system doesn't care if it came from the command line or not
# youtube = get_authenticated_service(args)
```
output:
```
{'category': '22', 'description': 'Test Description', 'title': 'Test Title', 'privacyStatus': 'private', 'file': 'myfile.avi', 'keywords': ''}
{'category': 42, 'description': 'Test Description', 'title': 'Test Title', 'privacyStatus': 'some status not in choices - already parsed', 'file': 'myfile.avi', 'keywords': ''}
```
|
Calling `parse_args` with your own list of strings is a common `argparse` testing method. If you don't give `parse_args` this list, it uses `sys.argv[1:]` - i.e. the strings that the shell gives it. `sys.argv[0]` is the script name.
```
args = argparser.parse_args(['--foo','foovalue','barvalue'])
```
It is also easy to construct an `args` object.
```
args = argparse.Namespace(foo='foovalue', bar='barvalue')
```
In fact, if you print `args` from a `parse_args` call it should look something like that. As described in the documentation, a `Namespace` is a simple object, and the values are attributes. So it is easy to construct your own `namespace` class. All `args` needs to be is something that returns the appropriate value when used as:
```
x = args.foo
b = args.bar
```
Also as noted in the docs, `vars(args)` turns this namespace into a dictionary. Some code likes to use a dictionary, but evidently these YouTube functions want a `Namespace` (or equivalent).
```
get_authenticated_service(args)
initialize_upload(youtube, args)
```
<https://docs.python.org/3/library/argparse.html#beyond-sys-argv>
<https://docs.python.org/3/library/argparse.html#the-namespace-object>
---
<https://developers.google.com/youtube/v3/guides/uploading_a_video?hl=id-ID>
has `get_authenticated_service` and `initialize_upload` code
```
def initialize_upload(youtube, options):
tags = None
if options.keywords:
tags = options.keywords.split(",")
body=dict(
snippet=dict(
title=options.title,
description=options.description,
tags=tags,
categoryId=options.category
),
status=dict(
privacyStatus=options.privacyStatus
)
)
....
```
The `args` from the parser is `options`, which it uses as `options.category`, `options.title`, etc. You could substitute any other object which has the same behavior and the necessary attributes.
|
16,213,235
|
a methodology question:
I have a "main" python script which runs on an infinite loop on my system, and I want to send information to it (a json data string for example) occasionally with some other python scripts that will be started later by myself or another program and will end just after sending the string.
I can't use subprocess here because my main script doesn't know when the other will run and what code they will execute.
I'm thinking of making the main script listen on a local port and making the other scripts send it the strings on that port, but is there a better way to do it ?
|
2013/04/25
|
[
"https://Stackoverflow.com/questions/16213235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1112326/"
] |
ZeroMQ: <http://www.zeromq.org/> - is the best solution for interprocess communication IMHO, and it has an excellent binding for Python: <http://www.zeromq.org/bindings:python>
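For illustration, a minimal PUSH/PULL sketch with pyzmq (not from the original answer; the port and the message format are arbitrary choices). The long-running "main" script pulls JSON messages:
```
import zmq

context = zmq.Context()
receiver = context.socket(zmq.PULL)
receiver.bind("tcp://127.0.0.1:5557")   # arbitrary local port

while True:
    message = receiver.recv_json()      # blocks until some sender pushes a message
    print(message)
```
and each short-lived script just connects and pushes:
```
import zmq

context = zmq.Context()
sender = context.socket(zmq.PUSH)
sender.connect("tcp://127.0.0.1:5557")
sender.send_json({"event": "hello"})    # serialised to JSON before sending
```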
|
Since the "main" script looks like a service you can enhance it with a web API. [bottle](http://bottlepy.org/) is the perfect solution for this. With this additional code your python script is able to receive requests and process them:
```
import json
from bottle import run, post, request, response
@post('/process')
def my_process():
req_obj = json.loads(request.body.read())
# do something with req_obj
# ...
return 'All done'
run(host='localhost', port=8080, debug=True)
```
The client script may use httplib to send a message to the server and read the response:
```
import httplib
c = httplib.HTTPConnection('localhost', 8080)
c.request('POST', '/process', '{}')
doc = c.getresponse().read()
print doc
# 'All done'
```
|
16,213,235
|
a methodology question:
I have a "main" python script which runs on an infinite loop on my system, and I want to send information to it (a json data string for example) occasionally with some other python scripts that will be started later by myself or another program and will end just after sending the string.
I can't use subprocess here because my main script doesn't know when the other will run and what code they will execute.
I'm thinking of making the main script listen on a local port and making the other scripts send it the strings on that port, but is there a better way to do it ?
|
2013/04/25
|
[
"https://Stackoverflow.com/questions/16213235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1112326/"
] |
ZeroMQ: <http://www.zeromq.org/> - is the best solution for interprocess communication IMHO, and it has an excellent binding for Python: <http://www.zeromq.org/bindings:python>
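As an illustration (my own sketch, not from the original answer), a REQ/REP pair with pyzmq lets the main script acknowledge every JSON string it receives; the port is an arbitrary choice. The main script:
```
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://127.0.0.1:5558")     # arbitrary local port

while True:
    data = socket.recv_json()           # blocks until a client sends something
    print(data)
    socket.send_json({"status": "ok"})  # acknowledge so the client can exit
```
and the short-lived sender:
```
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5558")
socket.send_json({"event": "hello"})
print(socket.recv_json())               # wait for the acknowledgement
```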
|
In case you are interested in implementing the client script that Mike presented in Python 3.x, you will quickly find that there is no httplib available. Fortunately, the same thing is done with the library http.client.
Otherwise it is the same:
```
import http.client
c = http.client.HTTPConnection('localhost', 8080)
c.request('POST', '/process', '{}')
doc = c.getresponse().read()
print(doc)
```
Though this is old, I figured I would post this since I had a similar question today, but using a server.
|
34,471,188
|
Doing Python exercises, I've run into a problem with a string:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)-1
#print ("indice is:",indice)
while indice > 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Putting off "-1" the results is:
```
IndexError: string index out of range
```
|
2015/12/26
|
[
"https://Stackoverflow.com/questions/34471188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4075480/"
] |
```
while indice > 0:
```
should be
```
while indice >= 0:
```
so that the first character (index `0`) is printed last.
---
BTW, if you use [`reversed`](https://docs.python.org/3/library/functions.html#reversed), you don't need to calculate index yourself:
```
s = 'mandarino'
for ch in reversed(s):
print(ch)
```
Side note: Don't use `str` as a variable name. It will shadow the built-in function/type [`str`](https://docs.python.org/3/library/functions.html#str).
|
Though the above answers are correct, this is a more `pythonic` way:
```
string = 'mandarino'
indice = len(string)
while indice > 0:
indice -= 1
print (string[indice]),
```
|
34,471,188
|
Doing Python exercises, I've run into a problem with a string:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)-1
#print ("indice is:",indice)
while indice > 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Putting off "-1" the results is:
```
IndexError: string index out of range
```
|
2015/12/26
|
[
"https://Stackoverflow.com/questions/34471188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4075480/"
] |
```
while indice > 0:
```
should be
```
while indice >= 0:
```
so that the first character (index `0`) is printed last.
---
BTW, if you use [`reversed`](https://docs.python.org/3/library/functions.html#reversed), you don't need to calculate index yourself:
```
s = 'mandarino'
for ch in reversed(s):
print(ch)
```
Side note: Don't use `str` as a variable name. It will shadow the built-in function/type [`str`](https://docs.python.org/3/library/functions.html#str).
|
You can also treat strings as a list of characters and use reverse slicing, this way:
```
>>> str1 = 'mandarino'
>>> for ch in str1[::-1]:
print ch
o
n
i
r
a
d
n
a
m
```
|
34,471,188
|
Doing Python exercises, I've run into a problem with a string:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)-1
#print ("indice is:",indice)
while indice > 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Putting off "-1" the results is:
```
IndexError: string index out of range
```
|
2015/12/26
|
[
"https://Stackoverflow.com/questions/34471188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4075480/"
] |
You can add:
```
indice = indice - 1
```
and change:
```
while indice > 0:
```
to
```
while indice >= 0:
```
so the script will be:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)
indice = indice - 1
#print ("indice is:",indice)
while indice >= 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Better and handier:
```
indice = len(str)-1
```
|
Though the above answers are correct, this is a more `pythonic` way:
```
string = 'mandarino'
indice = len(string)
while indice > 0:
indice -= 1
print (string[indice]),
```
|
34,471,188
|
Doing Python exercises, I've run into a problem with a string:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)-1
#print ("indice is:",indice)
while indice > 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Putting off "-1" the results is:
```
IndexError: string index out of range
```
|
2015/12/26
|
[
"https://Stackoverflow.com/questions/34471188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4075480/"
] |
You can add:
```
indice = indice - 1
```
and change:
```
while indice > 0:
```
to
```
while indice >= 0:
```
so the script will be:
```
#!/usr/bin/python
str = 'mandarino'
indice = len(str)
indice = indice - 1
#print ("indice is:",indice)
while indice >= 0:
lett = str[indice]
print (lett)
indice = indice -1
```
Better and handier:
```
indice = len(str)-1
```
|
You can also treat strings as a list of characters and use reverse slicing, this way:
```
>>> str1 = 'mandarino'
>>> for ch in str1[::-1]:
print ch
o
n
i
r
a
d
n
a
m
```
|
43,957,412
|
Is it possible to extract the text information from a popup page automatically using python?
I have a Google Play Store app link:
<https://play.google.com/store/apps/details?id=com.facebook.katana>
If you scroll down to the "ADDITIONAL INFORMATION" section, you will find "Permissions". Clicking 'View details' underneath will pop up a page. Is the text in that popup extractable?
And how do I get the information from the main page source if it is doable?
Thanks a lot.
|
2017/05/13
|
[
"https://Stackoverflow.com/questions/43957412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5090248/"
] |
You'll need to do the following:
1) Set up a webdriver to control the website.
<https://sites.google.com/a/chromium.org/chromedriver/getting-started>
2) Right click "view details" and select inspect source. This will open the source code of the page. The highlighted portion corresponds to that button. You can right click and copy the xpath and use this to call a click function.
3) Once the new page opens, navigate your driver to this page and follow the same instructions as in step 2 to select the text you want. You can then use the innerhtml function to grab the text from this element.
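A minimal sketch of those steps with Selenium 4 (an assumption on my part - the original answer names no version, and the two XPaths below are placeholders you would replace with the ones copied from the inspector, since the Play Store markup changes regularly):
```
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()   # assumes chromedriver is installed and on PATH
driver.get("https://play.google.com/store/apps/details?id=com.facebook.katana")

# placeholder XPath for the "View details" button
details_button = driver.find_element(By.XPATH, '//button[.="View details"]')
details_button.click()

# placeholder XPath for the popup element holding the permissions text
dialog = driver.find_element(By.XPATH, '//div[@role="dialog"]')
print(dialog.get_attribute("innerHTML"))

driver.quit()
```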
|
This is going to be rather complicated: you'll have to dig through the HTML to find out what the button does (the link is actually a `button` element). The best would be to use a Google Play Store API, which doesn't exist as of right now. The easiest option would therefore be to go through a third-party API which would crawl the play store for you. Here's an [example](https://apptweak.io/api).
I won't walk you through the whole process, but you'll probably have to use the [requests](http://docs.python-requests.org/en/master/) module.
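For example, a hypothetical call with the requests module - the URL, path, auth header and response shape below are placeholders for whichever third-party API you pick, not a real endpoint:
```
import requests

# placeholder endpoint and credentials -- substitute the real ones from the
# third-party API's documentation
resp = requests.get(
    "https://api.example.com/apps/com.facebook.katana/permissions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"country": "us"},
)
resp.raise_for_status()
print(resp.json())
```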
|
9,687,922
|
I recently built an application, for a client, which has several python files. I use ubuntu, and now that I am finished, I would like to give this to the client in a way that would make it easy for her to use in windows.
I have looked into py2exe with wine, as well as cx\_freeze and some other stuff, but cannot find a straightforward tutorial or useful documentation for turning many python files in ubuntu into an easy-to-use windows application or executable or anything really.
Thanks!
|
2012/03/13
|
[
"https://Stackoverflow.com/questions/9687922",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1266969/"
] |
[This page](http://bytes.com/topic/python/answers/26340-linux-wine-py2exe) appears to have a solution, as the asker didn't reply:
1. Install WINE.
2. Use WINE to install Python 2.3.
3. Use WINE to install py2exe.
4. Make a setup.py file for py2exe to compile your script:
>
>
> ```
> from distutils.core import setup
> import py2exe
>
> setup(name="vervang",
> scripts=["vervang.py"],
> )
>
> ```
>
>
* Run `wine python.exe setup.py py2exe`
[This page](http://wiki.python.org/moin/Py2Exe) says the resulting binaries might not be valid Win32 executables, though.
|
py2exe will not work on linux. Try [pyinstaller](http://www.pyinstaller.org/); it is a pure python implementation that will work on linux, mac and windows.
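For reference, the basic invocation is just the following (a sketch of typical usage; note that PyInstaller builds for the OS it runs on, so a Windows .exe has to be built on Windows or under Wine):
```
pip install pyinstaller
pyinstaller --onefile your_script.py   # the bundle ends up in the dist/ folder
```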
|
71,040,315
|
I have two builds at the same time when opening a PR.
[](https://i.stack.imgur.com/QqREB.png)
According to docs, that could be turned off via [web interface](https://docs.travis-ci.com/user/web-ui/#build-pushed-branches)
[](https://i.stack.imgur.com/r9zaa.png)
I turned that off (I want to have only the PR build) and also added `only` to `.travis.yml`, but I still have two builds; now the branch build just sits in the expected stage. In the Travis web UI, no more branch builds are created.
[](https://i.stack.imgur.com/ukPJt.png)
```
branches:
only:
- master
language: python
os: linux
dist: xenial
jobs:
include:
- name: pytest
python:
- 3.7
install:
- pip install -U pip
- pip install -U pytest
- pip install -U PyYAML
- pip install -U Cerberus
script:
- pytest -vvs
```
|
2022/02/08
|
[
"https://Stackoverflow.com/questions/71040315",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5712858/"
] |
I had to set GPU acceleration for the integrated terminal from the default `auto` to either `off` or `canvas`.
In the settings.json that is achieved with:
```json
{
"terminal.integrated.gpuAcceleration": "canvas",
}
```
|
Try changing editor font size and window zoom level:
```
{
"window.zoomLevel": -1,
"editor.fontSize": 14,
"terminal.integrated.fontSize": 12,
}
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
[argparse](http://docs.python.org/dev/library/argparse.html) is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts.
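For instance, a minimal sketch for this script (the positional argument name `groups` is my own choice):
```
import argparse

parser = argparse.ArgumentParser(description="Collect rows matching the given groups")
# one or more positional group names: python Read_xls_files.py group1 group2 group3
parser.add_argument("groups", nargs="+", help="group names to match against column 1")
args = parser.parse_args()

print(args.groups)   # e.g. ['group1', 'group2', 'group3']
```
`args.groups` is then just a list, so the check becomes `if sh.cell(i, 1).value in args.groups:`.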
|
I believe this would work, and would avoid iterating over sys.argv:
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in sys.argv[1:]:
hlo.append(sh.cell(i, 8).value)
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
You can iterate over `sys.argv[1:]`, e.g. via something like:
```
for grp in sys.argv[1:]:
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == grp:
hlo.append(sh.cell(i, 8).value)
```
|
I would recommend taking a look at Python's [optparse](http://docs.python.org/library/optparse.html) module. It's a nice helper to parse `sys.argv`.
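For example (a sketch only; the `-v` option is my own invention - with optparse the group names simply stay in the positional `args` list):
```
from optparse import OptionParser

parser = OptionParser(usage="usage: %prog [options] group1 group2 ...")
parser.add_option("-v", "--verbose", action="store_true", default=False,
                  help="print each matching row")
options, args = parser.parse_args()

print(args)              # the positional group names, e.g. ['group1', 'group2']
print(options.verbose)   # the parsed option value
```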
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
I would recommend taking a look at Python's [optparse](http://docs.python.org/library/optparse.html) module. It's a nice helper to parse `sys.argv`.
|
I believe this would work, and would avoid iterating over sys.argv:
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in sys.argv[1:]:
hlo.append(sh.cell(i, 8).value)
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
[argparse](http://docs.python.org/dev/library/argparse.html) is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts.
|
```
# First thing is to get a set of your query strings.
queries = set(argv[1:])
# If using optparse or argparse, queries = set(something_else)
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in queries:
hlo.append(sh.cell(i, 8).value)
```
=== end of answer to question ===
Aside: the OP is using xlrd ... here are a couple of performance hints.
Doesn't matter too much with this simple example, but if you are going to do a lot of coordinate-based accessing of cell values, you can do better than that by using Sheet.cell\_value(rowx, colx) instead of Sheet.cell(rowx, colx).value which builds a Cell object on the fly:
```
queries = set(argv[1:])
hlo = []
for i in range(sh.nrows): # all columns have the same size
if sh.cell_value(i, 1) in queries:
hlo.append(sh.cell_value(i, 8))
```
or you could use a list comprehension along with the Sheet.col\_values(colx) method:
```
hlo = [
v8
for v1, v8 in zip(sh.col_values(1), sh.col_values(8))
if v1 in queries
]
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
```
outputList = [x for x in values if x in sys.argv[1:]]
```
Substitute the bits that are relevant for your (spreadsheet?) situation. This is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions). You can also investigate the [optparse](http://docs.python.org/library/optparse.html) module which has been in the standard library since 2.3.
|
[argparse](http://docs.python.org/dev/library/argparse.html) is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts.
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
I would recommend taking a look at Python's [optparse](http://docs.python.org/library/optparse.html) module. It's a nice helper to parse `sys.argv`.
|
```
# First thing is to get a set of your query strings.
queries = set(argv[1:])
# If using optparse or argparse, queries = set(something_else)
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in queries:
hlo.append(sh.cell(i, 8).value)
```
=== end of answer to question ===
Aside: the OP is using xlrd ... here are a couple of performance hints.
Doesn't matter too much with this simple example, but if you are going to do a lot of coordinate-based accessing of cell values, you can do better than that by using Sheet.cell\_value(rowx, colx) instead of Sheet.cell(rowx, colx).value which builds a Cell object on the fly:
```
queries = set(argv[1:])
hlo = []
for i in range(sh.nrows): # all columns have the same size
if sh.cell_value(i, 1) in queries:
hlo.append(sh.cell_value(i, 8))
```
or you could use a list comprehension along with the Sheet.col\_values(colx) method:
```
hlo = [
v8
for v1, v8 in zip(sh.col_values(1), sh.col_values(8))
if v1 in queries
]
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
You can iterate over `sys.argv[1:]`, e.g. via something like:
```
for grp in sys.argv[1:]:
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == grp:
hlo.append(sh.cell(i, 8).value)
```
|
```
# First thing is to get a set of your query strings.
queries = set(argv[1:])
# If using optparse or argparse, queries = set(something_else)
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in queries:
hlo.append(sh.cell(i, 8).value)
```
=== end of answer to question ===
Aside: the OP is using xlrd ... here are a couple of performance hints.
Doesn't matter too much with this simple example, but if you are going to do a lot of coordinate-based accessing of cell values, you can do better than that by using Sheet.cell\_value(rowx, colx) instead of Sheet.cell(rowx, colx).value which builds a Cell object on the fly:
```
queries = set(argv[1:])
hlo = []
for i in range(sh.nrows): # all columns have the same size
if sh.cell_value(i, 1) in queries:
hlo.append(sh.cell_value(i, 8))
```
or you could use a list comprehension along with the Sheet.col\_values(colx) method:
```
hlo = [
v8
for v1, v8 in zip(sh.col_values(1), sh.col_values(8))
if v1 in queries
]
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
I would recommend taking a look at Python's [optparse](http://docs.python.org/library/optparse.html) module. It's a nice helper to parse `sys.argv`.
|
[argparse](http://docs.python.org/dev/library/argparse.html) is another powerful, easy to use module that parses sys.argv for you. Very useful for creating command line scripts.
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
```
outputList = [x for x in values if x in sys.argv[1:]]
```
Substitute the bits that are relevant for your (spreadsheet?) situation. This is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions). You can also investigate the [optparse](http://docs.python.org/library/optparse.html) module which has been in the standard library since 2.3.
|
I believe this would work, and would avoid iterating over sys.argv:
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value in sys.argv[1:]:
hlo.append(sh.cell(i, 8).value)
```
|
1,643,643
|
I want to test against multiple command line arguments in a loop
```
> python Read_xls_files.py group1 group2 group3
```
Now this code tests only for the first one (group1).
```
hlo = []
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == sys.argv[1]:
hlo.append(sh.cell(i, 8).value)
```
How should I modify this so that I can test against one, two or all of these arguments? So, if group1 is in some sh.cell(i, 1), the list is appended, and if there is group1, group2, etc., hlo is appended.
|
2009/10/29
|
[
"https://Stackoverflow.com/questions/1643643",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/198894/"
] |
You can iterate over `sys.argv[1:]`, e.g. via something like:
```
for grp in sys.argv[1:]:
for i in range(len(sh.col_values(8))):
if sh.cell(i, 1).value == grp:
hlo.append(sh.cell(i, 8).value)
```
|
```
outputList = [x for x in values if x in sys.argv[1:]]
```
Substitute the bits that are relevant for your (spreadsheet?) situation. This is a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions). You can also investigate the [optparse](http://docs.python.org/library/optparse.html) module which has been in the standard library since 2.3.
|
2,832,646
|
My dir layout; I am in a.py:
```
my_Project
|----blog
|-----__init__.py
|-----a.py
|-----blog.py
```
When I do `from blog import something` in a.py, it shows an error:
```
from blog import BaseRequestHandler
ImportError: cannot import name BaseRequestHandler
```
I think it imports the blog folder, not blog.py.
So how do I import blog.py?
**updated**
When I use 'blog.blog', it shows this:
```
from blog.blog import BaseRequestHandler
ImportError: No module named blog
```
**updated2**
my sys.path is :
```
['D:\\zjm_code', 'D:\\Python25\\lib\\site-packages\\setuptools-0.6c11-py2.5.egg', 'D:\\Python25\\lib\\site-packages\\whoosh-0.3.18-py2.5.egg', 'C:\\WINDOWS\\system32\\python25.zip', 'D:\\Python25\\DLLs', 'D:\\Python25\\lib', 'D:\\Python25\\lib\\plat-win', 'D:\\Python25\\lib\\lib-tk', 'D:\\Python25', 'D:\\Python25\\lib\\site-packages', 'D:\\Python25\\lib\\site-packages\\PIL']
zjm_code
|-----a.py
|-----b.py
```
a.py is :
```
c="ccc"
```
b.py is :
```
from a import c
print c
```
and when I execute b.py, it shows this:
```
> "D:\Python25\pythonw.exe" "D:\zjm_code\b.py"
Traceback (most recent call last):
File "D:\zjm_code\b.py", line 2, in <module>
from a import c
ImportError: cannot import name c
```
|
2010/05/14
|
[
"https://Stackoverflow.com/questions/2832646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/234322/"
] |
When you are in `a.py`, `import blog` should import the local `blog.py` and nothing else. Quoting the [docs](http://docs.python.org/tutorial/modules.html#the-module-search-path):
>
> modules are searched in the list of directories given by the variable sys.path which is initialized from the directory containing the input script
>
>
>
So my guess is that somehow, the name `BaseRequestHandler` is not defined in the file `blog.py`.
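One quick way to check that (a generic debugging sketch, not from the original answer) is to print which file was imported and which names it defines:
```
import blog

print(blog.__file__)   # which file actually got imported
print(dir(blog))       # is BaseRequestHandler among the names it defines?
```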
|
what happens when you:
```
import blog
```
Try outputting your sys.path, in order to make sure that you have the right dir to call the module from.
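For example, a minimal check (nothing project-specific assumed):
```
import sys

for entry in sys.path:
    print(entry)   # the directory of the running script should come first
```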
|
16,168,836
|
Here's a python-code-snippet:
```
import re
VARS='Variables: "OUTPUTFOLDER=installers","SETUP_ORDER=Product 4,Product 4 Library","SUB_CONTENTS=Product 4 Library","SUB_CONTENT_SIZES=9364256","SUB_CONTENT_GROUPS=Product 4 Library","SUB_CONTENT_DESCRIPTIONS=","SUB_CONTENT_GROUP_DESCRIPTIONS=","SUB_DISCS=Product 4,Product Disc",SUB_FILENAMES='
comp = re.findall(r'\w+=".*?"', VARS)
for var in comp:
print var
```
This is the output currently:
```
SUB_CONTENT_DESCRIPTIONS=","
SUB_CONTENT_GROUP_DESCRIPTIONS=","
```
However I'd like the output to extract all elements so it looks like this:
```
"OUTPUTFOLDER=installers"
"SETUP_ORDER=Product 4, Product 4 Library"
"SUB_CONTENTS=Product 4"
"SUB_CONTENT_SIZES=9364256"
...
```
What is wrong with my regex-pattern?
|
2013/04/23
|
[
"https://Stackoverflow.com/questions/16168836",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/446835/"
] |
```
$(document).ready(function(){
$('#commentform').submit(function(){
var postname = $('h2.post-title').text();
ga('send', 'event', 'Engagement', 'Comment', postname, 5);
});
});
```
First of all, this code assigns the text of a `h2` tag with class `post-title` found in the `document`. A more reliable way to get the title of the post would be an id.
Secondly, it may not work, because the form is submitted before the Google Analytics code fires. So you should stop the default behaviour and submit the form after analytics finishes sending its data. (See: <https://developers.google.com/analytics/devguides/collection/analyticsjs/advanced#hitCallback>)
```
$( document ).ready( function() {
$( document ).on( 'submit', 'form#commentform', function( e ) {
var postname = $( '#post-title' ).text();
ga( 'send', {
'hitType': 'event',
'eventCategory': 'Engagement',
'eventAction': 'Comment',
'eventLabel': postname,
'eventValue': 5,
'hitCallback': function() {
//now you can submit the form
//$('#commentform').off('submit').trigger('submit');
$('#commentform').off('submit'); //unbind the event
$('#commentform')[0].submit(); //fire DOM element event, not jQuery event
}
});
return false;
});
});
```
**Edit:**
I just realised the code from `hitCallback` might not work. The revised version should call the event of the DOM element and in result - send the form.
**Edit2:**
Corrected the event binding in case the form doesn't exist when document.ready() is fired
|
Hard to tell without looking at an actual page, but likely the browser is redirecting to the form's submission before ga's network call is made. You'd need a way to wait for ga to finish, then finish submitting the form.
|
71,672,487
|
Can't import `Quartz` package.
I have installed it with this command: `pip install pyobjc-framework-Quartz`. I tried reinstalling python, and also tried `python -m pip install ...`. With `python2` or `sudo python3` everything works fine, but `python3` gives me this error message every time I try importing `Quartz`.
Python version - 3.10.4
Mac version - Big Sur 11.6.5
```
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import Quartz
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Quartz/__init__.py", line 6, in <module>
import AppKit
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/AppKit/__init__.py", line 10, in <module>
import Foundation
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/Foundation/__init__.py", line 9, in <module>
import CoreFoundation
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/CoreFoundation/__init__.py", line 9, in <module>
import objc
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/__init__.py", line 6, in <module>
from . import _objc
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/_objc.cpython-310-darwin.so, 2): Symbol not found: _FSPathMakeRef
Referenced from: /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/_objc.cpython-310-darwin.so
Expected in: flat namespace
in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/objc/_objc.cpython-310-darwin.so
```
|
2022/03/30
|
[
"https://Stackoverflow.com/questions/71672487",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16965639/"
] |
For investigation purposes, can you try :
```
cd /tmp
python3 -m venv venv
source venv/bin/activate
pip install pyobjc-framework-Quartz
python your-script.py
```
Can you try this to see if it works :
```
env -i /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 my_script.py
```
You may have files only accessible by `root`, try to change ownership:
```
sudo chown -R $(id -u):$(id -g) /Library/Frameworks/Python.framework/Versions/3.10
```
and run `env -i ...` again.
If you run :
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -c 'import sys;print(sys.path)'
sudo /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -c 'import sys;print(sys.path)'
```
Is there any difference between the two ?
Try the following to see if it improves:
```
sudo /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -m pip install pyobjc-framework-Quartz
```
Try this to see if there is anything unusual :
```
/Library/Frameworks/Python.framework/Versions/3.10/bin/python3 -X importtest -v -c 'import Quartz'
```
|
You might need to try:
```py
python3 -m pip install [...]
```
Hope this will help.
|
40,909,099
|
I'm new to image processing and I'm really having a hard time understanding things... so the idea is: how do you create a matrix from a binary image in Python?

to something like this:

It's not the same image, though the point is there.
Thank you for helping, I appreciate it. Cheers!
|
2016/12/01
|
[
"https://Stackoverflow.com/questions/40909099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6019385/"
] |
**Using cv2** -Read more [here](http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_gui/py_image_display/py_image_display.html)
```
import cv2
img = cv2.imread('path/to/img.jpg')
resized = cv2.resize(img, (128, 128), cv2.INTER_LINEAR)
print resized
```
**Using skimage** - Read more [here](http://scikit-image.org/docs/dev/api/skimage.html)
```
import skimage.io as skio
faces = skio.imread_collection('path/to/images/*.pgm',conserve_memory=True) # can load multiple images at once
```
**Using Scipy** - Read more [here](http://www.scipy-lectures.org/advanced/image_processing/)
```
from scipy import misc
pic = misc.imread('path/to/img.jpg')
print pic
```
**Plotting Images**
```
import matplotlib.pyplot as plt
plt.imshow(faces[0],cmap=plt.cm.gray_r,interpolation="nearest")
plt.show()
```
|
example I am currently working with.
====================================
```
"""
A set of utilities that are helpful for working with images. These are utilities
needed to actually apply the seam carving algorithm to images
"""
from PIL import Image


class Color:
    """
    A simple class representing an RGB value.
    """

    def __init__(self, r, g, b):
        self.r = r
        self.g = g
        self.b = b

    def __repr__(self):
        return f'Color({self.r}, {self.g}, {self.b})'

    def __str__(self):
        return repr(self)


def read_image_into_array(filename):
    """
    Read the given image into a 2D array of pixels. The result is an array,
    where each element represents a row. Each row is an array, where each
    element is a color.

    See: Color
    """
    img = Image.open(filename, 'r')
    w, h = img.size

    pixels = list(Color(*pixel) for pixel in img.getdata())
    return [pixels[n:(n + w)] for n in range(0, w * h, w)]


def write_array_into_image(pixels, filename):
    """
    Write the given 2D array of pixels into an image with the given filename.
    The input pixels are represented as an array, where each element is a row.
    Each row is an array, where each element is a color.

    See: Color
    """
    h = len(pixels)
    w = len(pixels[0])

    img = Image.new('RGB', (w, h))
    output_pixels = img.load()

    for y, row in enumerate(pixels):
        for x, color in enumerate(row):
            output_pixels[x, y] = (color.r, color.g, color.b)

    img.save(filename)
```
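A quick usage sketch for the two helpers above (the filenames here are placeholders I made up, not from the original post):
```
pixels = read_image_into_array('input.jpg')
print(len(pixels), len(pixels[0]))           # height, width
write_array_into_image(pixels, 'copy.jpg')   # round-trips the image unchanged
```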
|
67,740,573
|
I have a huge file of around 5-10 GBs which has syntax as shown below.
"some text" condition1
"some text" condition2
"some text" condition3
"some text" condition1
"some text" condition4
& so on
The intent is to write fast & efficient code that creates separate files to store this text info based on the conditions. All lines with condition1 will go to a file named "condition1.txt".
The limitation is that we do not know the unique conditions in advance.
How can I dynamically generate new file handlers on the fly while reading the file line by line, and keep track of these handlers using the condition as the key and the handler as the value in a Python dictionary? I can use other data structures as well. Need suggestions!
|
2021/05/28
|
[
"https://Stackoverflow.com/questions/67740573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4991138/"
] |
The first thing that comes to mind for me would be to use the append feature on Python file handlers. You could do something like this for each line of text:
```py
def writecond(text, cond):
    fname = cond + '.txt'
    with open(fname, 'a') as file:
        file.write(text)
```
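For example (the strings below are placeholders, not the question's actual data):
```py
writecond('"some text"\n', 'condition1')  # appends the line to condition1.txt
writecond('"some text"\n', 'condition2')  # appends the line to condition2.txt
```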
Another thing you could do is have a `dict` which maps your condition text to an open file handler (although I think there might be a hard limit on the number of handlers you can have open at once on some systems), but just be sure to close all of them before your function exits!
EDIT:
If you want the dictionary case, here's the code for that:
```py
fh_assign = {}

def writeline(text, condition):
    if condition not in fh_assign.keys():
        fh = open(f'{condition}.txt', 'w')
        fh.write(text)
        fh_assign[condition] = fh
    else:
        fh_assign[condition].write(text)
```
Once you're done with the calls to `writeline`, just iterate through the dictionary as follows and close all the handlers.
```py
for fh in fh_assign.values():
    fh.close()
```
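For the multi-gigabyte file in the question, a variant of the same dictionary-of-handlers idea (a hedged sketch of mine; the filename is made up) uses `contextlib.ExitStack` so every opened file is guaranteed to be closed, even if parsing fails partway through:
```py
import contextlib

def split_by_condition(path):
    handlers = {}
    with contextlib.ExitStack() as stack, open(path) as src:
        for line in src:
            # split '"some text" condition1' into the text and the trailing condition
            text, _, condition = line.rstrip('\n').rpartition(' ')
            fh = handlers.get(condition)
            if fh is None:
                fh = stack.enter_context(open(f'{condition}.txt', 'w'))
                handlers[condition] = fh
            fh.write(text + '\n')

split_by_condition('huge_log.txt')  # placeholder filename
```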
|
I figured out one option for opening the file objects dynamically and keeping track of them.
```
file_handler = {}
with open(file) as f:
    for line in f:
        if line.split()[1] not in file_handler.keys():
            file_handler[line.split()[1]] = open(line.split()[1], "w")
            file_handler[line.split()[1]].write(line.split()[0])
        else:
            file_handler[line.split()[1]].write(line.split()[0])
f.close()  # redundant: the with-block has already closed f
for key in file_handler.keys():
    file_handler[key].close()
```
|
42,349,982
|
I am trying to read a JSON file in Python, and it reads successfully; however, some top-level values appear to be skipped. I am trying to debug the reason. Here is the code.
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
print key
print data['outputs'][key]['feat_left']
```
**EDIT**
Here is a snapshot of the file. I want to read the `key` and `feat_left` for each entry in `outputs`.
```
{
"outputs": {
"/home/113267806.jpg": {
"feat_left": [
2.369331121444702,
-1.1544183492660522
],
"feat_right": [
2.2432730197906494,
-0.896904468536377
]
},
"/home/115061965.jpg": {
"feat_left": [
1.8996189832687378,
-1.3713303804397583
],
"feat_right": [
1.908974051475525,
-1.4422794580459595
]
},
"/home/119306609.jpg": {
"feat_left": [
-0.7765399217605591,
-1.690917730331421
],
"feat_right": [
-1.1964678764343262,
-1.9359161853790283
]
}
}
}
```
P.S: Thanks to Rahul K P for the code
|
2017/02/20
|
[
"https://Stackoverflow.com/questions/42349982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396273/"
] |
No top-level values are skipped. There are 45875 items in your `data['outputs']` object. Try the following code:
```
len(data['outputs'].items())
```
And there are exactly 45875 items in your JSON file. Just note that a JSON object is an unordered collection in Python, like a `dict`.
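If the insertion order of the file actually matters to you (an aside of mine, not part of the original answer), `json.load` accepts an `object_pairs_hook` that preserves it:
```
import json
from collections import OrderedDict

# Build OrderedDicts instead of plain dicts so the file's order is kept
data = json.load(open('pre.txt'), object_pairs_hook=OrderedDict)
```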
|
If you just want to print the content of the file by using a for-loop, you can try something like this:
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
    print key
    print val['feat_left'] # prints the array stored under "feat_left", if the JSON is consistent
```
A more robust solution could look like this:
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
    print key
    for feat_key, feat_val in val.items():
        if feat_key == 'feat_left':
            print feat_val
```
Code is untested, give it a try.
|
42,349,982
|
I am trying to read a JSON file in Python, and it reads successfully; however, some top-level values appear to be skipped. I am trying to debug the reason. Here is the code.
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
print key
print data['outputs'][key]['feat_left']
```
**EDIT**
Here is a snapshot of the file. I want to read the `key` and `feat_left` for each entry in `outputs`.
```
{
"outputs": {
"/home/113267806.jpg": {
"feat_left": [
2.369331121444702,
-1.1544183492660522
],
"feat_right": [
2.2432730197906494,
-0.896904468536377
]
},
"/home/115061965.jpg": {
"feat_left": [
1.8996189832687378,
-1.3713303804397583
],
"feat_right": [
1.908974051475525,
-1.4422794580459595
]
},
"/home/119306609.jpg": {
"feat_left": [
-0.7765399217605591,
-1.690917730331421
],
"feat_right": [
-1.1964678764343262,
-1.9359161853790283
]
}
}
}
```
P.S: Thanks to Rahul K P for the code
|
2017/02/20
|
[
"https://Stackoverflow.com/questions/42349982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396273/"
] |
No top-level values are skipped. There are 45875 items in your `data['outputs']` object. Try the following code:
```
len(data['outputs'].items())
```
And there are exactly 45875 items in your JSON file. Just note that a JSON object is an unordered collection in Python, like a `dict`.
|
I think you want:
```
for key, val in data['outputs'].items():
    if 'feat_left' in val:
        print key, data['outputs'][key]['feat_left']
```
|
42,349,982
|
I am trying to read a JSON file in Python, and it reads successfully; however, some top-level values appear to be skipped. I am trying to debug the reason. Here is the code.
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
print key
print data['outputs'][key]['feat_left']
```
**EDIT**
Here is a snapshot of the file. I want to read the `key` and `feat_left` for each entry in `outputs`.
```
{
"outputs": {
"/home/113267806.jpg": {
"feat_left": [
2.369331121444702,
-1.1544183492660522
],
"feat_right": [
2.2432730197906494,
-0.896904468536377
]
},
"/home/115061965.jpg": {
"feat_left": [
1.8996189832687378,
-1.3713303804397583
],
"feat_right": [
1.908974051475525,
-1.4422794580459595
]
},
"/home/119306609.jpg": {
"feat_left": [
-0.7765399217605591,
-1.690917730331421
],
"feat_right": [
-1.1964678764343262,
-1.9359161853790283
]
}
}
}
```
P.S: Thanks to Rahul K P for the code
|
2017/02/20
|
[
"https://Stackoverflow.com/questions/42349982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396273/"
] |
No top-level values are skipped. There are 45875 items in your `data['outputs']` object. Try the following code:
```
len(data['outputs'].items())
```
And there are exactly 45875 items in your JSON file. Just note that a JSON object is an unordered collection in Python, like a `dict`.
|
Try this:
```
for key, val in data['outputs'].items():
    print key, val['feat_left']
    # output: /home/119306609.jpg [-0.7765399217605591, -1.690917730331421]
```
This gets every key and the element stored under `feat_left`.
Then you can display it on the page however you like.
|
42,349,982
|
I am trying to read a JSON file in Python, and it reads successfully; however, some top-level values appear to be skipped. I am trying to debug the reason. Here is the code.
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
print key
print data['outputs'][key]['feat_left']
```
**EDIT**
Here is a snapshot of the file. I want to read the `key` and `feat_left` for each entry in `outputs`.
```
{
"outputs": {
"/home/113267806.jpg": {
"feat_left": [
2.369331121444702,
-1.1544183492660522
],
"feat_right": [
2.2432730197906494,
-0.896904468536377
]
},
"/home/115061965.jpg": {
"feat_left": [
1.8996189832687378,
-1.3713303804397583
],
"feat_right": [
1.908974051475525,
-1.4422794580459595
]
},
"/home/119306609.jpg": {
"feat_left": [
-0.7765399217605591,
-1.690917730331421
],
"feat_right": [
-1.1964678764343262,
-1.9359161853790283
]
}
}
}
```
P.S: Thanks to Rahul K P for the code
|
2017/02/20
|
[
"https://Stackoverflow.com/questions/42349982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396273/"
] |
I think you want:
```
for key, val in data['outputs'].items():
    if 'feat_left' in val:
        print key, data['outputs'][key]['feat_left']
```
|
If you just want to print the content of the file by using a for-loop, you can try something like this:
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
    print key
    print val['feat_left'] # prints the array stored under "feat_left", if the JSON is consistent
```
A more robust solution could look like this:
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
    print key
    for feat_key, feat_val in val.items():
        if feat_key == 'feat_left':
            print feat_val
```
Code is untested, give it a try.
|
42,349,982
|
I am trying to read a JSON file in Python, and it reads successfully; however, some top-level values appear to be skipped. I am trying to debug the reason. Here is the code.
```
data = json.load(open('pre.txt'))
for key,val in data['outputs'].items():
print key
print data['outputs'][key]['feat_left']
```
**EDIT**
Here is a snapshot of the file. I want to read the `key` and `feat_left` for each entry in `outputs`.
```
{
"outputs": {
"/home/113267806.jpg": {
"feat_left": [
2.369331121444702,
-1.1544183492660522
],
"feat_right": [
2.2432730197906494,
-0.896904468536377
]
},
"/home/115061965.jpg": {
"feat_left": [
1.8996189832687378,
-1.3713303804397583
],
"feat_right": [
1.908974051475525,
-1.4422794580459595
]
},
"/home/119306609.jpg": {
"feat_left": [
-0.7765399217605591,
-1.690917730331421
],
"feat_right": [
-1.1964678764343262,
-1.9359161853790283
]
}
}
}
```
P.S: Thanks to Rahul K P for the code
|
2017/02/20
|
[
"https://Stackoverflow.com/questions/42349982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7396273/"
] |
I think you want:
```
for key, val in data['outputs'].items():
    if 'feat_left' in val:
        print key, data['outputs'][key]['feat_left']
```
|
Try this:
```
for key, val in data['outputs'].items():
    print key, val['feat_left']
    # output: /home/119306609.jpg [-0.7765399217605591, -1.690917730331421]
```
This gets every key and the element stored under `feat_left`.
Then you can display it on the page however you like.
|
52,318,106
|
HTML: I have a 'sign-up' form in a modal (index.html)
JS: The form data is posted to a python flask function: /signup\_user
```
$(function () {
    $('#signupButton').click(function () {
        $.ajax({
            url: '/signup_user',
            method: 'POST',
            data: $('#signupForm').serialize()
        })
        .done(function (data) {
            console.log('success callback 1', data)
        })
        .fail(function (xhr) {
            console.log('error callback 1', xhr);
        })
    })
});
```
Python/Flask:
```
@app.route('/signup_user', methods=['POST'])
def signup_user():
    # Request data from form and send to database
    # Check that the username isn't already taken
    # if the username is not already taken
    if new_user:
        users.insert_one(new_user)
        message = "Success"
        return message
    # else if the username is taken, send a message to the user viewable in the modal
    else:
        message = "Failure"
        return message
    return redirect(url_for('index'))  # note: unreachable, both branches above return first
```
I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username.
Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure.
How do I get the error message back to display in the form/modal?
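For reference, one common pattern (a hedged sketch of mine; the answers below do not address this directly) is to return an HTTP error status from Flask so that jQuery's `.fail` callback fires, and to write the message into the modal from there, while preventing the default form submission (e.g. `event.preventDefault()` or a button with `type="button"`) so the page does not navigate away:
```
from flask import jsonify

@app.route('/signup_user', methods=['POST'])
def signup_user():
    if new_user:                          # placeholder check from the question
        users.insert_one(new_user)
        return jsonify(message='Success')
    # 409 Conflict makes the jQuery .fail handler run instead of .done
    return jsonify(message='Failure'), 409
```
In the `.fail` callback, `xhr.responseJSON.message` would then hold the text to display inside the modal.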
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52318106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5632508/"
] |
I would do something like this:
```
student_marks.group_by { |k, v| v }.map { |k, v| [k, v.map(&:first)] }.to_h
#=> { 50 => ["Alex", "Matt"], 54 => ["Beth"]}
```
|
Another way could be
```
student_marks.each.with_object(Hash.new([])){ |(k,v), h| h[v] += [k] }
#=> {50=>["Alex", "Matt"], 54=>["Beth"]}
```
|
52,318,106
|
HTML: I have a 'sign-up' form in a modal (index.html)
JS: The form data is posted to a python flask function: /signup\_user
```
$(function () {
    $('#signupButton').click(function () {
        $.ajax({
            url: '/signup_user',
            method: 'POST',
            data: $('#signupForm').serialize()
        })
        .done(function (data) {
            console.log('success callback 1', data)
        })
        .fail(function (xhr) {
            console.log('error callback 1', xhr);
        })
    })
});
```
Python/Flask:
```
@app.route('/signup_user', methods=['POST'])
def signup_user():
    # Request data from form and send to database
    # Check that the username isn't already taken
    # if the username is not already taken
    if new_user:
        users.insert_one(new_user)
        message = "Success"
        return message
    # else if the username is taken, send a message to the user viewable in the modal
    else:
        message = "Failure"
        return message
    return redirect(url_for('index'))  # note: unreachable, both branches above return first
```
I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username.
Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure.
How do I get the error message back to display in the form/modal?
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52318106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5632508/"
] |
I would do something like this:
```
student_marks.group_by { |k, v| v }.map { |k, v| [k, v.map(&:first)] }.to_h
#=> { 50 => ["Alex", "Matt"], 54 => ["Beth"]}
```
|
```
student_marks.group_by(&:last).transform_values { |v| v.map(&:first) }
#=> {50=>["Alex", "Matt"], 54=>["Beth"]}
```
[Hash#transform\_values](https://ruby-doc.org/core-2.4.0/Hash.html) made its debut in Ruby MRI v2.4.0.
|
52,318,106
|
HTML: I have a 'sign-up' form in a modal (index.html)
JS: The form data is posted to a python flask function: /signup\_user
```
$(function () {
    $('#signupButton').click(function () {
        $.ajax({
            url: '/signup_user',
            method: 'POST',
            data: $('#signupForm').serialize()
        })
        .done(function (data) {
            console.log('success callback 1', data)
        })
        .fail(function (xhr) {
            console.log('error callback 1', xhr);
        })
    })
});
```
Python/Flask:
```
@app.route('/signup_user', methods=['POST'])
def signup_user():
    # Request data from form and send to database
    # Check that the username isn't already taken
    # if the username is not already taken
    if new_user:
        users.insert_one(new_user)
        message = "Success"
        return message
    # else if the username is taken, send a message to the user viewable in the modal
    else:
        message = "Failure"
        return message
    return redirect(url_for('index'))  # note: unreachable, both branches above return first
```
I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username.
Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure.
How do I get the error message back to display in the form/modal?
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52318106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5632508/"
] |
I would do something like this:
```
student_marks.group_by { |k, v| v }.map { |k, v| [k, v.map(&:first)] }.to_h
#=> { 50 => ["Alex", "Matt"], 54 => ["Beth"]}
```
|
Another easy way
```
student_marks.keys.group_by{ |v| student_marks[v] }
{50=>["Alex", "Matt"], 54=>["Beth"]}
```
|
52,318,106
|
HTML: I have a 'sign-up' form in a modal (index.html)
JS: The form data is posted to a python flask function: /signup\_user
```
$(function () {
    $('#signupButton').click(function () {
        $.ajax({
            url: '/signup_user',
            method: 'POST',
            data: $('#signupForm').serialize()
        })
        .done(function (data) {
            console.log('success callback 1', data)
        })
        .fail(function (xhr) {
            console.log('error callback 1', xhr);
        })
    })
});
```
Python/Flask:
```
@app.route('/signup_user', methods=['POST'])
def signup_user():
    # Request data from form and send to database
    # Check that the username isn't already taken
    # if the username is not already taken
    if new_user:
        users.insert_one(new_user)
        message = "Success"
        return message
    # else if the username is taken, send a message to the user viewable in the modal
    else:
        message = "Failure"
        return message
    return redirect(url_for('index'))  # note: unreachable, both branches above return first
```
I cannot figure out how to get the flask function to return the "Failure" message to the form in the modal so that the user can change the username.
Right now, as soon as I click the submit button the form/modal disappears and the 'Failure' message refreshes the entire page to a blank page with the word Failure.
How do I get the error message back to display in the form/modal?
|
2018/09/13
|
[
"https://Stackoverflow.com/questions/52318106",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5632508/"
] |
```
student_marks.group_by(&:last).transform_values { |v| v.map(&:first) }
#=> {50=>["Alex", "Matt"], 54=>["Beth"]}
```
[Hash#transform\_values](https://ruby-doc.org/core-2.4.0/Hash.html) made its debut in Ruby MRI v2.4.0.
|
Another way could be
```
student_marks.each.with_object(Hash.new([])){ |(k,v), h| h[v] += [k] }
#=> {50=>["Alex", "Matt"], 54=>["Beth"]}
```
|