| qid (int64, 46k to 74.7M) | question (string, 54 to 37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 29 to 22k chars) | response_k (string, 26 to 13.4k chars) | __index_level_0__ (int64, 0 to 17.8k) |
|---|---|---|---|---|---|---|
32,043,990
|
I'm trying to get the SPF records of domains that are read from a file. When I try to get the SPF contents and write them to a file, the code only gives me the result for the last domain in the input file.
```
Example `Input_Domains.txt`
blah.com
box.com
marketo.com
```
The output I get is only for `marketo.com`:
```
#!/usr/bin/python
import sys
import socket
import dns.resolver
import re
def getspf(domain):
    answers = dns.resolver.query(domain, 'TXT')
    for rdata in answers:
        for txt_string in rdata.strings:
            if txt_string.startswith('v=spf1'):
                return txt_string.replace('v=spf1', '')

with open('Input_Domains.txt', 'r') as f:
    for line in f:
        full_spf = getspf(line.strip())

my_file = open("out_spf.txt", "w")
my_file.write(full_spf)
my_file.close()
```
How can I write the SPF contents of all the domains to the file? Any suggestions, please?
|
2015/08/17
|
[
"https://Stackoverflow.com/questions/32043990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5093018/"
] |
It is because you are rewriting `full_spf` every time through the loop, so only the last value is stored:
```
with open('Input_Domains.txt', 'r') as f:
    for line in f:
        full_spf = getspf(line.strip())
```
**Modification:**
```
with open('Input_Domains.txt', 'r') as f:
    full_spf = ""
    for line in f:
        full_spf += getspf(line.strip()) + "\n"
```
|
Try using [a generator expression](https://wiki.python.org/moin/Generators) inside your `with` block, instead of a regular `for` loop:
```
full_spf = '\n'.join(getspf(line.strip()) for line in f)
```
This will grab all the lines at once, do your custom `getspf` operations to them, and then join them with newlines between.
The advantage to doing it this way is that conceptually you're doing a single transformation over the data. There's nothing inherently "loopy" about taking a block of data and processing it line-by-line, since it could be done in any order, all lines are independent. By doing it with a generator expression you are expressing your algorithm as a single transformation-and-assignment operation.
Edit: Small oversight, since `join` needs a list of strings, you'll have to return at least an empty string in every case from your `getspf` function, rather than defaulting to `None` when you don't return anything.
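To show the shape of that adjustment without any network access, here is a minimal sketch in Python 3 syntax where the DNS lookup is replaced by a hypothetical lookup table (`records`, the sample domains, and the TXT strings are all invented for illustration):

```python
def getspf(domain, records):
    # Return the SPF payload for a domain, or '' (never None),
    # so every result is safe to pass to str.join().
    for txt in records.get(domain, []):
        if txt.startswith('v=spf1'):
            return txt.replace('v=spf1', '')
    return ''

# stand-in for real DNS TXT answers
records = {
    'blah.com': ['v=spf1 include:_spf.example.com ~all'],
    'box.com': ['some-other-txt-record'],
}

lines = ['blah.com', 'box.com', 'marketo.com']
full_spf = '\n'.join(getspf(line.strip(), records) for line in lines)
print(full_spf)
```

Domains with no SPF record simply contribute an empty line instead of crashing the `join`.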
| 12,974
|
25,927,804
|
Hey guys, for my school project I need to scrape slideshare.net for page views using Python. However, I can't scrape the page-view count of a user name (which the professor specifically told us to scrape). For example, if I go to slideshare.net/Username, there is a page-view counter at the bottom, and in the page source the code is
```
<span class="noWrap"> xxxx views </span>
```
when I plug this into python as
```
<span class="noWrap"> (.+?) </span>
```
Nothing happens; all I get is `[]` in the output window.
HERE IS FULL CODE -
===================
```
import urllib
import re

symbolfile = open("viewpage.txt")
symbolslist = symbolfile.read()
for symbol in symbolslist:
    print symbol

htmlfile = urllib.urlopen("http://www.slideshare.net/xxxxxxx")
htmltext = htmlfile.read()

regex = ' <span class="noWrap">(.+?)</span>'
regex_a = '<title>(.+?)</title>'
pattern = re.compile(regex)
pattern_a = re.compile(regex_a)

view = re.findall(pattern, htmltext)
view_a = re.findall(pattern_a, htmltext)
print (view, view_a)
```
|
2014/09/19
|
[
"https://Stackoverflow.com/questions/25927804",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4057203/"
] |
You have a space at the start of your regex string, so it will only match if there's (at least) one space before the `<span`...
So instead of
`regex = ' <span class="noWrap">(.+?)</span>'`
try
`regex = '<span class="noWrap">(.+?)</span>'`
or even better
`regex = r'<span class="noWrap">\s*(.+?)\s*</span>'`
Raw strings like `r'stuff'` are preferred for regex use, so you don't have to escape too much stuff inside the regex string.
The `\s` patterns will consume the spaces, so you won't need to use `strip()` on the data you capture with `findall()`.
I should also mention that `pattern.findall(text)` is a bit nicer syntax than `re.findall(pattern, text)`.
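To illustrate with an invented fragment resembling the page source (the number is made up; Python 3 syntax here):

```python
import re

# invented fragment resembling the SlideShare page source
html = '<span class="noWrap">\n  1,234 views\n</span>'

# \s* absorbs the surrounding whitespace, including the newlines
pattern = re.compile(r'<span class="noWrap">\s*(.+?)\s*</span>')
print(pattern.findall(html))   # ['1,234 views']
```

The capture comes out already trimmed, with no `strip()` needed afterwards.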
|
Although this is not technically an answer, you will need to change your regular expression. I suggest you look at the Python regex chapters.
What I will tell you is that your line
```
regex = ' <span class="noWrap">(.+?)</span>'
```
will not match what you are after, based on the output of the webpage: there are carriage returns in the HTML, and your regex will not match across them, hence the empty list when you run your script.
Or you could remove the carriage returns before you run your regex with
```
htmltext = htmltext.replace("\n","")
```
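A small demonstration of the failure and the fix (the snippet is invented; Python 3 syntax):

```python
import re

# invented snippet: the view count sits on its own line
html = '<span class="noWrap">\n 42 views \n</span>'
regex = '<span class="noWrap">(.+?)</span>'

# '.' does not match '\n' by default, so nothing is found:
print(re.findall(regex, html))    # []

# removing the newlines first makes the same regex work:
flat = html.replace("\n", "")
print(re.findall(regex, flat))    # [' 42 views ']
```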
| 12,975
|
7,621,897
|
I was wondering how to implement a global logger that could be used everywhere with your own settings:
I currently have a custom logger class:
```
class customLogger(logging.Logger):
    ...
```
The class is in a separate file with some formatters and other stuff.
The logger works perfectly on its own.
I import this module in my main python file and create an object like this:
```
self.log = logModule.customLogger(arguments)
```
But obviously, I cannot access this object from other parts of my code.
Am I using the wrong approach? Is there a better way to do this?
|
2011/10/01
|
[
"https://Stackoverflow.com/questions/7621897",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/815382/"
] |
Create an instance of `customLogger` in your log module and use it as a singleton - just use the imported instance, rather than the class.
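A minimal sketch of that singleton-style pattern (the module name `logmodule` and logger name `'myapp'` are illustrative, not from the question):

```python
import logging

# --- logmodule.py (illustrative name): configure once, at import time ---
def _build_logger():
    logger = logging.getLogger('myapp')
    logger.setLevel(logging.DEBUG)
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(name)s %(levelname)s %(message)s'))
    logger.addHandler(handler)
    return logger

log = _build_logger()

# --- any other module ---
# from logmodule import log   # every importer gets this same instance
log.info('same logger everywhere')
```

Because `logging.getLogger` caches loggers by name, every import sees the same configured object.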
|
You can just pass `logging.getLogger` a string with a common sub-string before the first period. The parts of the string separated by the period (`.`) can be used for different classes / modules / files / etc., like so (see specifically the `logger = logging.getLogger(loggerName)` part):
```
def getLogger(name, logdir=LOGDIR_DEFAULT, level=logging.DEBUG, logformat=FORMAT):
    base = os.path.basename(__file__)
    loggerName = "%s.%s" % (base, name)
    logFileName = os.path.join(logdir, "%s.log" % loggerName)
    logger = logging.getLogger(loggerName)
    logger.setLevel(level)
    i = 0
    while os.path.exists(logFileName) and not os.access(logFileName, os.R_OK | os.W_OK):
        i += 1
        logFileName = "%s.%s.log" % (logFileName.replace(".log", ""), str(i).zfill((len(str(i)) + 1)))
    try:
        #fh = logging.FileHandler(logFileName)
        fh = RotatingFileHandler(filename=logFileName, mode="a", maxBytes=1310720, backupCount=50)
    except IOError, exc:
        errOut = "Unable to create/open log file \"%s\"." % logFileName
        if exc.errno is 13:  # Permission denied exception
            errOut = "ERROR ** Permission Denied ** - %s" % errOut
        elif exc.errno is 2:  # No such directory
            errOut = "ERROR ** No such directory \"%s\"** - %s" % (os.path.split(logFileName)[0], errOut)
        elif exc.errno is 24:  # Too many open files
            errOut = "ERROR ** Too many open files ** - Check open file descriptors in /proc/<PID>/fd/ (PID: %s)" % os.getpid()
        else:
            errOut = "Unhandled Exception ** %s ** - %s" % (str(exc), errOut)
        raise LogException(errOut)
    else:
        formatter = logging.Formatter(logformat)
        fh.setLevel(level)
        fh.setFormatter(formatter)
        logger.addHandler(fh)
    return logger

class MainThread:
    def __init__(self, cfgdefaults, configdir, pidfile, logdir, test=False):
        self.logdir = logdir
        logLevel = logging.DEBUG
        logPrefix = "MainThread_TEST" if self.test else "MainThread"
        try:
            self.logger = getLogger(logPrefix, self.logdir, logLevel, FORMAT)
        except LogException, exc:
            sys.stderr.write("%s\n" % exc)
            sys.stderr.flush()
            os._exit(0)
        else:
            self.logger.debug("-------------------- MainThread created. Starting __init__() --------------------")

    def run(self):
        self.logger.debug("Initializing ReportThreads..")
        for (group, cfg) in self.config.items():
            self.logger.debug(" ------------------------------ GROUP '%s' CONFIG ------------------------------ " % group)
            for k2, v2 in cfg.items():
                self.logger.debug("%s <==> %s: %s" % (group, k2, v2))
            try:
                rt = ReportThread(self, group, cfg, self.logdir, self.test)
            except LogException, exc:
                sys.stderr.write("%s\n" % exc)
                sys.stderr.flush()
                self.logger.exception("Exception when creating ReportThread (%s)" % group)
                logging.shutdown()
                os._exit(1)
            else:
                self.threads.append(rt)
        self.logger.debug("Threads initialized.. \"%s\"" % ", ".join([t.name for t in self.threads]))
        for t in self.threads:
            t.Start()
        if not self.test:
            self.loop()

class ReportThread:
    def __init__(self, mainThread, name, config, logdir, test):
        self.mainThread = mainThread
        self.name = name
        logLevel = logging.DEBUG
        self.logger = getLogger("MainThread%s.ReportThread_%s" % ("_TEST" if self.test else "", self.name), logdir, logLevel, FORMAT)
        self.logger.info("init database...")
        self.initDB()
        # etc....

if __name__ == "__main__":
    # .....
    MainThread(cfgdefaults=options.cfgdefaults, configdir=options.configdir, pidfile=options.pidfile, logdir=options.logdir, test=options.test)
```
| 12,976
|
69,410,508
|
I wanted to save all the text which I enter in the listbox (`my_list` is the listbox here).
But when I save my file, I get the output in the form of a tuple, as shown below:
```
("Some","Random","Values")
```
Below is the python code. I have added comments to it.
The `add_item` function adds the entry-box data to the list.
The `saveFile` function is supposed to save all the data entered in the entry box (each entry must be on a new line).
```py
from tkinter import *
from tkinter.font import Font
from tkinter import filedialog
root = Tk()
root.title('TODO List!')
root.geometry("500x500")
name = StringVar()
############### Fonts ################
my_font = Font(
    family="Brush Script MT",
    size=30,
    weight="bold")
################# Frame #################
my_frame = Frame(root)
my_frame.pack(pady=10)
################# List Box #############
my_list = Listbox(my_frame,
                  font=my_font,
                  width=25,
                  height=5,
                  bg="SystemButtonFace",
                  bd=0,
                  fg="#464646",
                  highlightthickness=0,
                  selectbackground="grey",
                  activestyle="none")
my_list.pack(side=LEFT, fill=BOTH)
############### Dummy List ##################
#stuff = ["Do daily Checkin","Do Event checkin","Complete Daily Task","Complete Weekly Task","Take a break"]
############# Add dummmy list to list box ##############
#for item in stuff:
# my_list.insert(END, item)
################# Ceate Scrollbar ###########################
my_scrollbar= Scrollbar(my_frame)
my_scrollbar.pack(side=RIGHT, fill=BOTH)
#################### Add Scrollbar ######################
my_list.config(yscrollcommand=my_scrollbar.set)
my_scrollbar.config(command=my_list.yview)
################### ADD item entry box#################
my_entry = Entry(root, font=("Helvetica", 24),width=24, textvariable=name)
my_entry.pack(pady=20)
######################## Crete button frame ##########
button_frame=Frame(root)
button_frame.pack(pady=20)
##################### Funnctions ###################
def add_item():
    my_list.insert(END, my_entry.get())
    name1 = name.get()
    my_entry.delete(0, END)

def saveFile():
    file = filedialog.asksaveasfile(initialdir="C:\\Users\\Deepu John\\OneDrive\\Deepu 2020\\Projects\\rough",
                                    defaultextension='.txt',
                                    filetypes=[
                                        ("Text file", ".txt"),
                                        ("HTML file", ".html"),
                                        ("All files", ".*"),
                                    ])
    if file is None:
        return
    #fob = open(file,'w')
    filetext = str(my_list.get('1', 'end'))
    file.write(filetext)
    file.close()

def delete_list():
    my_list.delete(0, END)
################# Add Buttons ################
add_button = Button(button_frame, text="Add Item",command=add_item)
save_button = Button(button_frame, text="Save",width=8,command=saveFile)
add_button.grid(row=0,column=1, padx=20)
save_button.grid(row=0,column=2,padx=5)
root.mainloop()
```
I want each value to be in a new line. Like this:
```
Some
Random
Values
```
How do I do this?
|
2021/10/01
|
[
"https://Stackoverflow.com/questions/69410508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16781901/"
] |
The main part is that `listbox.get(first, last)` returns a tuple, so you can use the `.join` string method to create a string where each item in the tuple is separated by the given string:
```py
'\n'.join(listbox.get('0', 'end'))
```
Complete example:
```py
from tkinter import Tk, Entry, Listbox, Button

def add_item(event=None):
    item = entry.get() or None
    entry.delete('0', 'end')
    listbox.insert('end', item)

def save_listbox():
    """the main part:
    as `listbox.get(first, last)` returns a tuple
    one can simply use the `.join` method
    to create a string where each item is separated
    by the given string"""
    data = '\n'.join(listbox.get('0', 'end'))
    with open('my_list.txt', 'w') as file:
        file.write(data)

root = Tk()

entry = Entry(root)
entry.pack(padx=10, pady=10)
entry.bind('<Return>', add_item)

listbox = Listbox(root)
listbox.pack(padx=10, pady=10)

button = Button(root, text='Save', command=save_listbox)
button.pack(padx=10, pady=10)

root.mainloop()
```
Also:
I also strongly advise against using a wildcard (`*`) when importing. Either import exactly what you need, e.g. `from module import Class1, func_1, var_2`, or import the whole module (`import module`), optionally under an alias (`import module as md`). The point is: don't import everything unless you actually know what you are doing; name clashes are the issue.
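A concrete example of such a name clash, using only the standard library (Python 3 here): `os` exports its own low-level `open`, which silently shadows the builtin after a star import.

```python
import builtins
import os

print(open is builtins.open)   # True: the builtin file-opening function

from os import *               # brings in os.open, os.remove, and many more

print(open is builtins.open)   # False: 'open' has been shadowed
print(open is os.open)         # True: it is now the low-level os.open
```

Calling `open('file.txt')` after the star import fails, because `os.open` expects different arguments.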
|
This is a very simple task.
Where you are saving the file, write this code:
```
newlist = []
for item in mylist.get(0, END):
    newlist.append(item)

f = open(file, 'w')
for line in newlist:
    f.write(line + '\n')
f.close()
```
To know more about file saving in tkinter or to make it much easier, you can check my txtopp repository on GitHub. It is a module I made:
<https://github.com/armaanPYTHON/txt-file-management-module>
| 12,986
|
60,838,550
|
I've written a script in Python using Selenium to log in to a website and then go on to the target page in order to upload a pdf file. The script can log in successfully but throws an `element not interactable` error when it comes to uploading the pdf file. This is the [landing_page](https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/) on which the script first clicks on the button right next to `Your Profile` and uses `SIM.iqbal_123` and `SShift_123` respectively to log in to that site, and then uses this [target_link](https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/2/) to upload that file. To upload the file it is necessary to click on the `select` button first and then the `cv` button. However, the script throws the following error when it is supposed to click on the `cv` button in order to upload the `pdf` file.
I've tried with:
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
landing_page = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/'
target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/2/'
driver = webdriver.Chrome()
wait = WebDriverWait(driver,30)
driver.get(landing_page)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".profileContainer > button.trigger"))).click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='alias']"))).send_keys("SIM.iqbal_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='password']"))).send_keys("SShift_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button.loginBtn"))).click()
driver.get(target_link)
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
elem = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"form[class='fileForm'] > label[data-type='12']")))
elem.send_keys("C://Users/WCS/Desktop/CV.pdf")
```
Error that the script encounters pointing at the last line:
```
Traceback (most recent call last):
File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\keep_it.py", line 22, in <module>
elem.send_keys("C://Users/WCS/Desktop/CV.pdf")
File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 479, in send_keys
'value': keys_to_typing(value)})
File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\WCS\AppData\Local\Programs\Python\Python37-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
(Session info: chrome=80.0.3987.149)
```
This is how I tried using requests which could not upload the file either:
```
import requests
from bs4 import BeautifulSoup

aplication_link = 'https://jobs.allianz.com/sap/opu/odata/hcmx/erc_ui_auth_srv/AttachmentSet?sap-client=100&sap-language=en'

with requests.Session() as s:
    s.auth = ("SIM.iqbal_123", "SShift_123")
    s.post("https://jobs.allianz.com/sap/hcmx/validate_ea?sap-client=100&sap-language={2}")
    r = s.get("https://jobs.allianz.com/sap/opu/odata/hcmx/erc_ui_auth_srv/UserSet('me')?sap-client=100&sap-language=en", headers={'x-csrf-token': 'Fetch'})
    token = r.headers.get("x-csrf-token")
    s.headers["x-csrf-token"] = token
    file = open("CV.pdf", "rb")
    r = s.post(aplication_link, files={"Slug": f"Filename={file}&Title=CV%5FTEST&AttachmentTypeID=12"})
    print(r.status_code)
```
Btw, this is the [pdf](https://filebin.net/dc41y5p7ix8jp58t) file in case you wanna test.
> How can I upload a pdf file using send_keys or requests?
***EDIT:***
I've made some changes to my [existing script](https://filebin.net/r4ux62fizvyu29ne), which now works for this [link](https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/1/) visible there as `Cover Letter`, but fails when it goes for this [link](https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/2/) visible as `Documents`. They are almost identical.
|
2020/03/24
|
[
"https://Stackoverflow.com/questions/60838550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10568531/"
] |
Please refer to the solution below to avoid your exception:
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as Wait
from selenium.webdriver.common.action_chains import ActionChains
import os
landing_page = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/'
target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57262231/2/'
driver = webdriver.Chrome(executable_path=r"C:\New folder\chromedriver.exe")
wait = WebDriverWait(driver,30)
driver.get(landing_page)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".profileContainer > button.trigger"))).click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='alias']"))).send_keys("SIM.iqbal_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='password']"))).send_keys("SShift_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button.loginBtn"))).click()
driver.get(target_link)
driver.maximize_window()
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
element = wait.until(EC.element_to_be_clickable((By.XPATH,"//label[@class='button uploadType-12-btn']")))
print element.text
webdriver.ActionChains(driver).move_to_element(element).click(element).perform()
webdriver.ActionChains(driver).move_to_element(element).click(element).perform()
absolute_file_path = os.path.abspath("Path of your pdf file")
print absolute_file_path
file_input = driver.find_element_by_id("DOCUMENTS--fileElem")
file_input.send_keys(absolute_file_path)
```
**Output:**
[](https://i.stack.imgur.com/9TRU2.png)
|
Try this script; it uploads the document on both pages:
```
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
landing_page = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/SEARCH/RESULTS/'
first_target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/1/'
second_target_link = 'https://jobs.allianz.com/sap/bc/bsp/sap/zhcmx_erc_ui_ex/desktop.html#/APPLICATION/57274787/2/'
driver = webdriver.Chrome()
wait = WebDriverWait(driver,30)
driver.get(landing_page)
wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".profileContainer > button.trigger"))).click()
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='alias']"))).send_keys("SIM.iqbal_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[name='password']"))).send_keys("SShift_123")
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button.loginBtn"))).click()
#----------------------------first upload starts from here-----------------------------------
driver.get(first_target_link)
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
element = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"form[class='fileForm'] > label[class$='uploadTypeCoverLetterBtn']")))
driver.execute_script("arguments[0].click();",element)
file_input = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[id='COVERLETTER--fileElem']")))
file_input.send_keys("C://Users/WCS/Desktop/script selenium/CV.pdf")
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR,".loadingSpinner")))
save_draft = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".applicationStepsUIWrapper > button.saveDraftBtn")))
driver.execute_script("arguments[0].click();",save_draft)
close = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".promptWrapper button.closeBtn")))
driver.execute_script("arguments[0].click();",close)
#-------------------------second upload starts from here-------------------------------------
driver.get(second_target_link)
button = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"button[class*='uploadBtn']")))
driver.execute_script("arguments[0].click();",button)
element = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"form[class='fileForm'] > label[data-type='12']")))
driver.execute_script("arguments[0].click();",element)
file_input = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR,"input[id='DOCUMENTS--fileElem']")))
file_input.send_keys("C://Users/WCS/Desktop/script selenium/CV.pdf")
wait.until(EC.invisibility_of_element_located((By.CSS_SELECTOR,".loadingSpinner")))
save_draft = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".applicationStepsUIWrapper > button.saveDraftBtn")))
driver.execute_script("arguments[0].click();",save_draft)
close = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".promptWrapper button.closeBtn")))
driver.execute_script("arguments[0].click();",close)
```
| 12,987
|
16,505,752
|
I collected some tweets through the Twitter API, then counted the words using `split(' ')` in Python. However, some words appear like this:
```
correct!
correct.
,correct
blah"
...
```
So how can I format the tweets without punctuation? Or maybe I should try another way to `split` tweets? Thanks.
|
2013/05/12
|
[
"https://Stackoverflow.com/questions/16505752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/975222/"
] |
You can do the split on multiple characters using `re.split`...
```
from string import punctuation
import re
puncrx = re.compile(r'[{}\s]'.format(re.escape(punctuation)))
print filter(None, puncrx.split(your_tweet))
```
Or, just find words that contain certain contiguous characters:
```
print re.findall(r'[\w#@]+', your_tweet)
```
eg:
```
print re.findall(r'[\w@#]+', 'talking about #python with @someone is so much fun! Is there a 140 char limit? So not cool!')
# ['talking', 'about', '#python', 'with', '@someone', 'is', 'so', 'much', 'fun', 'Is', 'there', 'a', '140', 'char', 'limit', 'So', 'not', 'cool']
```
I did originally have a smiley in the example, but of course these end up getting filtered out with this method, so that's something to be wary of.
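For reference, here is the same split-on-punctuation idea in Python 3 syntax (the sample tweet is invented; `filter` returns an iterator in Python 3, so a list comprehension is used instead):

```python
from string import punctuation
import re

# split on any run of punctuation or whitespace
puncrx = re.compile(r'[{}\s]+'.format(re.escape(punctuation)))

tweet = 'correct! ,correct blah" #python is fun.'   # invented sample
words = [w for w in puncrx.split(tweet) if w]       # drop empty pieces
print(words)   # ['correct', 'correct', 'blah', 'python', 'is', 'fun']
```

Note that this variant strips `#` and `@` as well, unlike the `findall` approach, so hashtags and mentions lose their prefix.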
|
Try removing the punctuation from the string before doing the split.
```
import string
s = "Some nice sentence. This has punctuation!"
out = s.translate(string.maketrans("",""), string.punctuation)
```
Then do the `split` on `out`.
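`string.maketrans` exists only in Python 2; in Python 3 the closest equivalent is `str.maketrans` with a third argument naming the characters to delete:

```python
import string

s = "Some nice sentence. This has punctuation!"
# the third maketrans argument lists characters to remove entirely
out = s.translate(str.maketrans('', '', string.punctuation))
print(out)          # Some nice sentence This has punctuation
print(out.split())  # ['Some', 'nice', 'sentence', 'This', 'has', 'punctuation']
```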
| 12,988
|
56,399,448
|
I have the following code, which sums up the values of each key. I am trying to use a list in the reducer, since my actual use case is to sample values of each key, but I get the error shown below. How do I achieve this with a list (or tuple)? My data always comes in the form of tensors and I need to use TensorFlow for the reduction.
Raw data
```
ids | features
--------------
1 | 1
2 | 2.2
3 | 7
1 | 3.0
2 | 2
3 | 3
```
Desired data
```
ids | features
--------------
1 | 4
2 | 4.2
3 | 10
```
Tensorflow code
```
import numpy as np
import tensorflow as tf

tf.enable_eager_execution()

# this is a toy example. My inputs are always passed as tensors.
ids = tf.constant([1, 2, 3, 1, 2, 3])
features = tf.constant([1, 2.2, 7, 3.0, 2, 3])

# Define reducer
# Reducer requires 3 functions - init_func, reduce_func, finalize_func.
# init_func - to define initial value
# reduce_func - operation to perform on values with same key
# finalize_func - value to return in the end.
def init_func(_):
    return []

def reduce_func(state, value):
    # I actually want to sample 2 values from list but for simplicity here I return sum
    return state + value['features']

def finalize_func(state):
    return np.sum(state)

reducer = tf.contrib.data.Reducer(init_func, reduce_func, finalize_func)

# Group the data by id
def key_f(row):
    return tf.to_int64(row['ids'])

t = tf.contrib.data.group_by_reducer(
    key_func=key_f,
    reducer=reducer)

ds = tf.data.Dataset.from_tensor_slices({'ids': ids, 'features': features})
ds = ds.apply(t)
ds = ds.batch(6)

iterator = ds.make_one_shot_iterator()
data = iterator.get_next()
print(data)
```
Following is the error I get
```
/home/lyft/venv/local/lib/python2.7/site-packages/tensorflow/python/data/ops/dataset_ops.pyc in __init__(self, func, transformation_name, dataset, input_classes, input_shapes, input_types, input_structure, add_to_graph, defun_kwargs)
2122 self._function = tf_data_structured_function_wrapper
2123 if add_to_graph:
-> 2124 self._function.add_to_graph(ops.get_default_graph())
2125 else:
2126 # Use the private method that will execute
AttributeError: '_OverloadedFunction' object has no attribute 'add_to_graph'
```
|
2019/05/31
|
[
"https://Stackoverflow.com/questions/56399448",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7418127/"
] |
What you can do to mock your store is
```
import { store } from './reduxStore';
import sampleFunction from './sampleFunction.js';

jest.mock('./reduxStore')

const mockState = {
  foo: { isGood: true }
}

// at this point store.getState is going to be mocked
store.getState = () => mockState

test('sampleFunction returns true', () => {
  // assert that state.foo.isGood = true
  expect(sampleFunction()).toBeTruthy();
});
```
|
```
import { store } from './reduxStore';
import sampleFunction from './sampleFunction.js';

beforeAll(() => {
  jest.mock('./reduxStore')
  const mockState = {
    foo: { isGood: true }
  }
  // making getState a mock function and returning the mock value
  store.getState = jest.fn().mockReturnValue(mockState)
});

afterAll(() => {
  jest.clearAllMocks();
  jest.resetAllMocks();
});

test('sampleFunction returns true', () => {
  // assert that state.foo.isGood = true
  expect(sampleFunction()).toBeTruthy();
});
```
| 12,990
|
36,620,656
|
I am trying to learn how to write a script `control.py` that runs another script `test.py` in a loop a certain number of times. In each run it should read `test.py`'s output and halt it if some predefined output is printed (e.g. the text 'stop now'), and the loop then continues with its next iteration (once `test.py` has finished, either on its own or by force). So, something along these lines:
```
for i in range(n):
    os.system('test.py someargument')
    if output == 'stop now':  # stop the current test.py process and continue with next iteration
        # output here is supposed to contain what test.py prints
```
* The problem with the above is that it does not check the output of `test.py` while it is running; instead it waits until the `test.py` process has finished on its own, right?
* Basically, I am trying to learn how to use a Python script to control another one as it is running (e.g. having access to what it prints, and so on).
* Finally, is it possible to run `test.py` in a new terminal (i.e. not in `control.py`'s terminal) and still achieve the above goals?
---
**An attempt:**
`test.py` is this:
```
from itertools import permutations
import random as random

perms = [''.join(p) for p in permutations('stop')]

for i in range(1000000):
    rand_ind = random.randrange(0, len(perms))
    print perms[rand_ind]
```
And `control.py` is this: (following Marc's suggestion)
```
import subprocess

command = ["python", "test.py"]
n = 10
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        #if output == '' and p.poll() is not None:
        #    break
        if output == 'stop':
            print 'sucess'
            p.kill()
            break
    #Do whatever you want
    #rc = p.poll() #Exit Code
```
|
2016/04/14
|
[
"https://Stackoverflow.com/questions/36620656",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can use the `subprocess` module, or `os.popen`:

```
os.popen(command[, mode[, bufsize]])
```

This opens a pipe to or from `command`. The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (the default) or 'w'. Since it is a file-like object, you can read it and check whether "stop now" appears.

With `subprocess` I would suggest

```
subprocess.call(['python.exe', command])
```

or `subprocess.Popen`, which is similar to `os.popen`. `os.system` is not deprecated and you can use it as well, but you won't get an object back from it, only the return code at the end of execution.

With `subprocess.call` you can run the script in a new terminal. If you only want to call `test.py` multiple times, you can also put your script in a `def main()` and run `main()` as often as you like, until "stop now" is generated.

Looking at what you wrote above, you can also redirect the output to a file directly from the OS call, e.g. `os.system('test.py *args >> /tmp/mickey.txt')`, and then check the file on each round.

Hope this solves your query :-) otherwise comment again.
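A runnable sketch of that pipe-reading idea (the child command here is a stand-in for `test.py someargument`, and a POSIX shell is assumed for the quoting; Python 3 syntax):

```python
import os
import sys

# the child stands in for 'test.py': it prints a few lines, one being the marker
cmd = sys.executable + " -c \"print('working'); print('stop now')\""

pipe = os.popen(cmd)          # 'r' mode: a file object wired to the child's stdout
for line in pipe:
    print(line.strip())
    if line.strip() == 'stop now':
        break                 # predefined output seen: stop reading
pipe.close()
```

With `subprocess.Popen` the structure is the same, except the file object is `p.stdout` and the child can be force-stopped with `p.kill()`.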
|
You can use the "subprocess" library for that.
```
import subprocess

command = ["python", "test.py", "someargument"]
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline()
        if output == '' and p.poll() is not None:
            break
        if output == 'stop now':
            # Do whatever you want
            pass
    rc = p.poll()  # Exit Code
```
| 12,991
|
13,238,357
|
I face a problem when using Scrapy + Mongodb with Tor. I get the following error when I try to have a mongodb pipeline in Scrapy.
```
2012-11-05 13:41:14-0500 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
|S-chain|-<>-127.0.0.1:9050-<><>-127.0.0.1:27017-<--denied
Traceback (most recent call last):
File "/usr/bin/scrapy", line 4, in <module>
execute()
File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 131, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 97, in _run_print_help
func(*a, **kw)
File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 138, in _run_command
cmd.run(args, opts)
File "/usr/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 42, in run
q = self.crawler.queue
File "/usr/lib/python2.7/dist-packages/scrapy/command.py", line 33, in crawler
self._crawler.configure()
File "/usr/lib/python2.7/dist-packages/scrapy/crawler.py", line 43, in configure
self.engine = ExecutionEngine(self.settings, self._spider_closed)
File "/usr/lib/python2.7/dist-packages/scrapy/core/engine.py", line 33, in __init__
self.scraper = Scraper(self, self.settings)
File "/usr/lib/python2.7/dist-packages/scrapy/core/scraper.py", line 66, in __init__
self.itemproc = itemproc_cls.from_settings(settings)
File "/usr/lib/python2.7/dist-packages/scrapy/middleware.py", line 33, in from_settings
mw = mwcls()
File "/home/bharani/ABCD_scraper/political_forum_scraper/pipelines.py", line 9, in __init__
settings['MONGODB_PORT'])
File "/usr/local/lib/python2.7/dist-packages/pymongo/connection.py", line 290, in __init__
self.__find_node()
File "/usr/local/lib/python2.7/dist-packages/pymongo/connection.py", line 586, in __find_node
raise AutoReconnect(', '.join(errors))
pymongo.errors.AutoReconnect: could not connect to localhost:27017: [Errno 111] Connection refused
```
I am not sure how to resolve this. When I do not use `proxychains`, it crawls perfectly fine.
Any help is appreciated.
Thanks.
---
Edit:
It's not code specific. See this link: <http://isbullsh.it/2012/04/Web-crawling-with-scrapy/>
This is a simple tutorial to use `Scrapy` with `MongoDB`. We are supposed to call
`scrapy crawl isbullshit`
to run the crawler which works perfectly fine. To use `Tor`, it should be called like this:
`proxychains scrapy crawl isbullshit`
Which does not work for me. The source code of the tutorial is here: <https://github.com/BaltoRouberol/isbullshit-crawler>
|
2012/11/05
|
[
"https://Stackoverflow.com/questions/13238357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/139909/"
] |
```
pymongo.errors.AutoReconnect: could not connect to localhost:27017: [Errno 111] Connection refused
```
It seems you cannot connect to localhost on port 27017. Is this the correct host and port? Make sure about that; also make sure the mongodb server is running in the background, otherwise you will never be able to connect to it.
If mongodb refuses to start because of a stale lock file, remove only the lock file (not the whole data directory):
```
sudo rm /var/lib/mongodb/mongod.lock
```
and restart the server, something like:
```
sudo service mongodb start
```
in Debian or
```
sudo systemctl restart mongodb
```
in Arch Linux
|
It might be that it's trying to redirect your MongoDB connection (localhost:27017) to TOR. If you want to exclude localhost connections from proxychains, you can add the following line to your */etc/proxychains.conf*:
```
localnet 127.0.0.1 000 255.255.255.255
```
| 12,995
|
48,734,784
|
So I'm working on this code for school and I don't know much about Python. Can someone tell me why my loop keeps outputting "Invalid score" when the input is one of the valid scores? If I enter 1, it will say invalid score, but it should be valid because I set the variable valid\_scores = [0,1,2,3,4,5,6,7,8,9,10].
This code has to allow the user to enter 5 inputs for each of six different groups and then add them to a list, from which it will find the sum and the average.
Code:
```
valid_scores = [0,1,2,3,4,5,6,7,8,9,10]
for group in groups:
    scores_for_group = []
    valid_scores_loops = 0
    print("What do you score for group {group_name} as a player?"\
          .format(group_name=group))
    while valid_scores_loops < 5:
        valid_scores = True
        player_input = input("* ")
        if player_input == valid_scores:
            player1.append(player_input)
            scores_for_player.append(int(player_input))
        else:
            if player_input != valid_scores:
                valid_scores = False
                print("Invalid score!")
```
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48734784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9346720/"
] |
The error means that the 3rd line is not well-formed: you need a (*blank space*) between `android:id` value and `android:title` key.
This is the correct XML:
```
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@+id/Profile" android:title="Profile" android:icon="@drawable/ic_person_black_24dp" />
<item android:id="@+id/TradingHistory" android:title="TradingHistory" android:icon="@drawable/ic_history_black_24dp" />
<item android:id="@+id/News" android:title="News" android:icon="@drawable/ic_view_list_black_24dp" />
</menu>
```
|
Go to code in the toolbar and click on reformat code.
| 12,998
|
19,855,110
|
I am learning how to use Selenium in `python` and I am trying to modify a `css` style on <http://www.google.com>.
For example, the `<span class="gbts"> ....</span>` on that page.
I would like to modify the `gbts` class.
```
browser.execute_script("q = document.getElementById('gbts');" + "q.style.border = '1px solid red';")
```
Is there an API method called `getElementByClass('gbts')` ?
|
2013/11/08
|
[
"https://Stackoverflow.com/questions/19855110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1073181/"
] |
You are asking how to get an element by its CSS class using JavaScript. Nothing to do with Selenium *really*.
Regardless, you have a few options. You can first grab the element using Selenium (so here, yes, Selenium is relevant):
```
element = driver.find_element_by_class_name("gbts")
```
With a reference to this element already, it's then very easy to give it a border:
```
driver.execute_script("arguments[0].style.border = '1px solid red';", element)
```
(Note the `arguments[0]`, which refers to the `element` passed in as the second argument.)
If you really must use JavaScript and JavaScript alone, then you are very limited. This is because there is no `getElementByClassName` function within JavaScript, only `getElementsByClassName`, which returns a *list* of elements that match a given class.
So you must specifically target which element within the returned list you want to change. If I wanted to change the *very first* element that had a class of `gbts`, I'd do:
```
driver.execute_script("document.getElementsByClassName('gbts')[0].style.border = '1px solid red';")
```
I would suggest you go for the first option, which means you have Selenium do the leg work for you.
|
Another pure JavaScript option which you may find gives you more flexibility is to use `document.querySelector()`.
Not only will it automatically select the first item from the results set for you, but additionally say you have multiple elements with that class name, but you know that there is only one element with that class name within a certain parent element, you can restrict the search to within that parent element, e.g. `document.querySelector("#parent-div .gbts")`.
See the [(Mozilla documentation)](https://developer.mozilla.org/en-US/docs/Web/API/document.querySelector) for more info.
| 12,999
|
61,035,989
|
I am trying to run this simple Flask app with
`FLASK_APP=app.py flask run`
and I keep getting this error in the terminal:
`sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) fe_sendauth: no password supplied`
here is my app:
```
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres@localhost:5432/testdb'
db = SQLAlchemy(app)

class Person(db.Model):
    __tablename__ = 'persons'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(), nullable=False)

db.create_all()

@app.route('/')
def index():
    return 'Hello World!'
```
Just to clarify: I have psycopg2, Flask-SQLAlchemy, and all dependencies installed, **AND THERE IS NO PASSWORD FOR USER postgres BY DEFAULT**. I haven't changed that, and I can log in to the database as postgres using psql **without a password**, without any problems.
Full error output:
```
FLASK_APP=app.py flask run
* Serving Flask app "app.py"
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
/usr/lib/python3/dist-packages/flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.
warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning.')
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 1122, in _do_get
return self._pool.get(wait, self._timeout)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/queue.py", line 145, in get
raise Empty
sqlalchemy.util.queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2147, in _wrap_pool_connect
return fn()
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 387, in connect
return _ConnectionFairy._checkout(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 766, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 516, in checkout
rec = pool._do_get()
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 1138, in _do_get
self._dec_overflow()
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 1135, in _do_get
return self._create_connection()
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 333, in _create_connection
return _ConnectionRecord(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 461, in __init__
self.__connect(first_connect_check=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 651, in __connect
connection = pool._invoke_creator(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 105, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 393, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/aa/.local/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/aa/.local/bin/flask", line 11, in <module>
sys.exit(main())
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 967, in main
cli.main(args=sys.argv[1:], prog_name="python -m flask" if as_module else None)
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 586, in main
return super(FlaskGroup, self).main(*args, **kwargs)
File "/home/aa/.local/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/aa/.local/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/aa/.local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/aa/.local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/aa/.local/lib/python3.6/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/home/aa/.local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 848, in run_command
app = DispatchingApp(info.load_app, use_eager_loading=eager_loading)
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 305, in __init__
self._load_unlocked()
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 330, in _load_unlocked
self._app = rv = self.loader()
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 388, in load_app
app = locate_app(self, import_name, name)
File "/home/aa/.local/lib/python3.6/site-packages/flask/cli.py", line 240, in locate_app
__import__(module_name)
File "/home/aa/Desktop/Udacity - FullStackND 2020/FullStackND2020/Practice Projects/flask hello world app/app.py", line 18, in <module>
db.create_all()
File "/usr/lib/python3/dist-packages/flask_sqlalchemy/__init__.py", line 972, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/usr/lib/python3/dist-packages/flask_sqlalchemy/__init__.py", line 964, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/usr/lib/python3/dist-packages/sqlalchemy/sql/schema.py", line 3934, in create_all
tables=tables)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1928, in _run_visitor
with self._optional_conn_ctx_manager(connection) as conn:
File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1921, in _optional_conn_ctx_manager
with self.contextual_connect() as conn:
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2112, in contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2151, in _wrap_pool_connect
e, dialect, self)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1465, in _handle_dbapi_exception_noconnection
exc_info
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 2147, in _wrap_pool_connect
return fn()
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 387, in connect
return _ConnectionFairy._checkout(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 766, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 516, in checkout
rec = pool._do_get()
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 1138, in _do_get
self._dec_overflow()
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 1135, in _do_get
return self._create_connection()
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 333, in _create_connection
return _ConnectionRecord(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 461, in __init__
self.__connect(first_connect_check=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/pool.py", line 651, in __connect
connection = pool._invoke_creator(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/strategies.py", line 105, in connect
return dialect.connect(*cargs, **cparams)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 393, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/aa/.local/lib/python3.6/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) fe_sendauth: no password supplied
```
|
2020/04/04
|
[
"https://Stackoverflow.com/questions/61035989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10600856/"
] |
Other than just setting a password, you can log in without a password by:
1. Enable trust authentication, eg `echo "local all all trust" | sudo tee -a /etc/postgresql/10/main/pg_hba.conf`
2. Create a role with login access with the same username as your local username, eg if `whoami` returns `myname`, then `sudo -u postgres psql -c "CREATE ROLE myname WITH LOGIN"`
3. Use sqlalchemy's postgresql unix domain connection instead of tcp ([ref](https://docs.sqlalchemy.org/en/13/dialects/postgresql.html?highlight=postgres%20password#unix-domain-connections)), eg `"postgresql+psycopg2://myname@/mydb"`
Note that this requires installing `pip3 install psycopg2-binary`
|
So apparently you need a password to connect to the database from SQLAlchemy. I was able to work around this by simply creating a password, or by adding a new user with a password.
Let me know if there is a different workaround/solution.
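For illustration, a hedged sketch of building such a URI (the credentials below are hypothetical placeholders, not values from the question):

```python
# Hypothetical credentials, for illustration only.
user, password, host, port, db = "postgres", "mypassword", "localhost", 5432, "testdb"

uri = "postgresql://{}:{}@{}:{}/{}".format(user, password, host, port, db)
print(uri)  # -> postgresql://postgres:mypassword@localhost:5432/testdb
```

This string can then be assigned to `app.config['SQLALCHEMY_DATABASE_URI']`.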
| 13,000
|
61,469,948
|
I am a beginner in the field of Data Science and I am working with data preprocessing in Python. I am working with the Fingers dataset, and I want to move the pictures so that every picture ends up in its own directory, so I can use **ImageDataGenerator** and **flow_from_directory** to import the pictures and apply rescaling, flipping, etc.
the code below shows what I'm trying to do...
```
dataset_path = "/tmp/extracted_fingers/fingers"
train_set = os.path.join(dataset_path, "train/")
test_set = os.path.join(dataset_path, "test/")
for i in range(6):
os.mkdir("/tmp/extracted_fingers/fingers/train/" + str(i) + "L")
os.mkdir("/tmp/extracted_fingers/fingers/test/" + str(i) + "L")
for i in range(6):
os.mkdir("/tmp/extracted_fingers/fingers/train/" + str(i) + "R")
os.mkdir("/tmp/extracted_fingers/fingers/test/" + str(i) + "R")
L_0_tr = glob.glob(train_set + "*0L.png")
L_1_tr = glob.glob(train_set + "*1L.png")
L_2_tr = glob.glob(train_set + "*2L.png")
L_3_tr = glob.glob(train_set + "*3L.png")
L_4_tr = glob.glob(train_set + "*4L.png")
L_5_tr = glob.glob(train_set + "*5L.png")
R_0_tr = glob.glob(train_set + "*0R.png")
R_1_tr = glob.glob(train_set + "*1R.png")
R_2_tr = glob.glob(train_set + "*2R.png")
R_3_tr = glob.glob(train_set + "*3R.png")
R_4_tr = glob.glob(train_set + "*4R.png")
R_5_tr = glob.glob(train_set + "*5R.png")
L_0_ts = glob.glob(test_set + "*0L.png")
L_1_ts = glob.glob(test_set + "*1L.png")
L_2_ts = glob.glob(test_set + "*2L.png")
L_3_ts = glob.glob(test_set + "*3L.png")
L_4_ts = glob.glob(test_set + "*4L.png")
L_5_ts = glob.glob(test_set + "*5L.png")
R_0_ts = glob.glob(test_set + "*0R.png")
R_1_ts = glob.glob(test_set + "*1R.png")
R_2_ts = glob.glob(test_set + "*2R.png")
R_3_ts = glob.glob(test_set + "*3R.png")
R_4_ts = glob.glob(test_set + "*4R.png")
R_5_ts = glob.glob(test_set + "*5R.png")
shutil.move(L_0_tr, "0L")
shutil.move(L_1_tr, "1L")
shutil.move(L_2_tr, "2L")
shutil.move(L_3_tr, "3L")
shutil.move(L_4_tr, "4L")
shutil.move(L_5_tr, "5L")
shutil.move(R_0_tr, "0R")
shutil.move(R_1_tr, "1R")
shutil.move(R_2_tr, "2R")
shutil.move(R_3_tr, "3R")
shutil.move(R_4_tr, "4R")
shutil.move(R_5_tr, "5R")
shutil.move(L_0_ts, "0L")
shutil.move(L_1_ts, "1L")
shutil.move(L_2_ts, "2L")
shutil.move(L_3_ts, "3L")
shutil.move(L_4_ts, "4L")
shutil.move(L_5_ts, "5L")
shutil.move(R_0_ts, "0R")
shutil.move(R_1_ts, "1R")
shutil.move(R_2_ts, "2R")
shutil.move(R_3_ts, "3R")
shutil.move(R_4_ts, "4R")
shutil.move(R_5_ts, "5R")
```
Now, this approach does the job, but I am sure there are better ways to do it with much less typing, which I, as a beginner, am trying to learn...
I do not know whether it can be done by loops, Regex, and string formatting or any other way...
However, if anyone could help me, I would appreciate it.
Thanks in advance.
Much Love
**Note:** if you want I can give more information
|
2020/04/27
|
[
"https://Stackoverflow.com/questions/61469948",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11381138/"
] |
Firstly, instead of doing a forEach and pushing promises, you can map them and use Promise.all; no push is needed. Your function can directly return the Promise.all call, and the caller can await it or use .then().
Something like this (I didn't test it)
```js
// serverlist declaration
function getList(serverlist) {
  const operations = serverlist.map(Server => {
    if (Server.mon_sid.includes('_DB')) {
      return home.checkOracleDatabase(Server.mon_hostname, Server.mon_port);
    } else if (Server.mon_sid.includes('_HDB')) {
      return home.checkHANADatabase(Server.mon_hostname, Server.mon_port);
    } else {
      return home.checkPort(Server.mon_port, Server.mon_hostname, 1000);
    }
  });
  return Promise.all(operations);
}

const serverlist = [...]
const test = await getList(serverlist)
// test is an array of the resolved values; Promise.all rejects as soon as any promise rejects
```
So I would create a function that takes a list of operations (serverlist) and returns the Promise.all call like the example above, without awaiting it.
The caller would await it, or use another Promise.all over a series of other calls to your function.
Potentially you can go deeper, like Inception :)
```js
// on the caller you can go even deeper
await Promise.all([getList([...]) , getList([...]) , getList([...]) ])
```
Considering what you added you can return something more customized like:
```js
if (Server.mon_sid.includes('_DB')) {
  return home.checkOracleDatabase(Server.mon_hostname, Server.mon_port).then(result => ({result, name: 'Oracle', index:..., date:..., hostname: Server.mon_hostname, port: Server.mon_port}))
```
In the case above your promise would return a json with the output of your operation as result, plus few additional fields you might want to reuse to organize your data.
|
For consistency, and to fully resolve this question, we would need to see *all* of the code, including all variable definitions (for now I can't find where the `promises` variable is defined). Thanks
| 13,001
|
73,805,667
|
I am a beginner with Python. I need to calculate a binary number. I used a for loop for this because I need to print those same numbers with len(). Can someone tell me what I am doing wrong?
```
for binairy in ["101011101"]:
    binairy = binairy ** 2
    print(binairy)
    print(len(binairy))
```
>
> TypeError: unsupported operand type(s) for \*\* or pow(): 'str' and 'int'
>
>
>
|
2022/09/21
|
[
"https://Stackoverflow.com/questions/73805667",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14861522/"
] |
You used a list, but you could use the string directly:
```
def binary_to_decimal(binary):
    decimal = 0
    for digit in binary:
        decimal = decimal*2 + int(digit)
    return decimal

print(binary_to_decimal("1010"))
```
|
["101011101"] is a list containing a single string. "101011101" is a string, which you iterate over character by character. Look at this:
```py
for let_me_find_out in ["101011101"]:
    print(let_me_find_out)

for let_me_find_out in "101011101":
    print(let_me_find_out)
```
With this knowledge, you can now start your binary conversion:
```py
for binairy in "101011101":
    binairy = binairy ** 2
    print(binairy)
```
That code gives you the error:
>
> TypeError: unsupported operand type(s) for \*\* or pow(): 'str' and 'int'
>
>
>
It means that you have the operator `**` and you put it between a string and a number. However, you can only put it between two numbers.
So you need to convert the string into a number using `int()`:
```py
for binairy in "101011101":
    binairy = int(binairy) ** 2
    print(binairy)
```
You'll now find that it still gives you 1s and 0s only. That is because you calculated 1^2 instead of 2^1. Swap the order of the operands:
```py
for binairy in "101011101":
    binairy = 2**int(binairy)
    print(binairy)
```
Now you get 2s and 1s, which is still incorrect. To deal with this, remember basic school where you learned about how numbers work. You don't use the number and take it to the power of something. Let me explain:
The number 29 is not calculated by `10^2 + 1^9` (which is 101). It is calculated by `2*10^1 + 9*10^0`. So your formula needs to be rewritten from scratch.
As you see, you have 3 components in a single part of the sum:
1. the digit value (2 or 9 in my example, but just 0 or 1 in a binary number)
2. the base (10 in my example, but 2 in your example)
3. the power (starting at 0 and counting upwards)
This brings you to
```py
power = 0
for binairy in "101011101":
    digit = int(binairy)
    base = 2
    value = digit * base ** power
    print(value)
    power += 1
```
With above code you get the individual values, but not the total value. So you need to introduce a sum:
```py
power = 0
sum = 0
for binairy in "101011101":
    digit = int(binairy)
    base = 2
    value = digit * base ** power
    sum += value
    print(value)
    power += 1
print(sum)
```
And you'll find that it gives you 373, which is not the correct answer (349 is correct). This is because you started the calculation from the beginning, but you should start at the end. A simple way to fix this is to reverse the text:
```py
power = 0
sum = 0
for binairy in reversed("101011101"):
    digit = int(binairy)
    base = 2
    value = digit * base ** power
    sum += value
    print(value)
    power += 1
print(sum)
```
Bam, you get it. That was easy! Just 1 hour of coding :-)
Later in your career, feel free to write `print(int("101011101", 2))` instead. Your colleagues will love you for using built-in functionality instead of writing complicated code.
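As a sanity check, the loop above condensed into one line agrees with the built-in conversion:

```python
s = "101011101"

# Manual conversion: same logic as the loop above, condensed into one line.
manual = sum(int(d) * 2**p for p, d in enumerate(reversed(s)))

# Built-in conversion.
builtin = int(s, 2)

print(manual, builtin)  # -> 349 349
```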
| 13,002
|
51,793,379
|
In my folder, I have a large number of images in **jpg** format, named **[fruit type].[index].jpg**.
Instead of manually making three new subfolders and copying the images into each one, is there some Python code that can parse the name of each image and decide where to move it based on the `fruit type` in the name, while creating a new subfolder whenever a new `fruit type` is parsed?
---
**Before**
* TrainingSet (file)
+ apple.100.jpg
+ apple.101.jpg
+ apple.102.jpg
+ apple.103.jpg
+ peach.100.jpg
+ peach.101.jpg
+ peach.102.jpg
+ orange.100.jpg
+ orange.101.jpg
---
**After**
* TrainingSet (file)
+ apple(file)
- apple.100.jpg
- apple.101.jpg
- apple.102.jpg
- apple.103.jpg
+ peach(file)
- peach.100.jpg
- peach.101.jpg
- peach.102.jpg
+ orange(file)
- orange.100.jpg
- orange.101.jpg
|
2018/08/10
|
[
"https://Stackoverflow.com/questions/51793379",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8238011/"
] |
Here’s the code to do just that, if you need help merging this into your codebase let me know:
```
import os, os.path, shutil

folder_path = "test"
images = [f for f in os.listdir(folder_path) if os.path.isfile(os.path.join(folder_path, f))]

for image in images:
    folder_name = image.split('.')[0]
    new_path = os.path.join(folder_path, folder_name)
    if not os.path.exists(new_path):
        os.makedirs(new_path)
    old_image_path = os.path.join(folder_path, image)
    new_image_path = os.path.join(new_path, image)
    shutil.move(old_image_path, new_image_path)
```
|
If they are all formatted similarly to the three fruit example you gave, you can simply do a string.split(".")[0] on each filename you encounter:
```
import os
for image in images:
    fruit = image.split(".")[0]
    if not os.path.isdir(fruit):
        os.mkdir(fruit)
    os.rename(image, os.path.join(fruit, image))
```
| 13,005
|
71,480,638
|
Attempted to implement the jit decorator to increase the speed of execution of my code. Not getting proper results; it is throwing all sorts of errors: KeyError, TypeError, etc.
The actual code without numba is working without any issues.
```
# The Code without numba is:
df = pd.DataFrame()
df['Serial'] = [865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880]
df['Value'] = [586,586.45,585.95,585.85,585.45,585.5,586,585.7,585.7,585.5,585.5,585.45,585.3,584,584,585]
df['Ref'] = [586.35,586.1,586.01,586.44,586.04,585.91,585.47,585.99,585.35,585.27,585.32,584.86,585.36,584.18,583.53,585]
df['Base'] = [0,-1,1,1,1,1,-1,0,1,1,1,0,1,1,0,-1]
df['A'] = 0.0
df['B'] = 0.0
df['Counter'] = 0
df['Counter'][0] = df['Serial'][0]

for i in range(0, len(df)-1):
    # Filling Column 'A'
    if (df.iloc[1+i,2] > df.iloc[1+i,1]) & (df.iloc[i,5] > df.iloc[1+i,1]) & (df.iloc[1+i,3] > 0):
        df.iloc[1+i,4] = round((df.iloc[1+i,1]*1.02),2)
    elif (df.iloc[1+i,2] < df.iloc[1+i,1]) & (df.iloc[i,5] < df.iloc[1+i,1]) & (df.iloc[1+i,3] < 0):
        df.iloc[1+i,4] = round((df.iloc[1+i,1]*0.98),2)
    else:
        df.iloc[1+i,4] = df.iloc[i,4]
    # Filling Column 'B'
    df.iloc[1+i,5] = round(((df.iloc[1+i,1] + df.iloc[1+i,2])/2),2)
    # Filling Column 'Counter'
    if (df.iloc[1+i,5] > df.iloc[1+i,1]):
        df.iloc[1+i,6] = df.iloc[1+i,0]
    else:
        df.iloc[1+i,6] = df.iloc[i,6]
df
```
The below code, where I tried to implement the numba jit decorator to speed up the original Python code, is throwing the error:
```
#The code with numba jit which is throwing error is:
df = pd.DataFrame()
df['Serial'] = [865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880]
df['Value'] = [586,586.45,585.95,585.85,585.45,585.5,586,585.7,585.7,585.5,585.5,585.45,585.3,584,584,585]
df['Ref'] = [586.35,586.1,586.01,586.44,586.04,585.91,585.47,585.99,585.35,585.27,585.32,584.86,585.36,584.18,583.53,585]
df['Base'] = [0,-1,1,1,1,1,-1,0,1,1,1,0,1,1,0,-1]

from numba import jit

@jit(nopython=True)
def Calcs(Serial, Value, Ref, Base):
    n = Base.size
    A = np.empty(n, dtype='f8')
    B = np.empty(n, dtype='f8')
    Counter = np.empty(n, dtype='f8')
    A[0] = 0.0
    B[0] = 0.0
    Counter[0] = Serial[0]
    for i in range(0, n-1):
        # Filling Column 'A'
        if (Ref[i+1] > Value[i+1]) & (B[i] > Value[i+1]) & (Base[i+1] > 0):
            A[i+1] = round((Value[i+1]*1.02),2)
        elif (Ref[i+1] < Value[i+1]) & (B[i] < Value[i+1]) & (Base[i+1] < 0):
            A[i+1] = round((Value[i+1]*0.98),2)
        else:
            A[i+1] = A[i]
        # Filling Column 'B'
        B[i+1] = round(((Value[i+1] + Ref[i+1])/2),2)
        # Filling Column 'Counter'
        if (B[i+1] > Value[i+1]):
            Counter[i+1] = Serial[i+1]
        else:
            Counter[i+1] = Counter[i]
    List = [A, B, Counter]
    return List

Serial = df['Serial'].values.astype(np.float64)
Value = df['Value'].values.astype(np.float64)
Ref = df['Ref'].values.astype(np.float64)
Base = df['Base'].values.astype(np.float64)
VCal = Calcs(Serial, Value, Ref, Base)
df['A'].values[:] = VCal[0].astype(object)
df['B'].values[:] = VCal[1].astype(object)
df['Counter'].values[:] = VCal[2].astype(object)
df
```
I tried to modify the code as per the guidance provided by @Jérôme Richard for the question [How to Eliminate for loop in Pandas Dataframe in filling each row values of a column based on multiple if,elif statements](https://stackoverflow.com/questions/71458405/how-to-eliminate-for-loop-in-pandas-dataframe-in-filling-each-row-values-of-a-co).
But I am getting errors and am unable to correct the code. I am looking for some help from the community in correcting and improving the above code, or in finding even better code to enhance the speed of execution.
The expected outcome of the code is shown in the below pic.
[](https://i.stack.imgur.com/3lMZj.jpg)
|
2022/03/15
|
[
"https://Stackoverflow.com/questions/71480638",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15922454/"
] |
You can only use `df['A'].values[:]` if the column `A` exists in the dataframe. Otherwise you need to create a new one, possibly with `df['A'] = ...`.
Moreover, the trick with `astype(object)` applies to strings but not to numbers. Indeed, string-based dataframe columns apparently do not use Numpy string-based arrays but Numpy object-based arrays containing CPython strings. For numbers, Pandas properly uses number-based arrays, and converting numbers back to object is inefficient. The same applies to `astype(np.float64)`: it is not needed if the type is already correct, which is the case here. If you are unsure about the input types, you can leave those casts in, as they are not very expensive.
The Numba function itself is fine (at least with a recent version of Numba). Note that you can specify the signature to [compile the function eagerly](https://numba.readthedocs.io/en/stable/user/jit.html). This feature also helps you catch typing errors sooner and makes them a bit clearer. The downside is that it makes the function less generic, as only specific types are supported (though you can specify multiple signatures).
```py
from numba import njit
@njit('List(float64[:])(float64[:], float64[:], float64[:], float64[:])')
def Calcs(Serial,Value,Ref,Base):
[...]
Serial = df['Serial'].values
Value = df['Value'].values
Ref = df['Ref'].values
Base = df['Base'].values
VCal = Calcs(Serial, Value, Ref, Base)
df['A'] = VCal[0]
df['B'] = VCal[1]
df['Counter'] = VCal[2]
```
Note that you can use the Numba decorator flag `fastmath=True` to speed up the computation if you are sure that the input array never contains special values like NaN, Inf, or -0, and you do not rely on FP-math associativity.
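As a side note, if the compiled function needs to return several arrays, returning a tuple instead of a Python list sidesteps reflected-list typing issues. Below is a minimal sketch under that assumption (the function name and the simplified math are placeholders, not the original `Calcs`; the code falls back to plain Python when Numba is absent):

```python
import numpy as np

def _calc(value, ref):
    n = value.size
    b = np.empty(n)
    c = np.empty(n)
    for i in range(n):
        b[i] = (value[i] + ref[i]) / 2.0  # simplified stand-in computation
        c[i] = b[i] * 2.0
    return b, c  # a tuple, not [b, c]

try:
    from numba import njit
    # Assumption: a UniTuple return type avoids Numba having to unify
    # reflected lists of arrays with different layouts ('C' vs 'A').
    calc = njit('UniTuple(float64[:], 2)(float64[:], float64[:])')(_calc)
except Exception:  # Numba missing or signature unsupported: run plain Python
    calc = _calc

b, c = calc(np.array([1.0, 2.0]), np.array([3.0, 4.0]))
```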
|
This code is giving a list-to-list conversion TypeError.
```
from numba import njit
df = pd.DataFrame()
df['Serial'] = [865,866,867,868,869,870,871,872,873,874,875,876,877,878,879,880]
df['Value'] = [586,586.45,585.95,585.85,585.45,585.5,586,585.7,585.7,585.5,585.5,585.45,585.3,584,584,585]
df['Ref'] = [586.35,586.1,586.01,586.44,586.04,585.91,585.47,585.99,585.35,585.27,585.32,584.86,585.36,584.18,583.53,585]
df['Base'] = [0,-1,1,1,1,1,-1,0,1,1,1,0,1,1,0,-1]
@njit('List(float64[:])(float64[:], float64[:], float64[:], float64[:])',fastmath=True)
def Calcs(Serial,Value,Ref,Base):
n = Base.size
A = np.empty(n)
B = np.empty(n)
Counter = np.empty(n)
A[0] = 0.0
B[0] = 0.0
Counter[0] = Serial[0]
for i in range(0,n-1):
# Filling Column 'A'
if (Ref[i+1] > Value[i+1]) & (B[i] > Value[i+1]) & (Base[i+1] > 0):
A[i+1] = round((Value[i+1]*1.02),2)
elif (Ref[i+1] < Value[i+1]) & (B[i] < Value[i+1]) & (Base[i+1] < 0):
A[i+1] = round((Value[i+1]*0.98),2)
else:
A[i+1] = A[i]
# Filling Column 'B'
B[i+1] = round(((Value[i+1] + Ref[i+1])/2),2)
# Filling Column 'Counter'
if (B[i+1] > Value[i+1]):
Counter[i+1] = Serial[i+1]
else:
Counter[i+1] = Counter[i]
List = [A,B,Counter]
return List
Serial = df['Serial'].values
Value = df['Value'].values
Ref = df['Ref'].values
Base = df['Base'].values
VCal = Calcs(Serial, Value, Ref, Base)
df['A'] = VCal[0]
df['B'] = VCal[1]
df['Counter'] = VCal[2]
df
```
Getting the below error.
```
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No conversion from list(array(float64, 1d, C))<iv=None> to
list(array(float64, 1d, A))<iv=None> for '$408return_value.5', defined at None
File "<ipython-input-9-3c9e0fe02b75>", line 33:
def Calcs(Serial,Value,Ref,Base):
<source elided>
List = [A,B,Counter]
return List
^
During: typing of assignment at <ipython-input-9-3c9e0fe02b75> (33)
File "<ipython-input-9-3c9e0fe02b75>", line 33:
def Calcs(Serial,Value,Ref,Base):
<source elided>
List = [A,B,Counter]
return List
^
```
| 13,007
|
48,757,747
|
Let's consider a file called `test1.py` and containing the following code:
```
def init_foo():
global foo
foo=10
```
Let's consider another file called `test2.py` and containing the following:
```
import test1
test1.init_foo()
print(foo)
```
Provided that `test1` is on the pythonpath (and gets imported correctly) I will now receive the following error message:
`NameError: name 'foo' is not defined`
Can anyone explain why the variable `foo` is not defined as a `global` in the scope of `test2.py` when it is run? And can you provide a workaround for this problem?
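A minimal self-contained sketch of the situation (it simulates `test1.py` as an in-memory module, which I assume behaves like the real file): the global ends up in `test1`'s own namespace, not in the importer's.

```python
import types

# Hypothetical stand-in for test1.py: a module whose init_foo() creates a global.
test1 = types.ModuleType("test1")
exec("def init_foo():\n    global foo\n    foo = 10\n", test1.__dict__)

test1.init_foo()
print('foo' in globals())  # False: foo did not land in *this* module's namespace
print(test1.foo)           # 10: it landed in test1's namespace instead
```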
Thx!
|
2018/02/13
|
[
"https://Stackoverflow.com/questions/48757747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4961888/"
] |
For this you need to use [`selModel`](https://docs.sencha.com/extjs/5.1.4/api/Ext.grid.Panel.html#cfg-selModel) config for [`grid`](https://docs.sencha.com/extjs/5.1.4/api/Ext.grid.Panel.html) using [`CheckboxModel`](https://docs.sencha.com/extjs/5.1.4/api/Ext.selection.CheckboxModel.html).
* A **selModel** is an Ext.selection.Model instance or config object, or the selection model class's alias string. In the latter case, its type property determines to which type of selection model this config is applied.
* A **CheckboxModel** selection model that renders a column of checkboxes that can be toggled to select or deselect rows. The default mode for this selection model is MULTI.
In this **[FIDDLE](https://fiddle.sencha.com/#view/editor&fiddle/2d1r)**, I have created a demo using two grids. In the first grid you can select records with the `ctrl`/`shift` keys, and in the second grid you can select directly on row click. I hope this will help guide you toward achieving your requirement.
**CODE SNIPPET**
```
Ext.application({
name: 'Fiddle',
launch: function () {
//define user store
Ext.define('User', {
extend: 'Ext.data.Store',
alias: 'store.users',
fields: ['name', 'email', 'phone'],
data: [{
name: 'Lisa',
email: 'lisa@simpsons.com',
phone: '555-111-1224'
}, {
name: 'Bart',
email: 'bart@simpsons.com',
phone: '555-222-1234'
}, {
name: 'Homer',
email: 'homer@simpsons.com',
phone: '555-222-1244'
}, {
name: 'Marge',
email: 'marge@simpsons.com',
phone: '555-222-1254'
}, {
name: 'AMargeia',
email: 'marge@simpsons.com',
phone: '555-222-1254'
}]
});
//Define custom grid
Ext.define('MyGrid', {
extend: 'Ext.grid.Panel',
alias: 'widget.mygrid',
store: {
type: 'users'
},
columns: [{
text: 'Name',
flex: 1,
dataIndex: 'name'
}, {
text: 'Email',
dataIndex: 'email',
flex: 1
}, {
text: 'Phone',
flex: 1,
dataIndex: 'phone'
}]
});
//create panel with 2 grid
Ext.create({
xtype: 'panel',
renderTo: Ext.getBody(),
items: [{
//select multiple records by using ctrl key and by selecting the checkbox with mouse in extjs grid
xtype: 'mygrid',
title: 'multi selection example by using ctrl/shif key',
/*
* selModel
* A Ext.selection.Model instance or config object,
* or the selection model class's alias string.
*/
selModel: {
/* selType
* A selection model that renders a column of checkboxes
* that can be toggled to select or deselect rows.
* The default mode for this selection model is MULTI.
*/
selType: 'checkboxmodel'
}
}, {
//select multi record by row click
xtype: 'mygrid',
margin: '20 0 0 0',
title: 'multi selection example on rowclick',
/*
* selModel
* A Ext.selection.Model instance or config object,
* or the selection model class's alias string.
*/
selModel: {
/* selType
* A selection model that renders a column of checkboxes
* that can be toggled to select or deselect rows.
* The default mode for this selection model is MULTI.
*/
selType: 'checkboxmodel',
/* mode
* "SIMPLE" - Allows simple selection of multiple items one-by-one.
* Each click in grid will either select or deselect an item.
*/
mode: 'SIMPLE'
}
}]
});
}
});
```
|
I achieved this by adding keyup/keydown listeners. Please find the fiddle where I updated the code.
<https://fiddle.sencha.com/#view/editor&fiddle/2d98>
| 13,008
|
56,314,194
|
I want to get a random item from a list, but I don't want some items to be considered by `random.choice()`. Below is my data structure:
```
x=[
{ 'id': 1, 'version':0.1, 'ready': True },
{ 'id': 6, 'version':0.2, 'ready': True },
{ 'id': 4, 'version':0.1, 'ready': False },
{ 'id': 35, 'version':0.1, 'ready': False },
{ 'id': 45, 'version':0.1, 'ready': False },
{ 'id': 63, 'version':0.1, 'ready': True },
{ 'id': 34, 'version':0.1, 'ready': True },
{ 'id': 33, 'version':0.1, 'ready': True }
]
```
I can get a random item by using `random.choice(x)`. But is there any way to consider the `'ready': True` attribute of an item while using `random.choice()`?
Or any SIMPLE trick to achieve this?
NOTE: I want to use python in-built modules to avoid dependencies like `numpy`, etc.
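For reference, the only approach I know so far is to filter first and then choose; a sketch using just the standard library:

```python
import random

x = [
    {'id': 1, 'version': 0.1, 'ready': True},
    {'id': 4, 'version': 0.1, 'ready': False},
    {'id': 63, 'version': 0.1, 'ready': True},
]

# random.choice has no predicate argument, so restrict the population first.
ready_items = [item for item in x if item['ready']]
chosen = random.choice(ready_items)
```

Is there something more direct than building the intermediate list?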
|
2019/05/26
|
[
"https://Stackoverflow.com/questions/56314194",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2200798/"
] |
Have a look at Dotplant2, which uses Yii2 and currency conversion tables in the backend, specifically these files.
1. components/payment/[PaypalPayment.php](https://github.com/DevGroup-ru/dotplant2/blob/master/application/components/payment/PayPalPayment.php) and 2. their [CurrencyHelper.php](https://github.com/DevGroup-ru/dotplant2/blob/master/application/modules/shop/helpers/CurrencyHelper.php) at modules/shop/helpers. Here are some code snippets.
PaypalPayment
```
public function content()
{
/** @var Order $order */
$order = $this->order;
$order->calculate();
$payer = (new Payer())->setPaymentMethod('paypal');
$priceSubTotal = 0;
/** @var ItemList $itemList */
$itemList = array_reduce($order->items, function($result, $item) use (&$priceSubTotal) {
/** @var OrderItem $item */
/** @var Product $product */
$product = $item->product;
$price = CurrencyHelper::convertFromMainCurrency($item->price_per_pcs, $this->currency);
$priceSubTotal = $priceSubTotal + ($price * $item->quantity);
/** @var ItemList $result */
return $result->addItem(
(new Item())
->setName($product->name)
->setCurrency($this->currency->iso_code)
->setPrice($price)
->setQuantity($item->quantity)
->setUrl(Url::toRoute([
'@product',
'model' => $product,
'category_group_id' => $product->category->category_group_id
], true))
);
}, new ItemList());
$priceTotal = CurrencyHelper::convertFromMainCurrency($order->total_price, $this->currency);
$details = (new Details())
->setShipping($priceTotal - $priceSubTotal)
->setSubtotal($priceSubTotal)
->setTax(0);
$amount = (new Amount())
->setCurrency($this->currency->iso_code)
->setTotal($priceTotal)
->setDetails($details);
$transaction = (new Transaction())
->setAmount($amount)
->setItemList($itemList)
->setDescription($this->transactionDescription)
->setInvoiceNumber($this->transaction->id);
$urls = (new RedirectUrls())
->setReturnUrl($this->createResultUrl(['id' => $this->order->payment_type_id]))
->setCancelUrl($this->createFailUrl());
$payment = (new Payment())
->setIntent('sale')
->setPayer($payer)
->setTransactions([$transaction])
->setRedirectUrls($urls);
$link = null;
try {
$link = $payment->create($this->apiContext)->getApprovalLink();
} catch (\Exception $e) {
$link = null;
}
return $this->render('paypal', [
'order' => $order,
'transaction' => $this->transaction,
'approvalLink' => $link,
]);
}
```
CurrencyHelper
```
<?php
namespace app\modules\shop\helpers;
use app\modules\shop\models\Currency;
use app\modules\shop\models\UserPreferences;
class CurrencyHelper
{
/**
* @var Currency $userCurrency
* @var Currency $mainCurrency
*/
static protected $userCurrency = null;
static protected $mainCurrency = null;
/**
* @return Currency
*/
static public function getUserCurrency()
{
if (null === static::$userCurrency) {
static::$userCurrency = static::findCurrencyByIso(UserPreferences::preferences()->userCurrency);
}
return static::$userCurrency;
}
/**
* @param Currency $userCurrency
* @return Currency
*/
static public function setUserCurrency(Currency $userCurrency)
{
return static::$userCurrency = $userCurrency;
}
/**
* @return Currency
*/
static public function getMainCurrency()
{
return null === static::$mainCurrency
? static::$mainCurrency = Currency::getMainCurrency()
: static::$mainCurrency;
}
/**
* @param string $code
* @param bool|true $useMainCurrency
* @return Currency
*/
static public function findCurrencyByIso($code)
{
$currency = Currency::find()->where(['iso_code' => $code])->one();
$currency = null === $currency ? static::getMainCurrency() : $currency;
return $currency;
}
/**
* @param float|int $input
* @param Currency $from
* @param Currency $to
* @return float|int
*/
static public function convertCurrencies($input = 0, Currency $from, Currency $to)
{
if (0 === $input) {
return $input;
}
if ($from->id !== $to->id) {
$main = static::getMainCurrency();
if ($main->id === $from->id && $main->id !== $to->id) {
$input = $input / $to->convert_rate * $to->convert_nominal;
} elseif ($main->id !== $from->id && $main->id === $to->id) {
$input = $input / $from->convert_nominal * $from->convert_rate;
} else {
$input = $input / $from->convert_nominal * $from->convert_rate;
$input = $input / $to->convert_rate * $to->convert_nominal;
}
}
return $to->formatWithoutFormatString($input);
}
/**
* @param float|int $input
* @param Currency $from
* @return float|int
*/
static public function convertToUserCurrency($input = 0, Currency $from)
{
return static::convertCurrencies($input, $from, static::getUserCurrency());
}
/**
* @param float|int $input
* @param Currency $from
* @return float|int
*/
static public function convertToMainCurrency($input = 0, Currency $from)
{
return static::convertCurrencies($input, $from, static::getMainCurrency());
}
/**
* @param float|int $input
* @param Currency $to
* @return float|int
*/
static public function convertFromMainCurrency($input = 0, Currency $to)
{
return static::convertCurrencies($input, static::getMainCurrency(), $to);
}
/**
* @param Currency $currency
* @param string|null $locale
* @return string
*/
static public function getCurrencySymbol(Currency $currency, $locale = null)
{
$locale = null === $locale ? \Yii::$app->language : $locale;
$result = '';
try {
$fake = $locale . '@currency=' . $currency->iso_code;
$fmt = new \NumberFormatter($fake, \NumberFormatter::CURRENCY);
$result = $fmt->getSymbol(\NumberFormatter::CURRENCY_SYMBOL);
} catch (\Exception $e) {
$result = preg_replace('%[\d\s,]%i', '', $currency->format(0));
}
return $result;
}
}
```
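The conversion itself is just cross-rate arithmetic through the main currency. A Python sketch of the idea behind `convertCurrencies` above (the rates and nominals here are hypothetical, not DotPlant2 values):

```python
def convert(amount, from_rate, from_nominal, to_rate, to_nominal):
    # Normalize to the main currency, then into the target currency,
    # mirroring the final elif/else branch of convertCurrencies() above.
    in_main = amount / from_nominal * from_rate
    return in_main / to_rate * to_nominal

# e.g. 100 units of a currency worth 2 main-units (per nominal 1),
# into a currency worth 4 main-units (per nominal 1):
result = convert(100, 2, 1, 4, 1)
```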
|
I got this from PayPal support:
>
> Unfortunately domestic transaction is not possible to receive in USD and similarly international transaction should be in USD. There is no way to have single currency for both the transactions. Thanks and regards,
>
>
>
| 13,009
|
4,214,031
|
I have a PHP website and I would like to execute a very long Python script in the background (300 MB of memory and 100 seconds). The process communication is done via a database: when the Python script finishes its job, it updates a field in the database, and then the website renders some graphics based on the results of the Python script.
I can execute the Python script "manually" from bash (from any current directory) and it works. I would like to integrate it into PHP, and I tried the function shell\_exec:
`shell_exec("python /full/path/to/my/script")` but it's not working (I don't see any output)
Do you have any ideas or suggestions? It is worth mentioning that the Python script is a wrapper over other polyglot tools (Java mixed with C++).
Thanks!
|
2010/11/18
|
[
"https://Stackoverflow.com/questions/4214031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2203413/"
] |
First off, [set\_time\_limit(0);](http://php.net/set_time_limit) will make your script run forever, so the timeout shouldn't be an issue. Second, any \*exec call in PHP does NOT use the PATH by default (this might depend on configuration), so your script will exit without giving any info on the problem; quite often it turns out it can't find the program, in this case python. So change it to:
```
shell_exec("/full/path/to/python /full/path/to/my/script");
```
If your python script is running on it's own without problems, then it's very likely this is the problem. As for the memory, I'm pretty sure PHP won't use the same memory python is using. So if it's using 300MB PHP should stay at default (say 1MB) and just wait for the end of shell\_exec.
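On the Python side, the database handshake the question describes can be as simple as updating a status flag as the very last step, so PHP can poll it. A sketch with sqlite3 and hypothetical table/column names:

```python
import sqlite3

conn = sqlite3.connect(':memory:')  # stands in for the shared database
conn.execute('CREATE TABLE jobs (id INTEGER PRIMARY KEY, done INTEGER)')
conn.execute('INSERT INTO jobs (id, done) VALUES (1, 0)')

# ... the long-running computation would happen here ...

conn.execute('UPDATE jobs SET done = 1 WHERE id = 1')  # PHP polls this flag
conn.commit()
done = conn.execute('SELECT done FROM jobs WHERE id = 1').fetchone()[0]
```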
|
I found that, when I tried this, the issue was simply that I had not compiled the source on the server I was running it on. Compiling on your local machine and then uploading to your server can corrupt it in some way; shell\_exec() should work if you compile the source you are trying to run on the same server where you run the script.
| 13,010
|
11,533,939
|
If I have more than one class in a python script, how do I call a function from the first class in the second class?
Here is an Example:
```
Class class1():
def function1():
blah blah blah
Class class2():
*How do I call function1 to here from class1*
```
|
2012/07/18
|
[
"https://Stackoverflow.com/questions/11533939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1481620/"
] |
Functions in classes are also known as methods, and they are invoked on objects. The way to call a method in class1 from class2 is to have an instance of class1:
```
class Class2(object):
def __init__(self):
self.c1 = Class1()
self.c1.function1()
```
|
The cleanest way is probably through inheritance:
```
class Base(object):
def function1(self):
# blah blah blah
class Class1(Base):
def a_method(self):
self.function1() # works
class Class2(Base):
def some_method(self):
self.function1() # works
c1 = Class1()
c1.function1() # works
c2 = Class2()
c2.function1() # works
```
| 13,020
|
27,296,373
|
I'm importing data with XLRD. The overall project involves pulling data from existing Excel files, but there are merged cells. Essentially, an operator is accounting for time on one of 3 shifts. As such, for each date on the grid they're working with, there are 3 columns (one for each shift). I want to change the UI as little as possible. As such, when I pull the list of the dates covered on the sheet, I want it to have the same date 3 times in a row, once for each shift.
Here's what I get:
```
[41665.0, '', '', 41666.0, '', '', 41667.0, '', '', 41668.0, '', '', 41669.0, '', '', 41670.0, '', '', 41671.0, '', '']
```
Here's what I want:
```
[41665.0, 41665.0, 41665.0, 41666.0, 41666.0, 41666.0, 41667.0, 41667.0, 41667.0, 41668.0, 41668.0, 41668.0, 41669.0, 41669.0, 41669.0, 41670.0, 41670.0, 41670.0, 41671.0, 41671.0, 41671.0]
```
I got this list with:
```
dates = temp_sheet.row_values(2)[2:]
```
I don't know if the dates will always show up as floats. I was thinking something along the lines of:
```
dates = [i for i in dates if not i.isspace() else j]
```
It throws an error when it gets to the first value, which is a float. .isnumeric() is a string operator...there has to be a much more elegant way. [I found this answer](https://stackoverflow.com/questions/3441358/python-most-pythonic-way-to-check-if-an-object-is-a-number) but it seems much more involved than what I believe I need.
|
2014/12/04
|
[
"https://Stackoverflow.com/questions/27296373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2711535/"
] |
I don't think this is a good application for a comprehension; an explicit loop would be better:
```
>>> lst = [41665.0, '', '', 41666.0, '', '', 41667.0, '', '', 41668.0, '', '', 41669.0, '', '', 41670.0, '', '', 41671.0, '', '']
>>> temp = lst[0]
>>> nlst = []
>>> for i in lst:
...     if i: temp = i
...     nlst.append(temp)
...
>>> nlst
[41665.0, 41665.0, 41665.0, 41666.0, 41666.0, 41666.0, 41667.0, 41667.0, 41667.0, 41668.0, 41668.0, 41668.0, 41669.0, 41669.0, 41669.0, 41670.0, 41670.0, 41670.0, 41671.0, 41671.0, 41671.0]
```
|
Another solution,
```
>>> l = [41665.0, '', '', 41666.0, '', '', 41667.0, '', '', 41668.0, '', '', 41669.0, '', '', 41670.0, '', '', 41671.0, '', '']
>>> lst = [item for item in l if item != '']
>>> [ v for item in zip(lst,lst,lst) for v in item ]
[41665.0, 41665.0, 41665.0, 41666.0, 41666.0, 41666.0, 41667.0, 41667.0, 41667.0, 41668.0, 41668.0, 41668.0, 41669.0, 41669.0, 41669.0, 41670.0, 41670.0, 41670.0, 41671.0, 41671.0, 41671.0]
```
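If the number of shifts per date is ever not exactly 3, the same idea generalizes; a sketch that assumes every date is followed by the same number of blanks:

```python
l = [41665.0, '', '', 41666.0, '', '', 41667.0, '', '']
dates = [item for item in l if item != '']
n_repeat = len(l) // len(dates)  # 3 shifts per date here
out = [d for d in dates for _ in range(n_repeat)]
```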
| 13,021
|
38,971,465
|
I want to use the stack method to get the reversed string in this string-reversal exercise.
"Write a function revstring(mystr) that uses a stack to reverse the characters in a string."
This is my code.
```
from pythonds.basic.stack import Stack
def revstring(mystr):
myStack = Stack() //this is how i have myStack
for ch in mystr: //looping through characters in my string
myStack.push(ch) //push the characters to form a stack
revstr = '' //form an empty reverse string
while not myStack.isEmpty():
revstr = revstr + myStack.pop() //adding my characters to the empty reverse string in reverse order
return revstr
print revstring("martin")
```
**The output seems to print out only the first letter of mystr, that is "m".**
Why is this?
|
2016/08/16
|
[
"https://Stackoverflow.com/questions/38971465",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6712244/"
] |
Here are 3 solutions to the same problem; just pick one:
**1ST SOLUTION**
Fixing your solution, you almost got it, you just need to indent properly
your blocks like this:
```
from pythonds.basic.stack import Stack
def revstring(mystr):
myStack = Stack() # this is how i have myStack
for ch in mystr: # looping through characters in my string
myStack.push(ch) # push the characters to form a stack
revstr = '' # form an empty reverse string
while not myStack.isEmpty():
# adding my characters to the empty reverse string in reverse order
revstr = revstr + myStack.pop()
return revstr
print revstring("martin")
```
**2ND SOLUTION**
This one is structurally the same as yours, but instead of using a custom stack, it's just using the built-in python list:
```
def revstring(mystr):
myStack = [] # this is how i have myStack
for ch in mystr:
myStack.append(ch) # push the characters to form a stack
revstr = '' # form an empty reverse string
while len(myStack):
# adding my characters to the empty reverse string in reverse order
revstr = revstr + myStack.pop()
return revstr
print revstring("martin")
```
**3RD SOLUTION**
To reverse strings, just use this pythonic way :)
```
print "martin"[::-1]
```
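For completeness, the same push/pop idea also works with `collections.deque` from the standard library, if `pythonds` is not available (a sketch):

```python
from collections import deque

def revstring(mystr):
    stack = deque()
    for ch in mystr:
        stack.append(ch)  # push each character
    # pop in LIFO order until the stack is empty
    return ''.join(stack.pop() for _ in range(len(stack)))

result = revstring("martin")
```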
|
* the `while` should not be in the `for`
* the `return` should be outside, not in the `while`
**code:**
```
from pythonds.basic.stack import Stack
def revstring(mystr):
myStack = Stack() # this is how i have myStack
for ch in mystr: # looping through characters in my string
myStack.push(ch) # push the characters to form a stack
revstr = '' # form an empty reverse string
while not myStack.isEmpty():
# adding my characters to the empty reverse string in reverse order
revstr = revstr + myStack.pop()
return revstr
print revstring("martin")
```
| 13,022
|
69,897,646
|
I'm using seaborn.displot to display a distribution of scores for a group of participants.
Is it possible to have the y axis show an actual percentage (example below)?
This is required by the audience for the data.
Currently it is done in Excel, but it would be more useful in Python.
```py
import seaborn as sns
data = sns.load_dataset('titanic')
p = sns.displot(data=data, x='age', hue='sex', height=4, kind='kde')
```
[](https://i.stack.imgur.com/IYZGt.png)
Desired Format
--------------
[](https://i.stack.imgur.com/gk3Cq.png)
|
2021/11/09
|
[
"https://Stackoverflow.com/questions/69897646",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2015461/"
] |
As mentioned by @JohanC, the y axis for a KDE is a [density](https://en.wikipedia.org/wiki/Probability_density_function), not a proportion, so it does not make sense to convert it to a percentage.
You'd have two options. One would be to plot a KDE curve over a histogram with histogram counts expressed as percentages:
```
sns.displot(
data=tips, x="total_bill", hue="sex",
kind="hist", stat="percent", kde=True,
)
```
[](https://i.stack.imgur.com/GnJHF.png)
But your "desired plot" actually doesn't look like a density at all, it looks like a histogram plotted with a line instead of bars. You can get that with `element="poly"`:
```
sns.displot(
data=tips, x="total_bill", hue="sex",
kind="hist", stat="percent", element="poly", fill=False,
)
```
[](https://i.stack.imgur.com/Hw2tH.png)
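For intuition, the `stat="percent"` normalization is easy to reproduce by hand (a numpy sketch with made-up data): each bar is its bin count as a share of the total, so the bars always sum to 100%.

```python
import numpy as np

data = np.array([1.0, 1.2, 2.5, 2.7, 3.1, 9.0])
counts, edges = np.histogram(data, bins=3)
percent = counts / counts.sum() * 100  # what each histogram bar reports
total = percent.sum()
```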
|
* [`seaborn.displot`](https://seaborn.pydata.org/generated/seaborn.displot.html) is a figure-level plot providing access to several approaches for visualizing the univariate or bivariate distribution of data ([histplot](https://seaborn.pydata.org/generated/seaborn.histplot.html), [kdeplot](https://seaborn.pydata.org/generated/seaborn.kdeplot.html), [ecdfplot](https://seaborn.pydata.org/generated/seaborn.ecdfplot.html))
* See [How to plot percentage with seaborn distplot / histplot / displot](https://stackoverflow.com/a/68850867/7758804)
* [seaborn histplot and displot output doesn't match](https://stackoverflow.com/q/68865538/7758804) is relevant for the settings of the `common_bins` and `common_norm`.
* **Tested in `python 3.8.12`, `pandas 1.3.4`, `matplotlib 3.4.3`, `seaborn 0.11.2`**
+ `data` is a pandas dataframe, and `seaborn` is an API for `matplotlib`.
`kind='hist'`: [`seaborn.histplot`](https://seaborn.pydata.org/generated/seaborn.histplot.html)
-----------------------------------------------------------------------------------------------
* Use `stat='percent'` → available from `seaborn 0.11.2`
```py
import seaborn as sns
from matplotlib.ticker import PercentFormatter
data = sns.load_dataset('titanic')
p = sns.displot(data=data, x='age', stat='percent', hue='sex', height=4, kde=True, kind='hist')
```
[](https://i.stack.imgur.com/Tui3d.png)
Don't do the following, as explained
====================================
`kind='kde'`: [`seaborn.kdeplot`](https://seaborn.pydata.org/generated/seaborn.kdeplot.html)
--------------------------------------------------------------------------------------------
* As per [mwaskom](https://stackoverflow.com/users/1533576/mwaskom), the creator of `seaborn`: you can wrap a percent formatter around a density value (as shown in the following code), **but it's incorrect because the density is not a proportion** (you may end up with values > 100%).
* As per [JohanC](https://stackoverflow.com/users/12046409/johanc), you can't view the y-axis as a percentage, it is a [density](https://en.wikipedia.org/wiki/Probability_density_function). The density can go arbitrarily high or low, depending on the x-axis. **Formatting it as a percentage is a mistake.**
* **I will leave this as part of the answer as an explanation, otherwise it will just be posted by someone else.**
* Using [`matplotlib.ticker.PercentFormatter`](https://matplotlib.org/stable/api/ticker_api.html#matplotlib.ticker.PercentFormatter) to convert the axis values.
```py
import seaborn as sns
from matplotlib.ticker import PercentFormatter
data = sns.load_dataset('titanic')
p = sns.displot(data=data, x='age', hue='sex', height=3, kind='kde')
p.axes.flat[0].yaxis.set_major_formatter(PercentFormatter(1))
```
[](https://i.stack.imgur.com/NUjnt.png)
| 13,023
|
73,098,560
|
I am trying to use pytorch. When I do `import torch` I get:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-2-eb42ca6e4af3> in <module>
----> 1 import torch
C:\Big_Data_app\Anaconda3\lib\site-packages\torch\__init__.py in <module>
124 err = ctypes.WinError(last_error)
125 err.strerror += f' Error loading "{dll}" or one of its dependencies.'
--> 126 raise err
127 elif res is not None:
128 is_loaded = True
OSError: [WinError 182] <no description> Error loading "C:\Big_Data_app\Anaconda3\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
```
Not sure what happened.
The truth is, I installed pytorch around the end of last year (I think).
I don't remember how; I installed it because I wanted to try it out.
But I guess I never used it after I installed it; I don't remember running any script with pytorch.
Now that I have started to use it, I get this error, and I have no clue what I should check first.
That means I don't know which modules, frameworks, drivers, or apps are related to pytorch.
So I don't know where to begin checking whether any other module might cause this error.
If anyone knows how to solve this, or where to start checking, please let me know.
Thank you.
My pytorch version is 1.11.0
OS: window 10
|
2022/07/24
|
[
"https://Stackoverflow.com/questions/73098560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14733291/"
] |
I solved the problem.
Just reinstall your Anaconda.
**!!Warning!!: you will lose your installed libraries.**
Referring solution:
[Problem with Torch 1.11](https://discuss.pytorch.org/t/problem-with-torch-1-11/146885)
|
The version may not be exactly the same as yours, but maybe [this question](https://stackoverflow.com/questions/63187161/error-while-import-pytorch-module-the-specified-module-could-not-be-found) asked on 2020-09-04 helps.
| 13,024
|
56,303,279
|
When I run this code:
```
#!/usr/bin/env python
import scapy.all as scapy
from scapy_http import http
def sniff(interface):
scapy.sniff(iface=interface, store=False, prn=process_sniffed_packet)
def process_sniffed_packet(packet):
if packet.haslayer(http.HTTPRequest):
print(packet)
sniff("eth0")
```
I got:
```
Traceback (most recent call last): File "packet_sniffer.py", line 16, in <module>
sniff("eth0") File "packet_sniffer.py", line 8, in sniff
scapy.sniff(iface=interface, store=False, prn=process_sniffed_packet) File "/usr/local/lib/python3.7/dist-packages/scapy/sendrecv.py", line 886, in sniff
r = prn(p) File "packet_sniffer.py", line 13, in process_sniffed_packet
print(packet) File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 438, in
__str__
return str(self.build()) File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 556, in build
p = self.do_build() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 541, in do_build
pay = self.do_build_payload() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 528, in do_build_payload
return self.payload.do_build() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 541, in do_build
pay = self.do_build_payload() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 528, in do_build_payload
return self.payload.do_build() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 541, in do_build
pay = self.do_build_payload() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 528, in do_build_payload
return self.payload.do_build() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 541, in do_build
pay = self.do_build_payload() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 528, in do_build_payload
return self.payload.do_build() File "/usr/local/lib/python3.7/dist-packages/scapy/packet.py", line 538, in do_build
pkt = self.self_build() File "/usr/local/lib/python3.7/dist-packages/scapy_http/http.py", line 179, in self_build
return _self_build(self, field_pos_list) File "/usr/local/lib/python3.7/dist-packages/scapy_http/http.py", line 101, in _self_build
val = _get_field_value(obj, f.name) File "/usr/local/lib/python3.7/dist-packages/scapy_http/http.py", line 74, in _get_field_value
headers = _parse_headers(val) File "/usr/local/lib/python3.7/dist-packages/scapy_http/http.py", line 18, in _parse_headers
headers = s.split("\r\n") TypeError: a bytes-like object is required, not 'str'
```
What should I do?
|
2019/05/25
|
[
"https://Stackoverflow.com/questions/56303279",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11533009/"
] |
You can fix this problem by using `python2` instead of `python3`. If you don't want to change your Python version, then you'll need to make some changes to `scapy-http`'s library code. From your traceback I can see that the file is located at `/usr/local/lib/python3.7/dist-packages/scapy_http/http.py`. Now open that file with any text editor.
Change *line 18* from:
```py
headers = s.split("\r\n")
```
to:
```py
try:
headers = s.split("\r\n")
except TypeError as err:
headers = s.split(b"\r\n")
```
Then change *line 109* from:
```py
p = f.addfield(obj, p, val + separator)
```
to:
```py
try:
p = f.addfield(obj, p, val + separator)
except TypeError as err:
p = f.addfield(obj, p, str(val) + str(separator))
```
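The root cause is Python 3's strict bytes/str split; a minimal reproduction of both the failure and the guarded fix, independent of scapy (sketch):

```python
payload = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"

try:
    headers = payload.split("\r\n")    # what scapy_http attempts: str pattern on bytes
except TypeError:
    headers = payload.split(b"\r\n")   # a bytes separator for bytes data
```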
|
You're using `scapy_http` in addition to `scapy`.
`scapy_http` hasn't been updated in a while, and has poor compatibility with Python 3+.
Feel free to have a look at <https://github.com/secdev/scapy/pull/1925>: it's not merged into Scapy yet (should be sometime soon, I guess), but it's an updated & improved port of `scapy_http`, made for the latest `scapy` version.
| 13,027
|
65,977,278
|
I have an image with a time displayed in it. I need to convert it using Python from
this:
[](https://i.stack.imgur.com/j3iIp.png)
to this
[](https://i.stack.imgur.com/d61GK.png)
In short, I need to paint the text in that image black using python. I think maybe opencv is useful for this, But not sure. And I need to do that for any time that will be in that same type of image(like the image before editing, just different digits).
I think we can use canny to detect edges of the digits and the dots and then paint the area between them black. I don't know the perfect way of doing this, but I think it might work. Hoping for a solution. Thanks in advance.
|
2021/01/31
|
[
"https://Stackoverflow.com/questions/65977278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13994535/"
] |
Try this out.
```py
sampleList = [
'CustomerA', 'Yes', 'No', 'No',
'CustomerB', 'No', 'No', 'No',
'CustomerC', 'Yes', 'Yes', 'No'
]
preferredOutput = [
tuple(sampleList[n : n + 4])
for n in range(0, len(sampleList), 4)
]
print(preferredOutput)
# OUTPUT (IN PRETTY FORM)
#
# [
# ('CustomerA', 'Yes', 'No', 'No'),
# ('CustomerB', 'No', 'No', 'No'),
# ('CustomerC', 'Yes', 'Yes', 'No')
# ]
```
|
You can use list comprehension `output=[tuple(sampleList[4*i:4*i+4]) for i in range(3)]`
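The `range(3)` in the one-liner assumes exactly three customers; stepping by 4 over the list length handles any multiple-of-4 length. A runnable sketch:

```python
sampleList = [
    'CustomerA', 'Yes', 'No', 'No',
    'CustomerB', 'No', 'No', 'No',
    'CustomerC', 'Yes', 'Yes', 'No',
]

# One tuple per 4-element chunk, regardless of how many customers there are
output = [tuple(sampleList[i:i + 4]) for i in range(0, len(sampleList), 4)]
print(output)
# [('CustomerA', 'Yes', 'No', 'No'), ('CustomerB', 'No', 'No', 'No'), ('CustomerC', 'Yes', 'Yes', 'No')]
```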
| 13,028
|
49,220,569
|
a=[('<https://www.google.co.in/search?q=kite+zerodha&oq=kite%2Cz&aqs=chrome.1.69i57j0l5.4766j0j7&sourceid=chrome&ie=UTF-8>', 1), ('<https://kite.zerodha.com/>', 1), ('<https://kite.trade/connect/login?api_key=xyz>', 1)]
How do I get the value of `api_key` (which is `xyz`) from the list `a` above?
Please help me write the code in Python. Thank you.
|
2018/03/11
|
[
"https://Stackoverflow.com/questions/49220569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9337404/"
] |
Just loop over all elements and parse each URL to get the api\_key; have a look at the code below:
```
from urlparse import urlparse, parse_qs
a=[('https://www.google.co.in/search?q=kite+zerodha&oq=kite%2Cz&aqs=chrome.1.69i57j0l5.4766j0j7&sourceid=chrome&ie=UTF-8', 1), ('https://kite.zerodha.com/', 1), ('https://kite.trade/connect/login?api_key=xyz', 1)]
for value in a:
    if len(value) > 1:
        url = value[0]
        if 'api_key' in parse_qs(urlparse(url).query).keys():
            print parse_qs(urlparse(url).query)['api_key'][0]
```
output:
```
xyz
```
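The answer above uses Python 2's `urlparse` module; in Python 3 the same approach works via `urllib.parse`. A sketch with the same sample data:

```python
from urllib.parse import urlparse, parse_qs

a = [('https://www.google.co.in/search?q=kite+zerodha', 1),
     ('https://kite.zerodha.com/', 1),
     ('https://kite.trade/connect/login?api_key=xyz', 1)]

api_keys = []
for value in a:
    qs = parse_qs(urlparse(value[0]).query)  # query string -> {param: [values]}
    if 'api_key' in qs:
        api_keys.append(qs['api_key'][0])

print(api_keys)  # ['xyz']
```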
|
This will work too. Hope this helps.
>
> * Find all items that has the keyword 'api\_key' (in url[0]),
> * Split it into columns, delimited by '=' (split by '=')
> * The last entry ([-1]) will be the answer (xyz).
>
>
>
```
a=[('https://www.google.co.in/search?q=kite+zerodha&oq=kite%2Cz&aqs=chrome.1.69i57j0l5.4766j0j7&sourceid=chrome&ie=UTF-8', 1), ('https://kite.zerodha.com/', 1), ('https://kite.trade/connect/login?api_key=xyz', 1)]
for ky in [url for url in a if 'api_key' in url[0]]:
    print(ky[0].split('=')[-1])

# Sample result: xyz
```
| 13,029
|
55,905,144
|
Here is an image showing Python scope activity (version 3.6 and target x64):
Python Scope
[](https://i.stack.imgur.com/QqZbP.png)
The main problem is the relation between both invoke python methods, the first one is used to start the class object, and the second one to access a method of that class. Here is an image of the first invoke python properties:
Invoke Python init method
[](https://i.stack.imgur.com/ekTvW.png)
And the getNumberPlusOne activity call:
Invoke Python getNumberPlusOne method
[](https://i.stack.imgur.com/2ZmUz.png)
The python code being executed:
```py
class ExampleClass:
    def __init__(self, t, n):
        self.text = t
        self.number = n

    def getNumberPlusOne(self):
        return (self.number + 1)
```
And finally, the error when executing the second Invoke Python Method:
>
> An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is:
> System.InvalidOperationException: Error invoking Python method ----> System.Runtime.Serialization.InvalidDataContractException: Type 'UiPath.Python.PythonObject' cannot be serialized. Consider marking it with the DataContractAttribute attribute, and marking all of its members you want serialized with the DataMemberAttribute attribute. If the type is a collection, consider marking it with the CollectionDataContractAttribute. See the Microsoft .NET Framework documentation for other supported types.
>
>
>
Any idea about where is the mistake and how to interact with the output object created in the init method?
|
2019/04/29
|
[
"https://Stackoverflow.com/questions/55905144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11425716/"
] |
I believe that this activity was designed with simple scripts in mind, not with entire classes. [Here's](https://forum.uipath.com/t/python-script-with-class-and-object/111110/3) an article on their Community Forum where user Sergiu.Wittenberger goes into more details.
Let's start with the Load Python Script activity:
[](https://i.stack.imgur.com/LnAeb.png)
In my case the local variable "pyScript" is a pointer to the python object, i.e. an instance of `ExampleClass`.
Now, there is the Invoke Python Method activity - this one allows us to call a method by name. It seems however that methods on the class are inaccessible to UiPath - you can't just type `pyScript.MethodName()`.
[](https://i.stack.imgur.com/hkfu0.png)
So it seems that we can't access class methods (please prove me wrong here!), but there's a workaround as shown by Sergiu. In your case, you would add another method outside your class in order to access or manipulate your object:
```py
class ExampleClass:
    def __init__(self, t, n):
        self.text = t
        self.number = n

    def getNumberPlusOne(self):
        return (self.number + 1)

foo = ExampleClass("bar", 42)

def get_number_plus_one():
    return foo.getNumberPlusOne()
```
Note that this also means that the object is instantiated within the very same file: `foo`. At this point this seems to be the only option to interact with an object -- again, I'd hope somebody can prove me wrong.
For the sake of completeness, here's the result:
[](https://i.stack.imgur.com/EvpVG.png)
|
I would like to add to the answer above: make sure that the imports you use are in the global site-packages, not in a venv, as Studio doesn't have access to that.
Moreover, always add this:
```
import os
import sys
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
```
to the beginning of your code. Again, a limitation of the implementation. (docs here: <https://docs.uipath.com/activities/docs/load-script>)
Doing this you might be able to do more complicated structures I think, but I haven't tested this out.
| 13,030
|
59,317,919
|
Training an image classifier using `.fit_generator()` or `.fit()` and passing a dictionary to `class_weight=` as an argument.
I never got errors in TF1.x but in 2.1 I get the following output when starting training:
```none
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
```
What does it mean to coerce something from `...` to `['...']`?
The source for this warning on `tensorflow`'s repo is [here](https://github.com/tensorflow/tensorflow/blob/d1574c209396d38ebe3c20b3ba1cb4c8d82170ae/tensorflow/python/keras/engine/data_adapter.py#L1090), comments placed are:
>
> Attempt to coerce sample\_weight\_modes to the target structure. This implicitly depends on the fact that Model flattens outputs for its internal representation.
>
>
>
|
2019/12/13
|
[
"https://Stackoverflow.com/questions/59317919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1838257/"
] |
This seems like a bogus message. I get the same warning message after upgrading to TensorFlow 2.1, but I do not use any class weights or sample weights at all. I do use a generator that returns a tuple like this:
```
return inputs, targets
```
And now I just changed it to the following to make the warning go away:
```
return inputs, targets, [None]
```
I don't know if this is relevant, but my model uses 3 inputs, so my `inputs` variable is actually a list of 3 numpy arrays. `targets` is just a single numpy array.
In any case, it's just a warning. The training works fine either way.
Edit for TensorFlow 2.2:
========================
This bug seems to have been fixed in TensorFlow 2.2, which is great. However the fix above will fail in TF 2.2, because it will try to get the shape of the sample weights, which will obviously fail with `AttributeError: 'NoneType' object has no attribute 'shape'`. So undo the above fix when upgrading to 2.2.
|
I have taken your Gist and installed Tensorflow 2.0, instead of TFA and it worked without any such Warning.
Here is the [Gist](https://colab.sandbox.google.com/gist/rmothukuru/e9d65d1119d90c1ec0d435249f3f5d01/untitled2.ipynb) of the complete code. Code for installing the Tensorflow is shown below:
```
!pip install tensorflow==2.0
```
Screenshot of the successful execution is shown below:
[](https://i.stack.imgur.com/LCNRJ.png)
**Update:** This bug is fixed in **`Tensorflow Version 2.2.`**
| 13,031
|
21,083,760
|
I have two files that are tab delimited.I need to compare file 1 column 3 to file 2 column 1 .If there is a match I need to write column 2 of file 2 next to the matching line in file 1.here is a sample of my file:
file 1:
```
a rao rocky1 beta
b rao buzzy2 beta
c Rachel rocky2 alpha
```
file 2:
```
rocky1 highlightpath
rimper2 darkenpath
rocky2 greenpath
```
output:
new file:
```
a rao rocky1 beta highlightpath
b rao buzzy2 beta
c Rachel rocky2 alpha greenpath
```
The problem is that file 1 is huge! File 2 is also big, but not as much.
So far I tried an awk command, and it worked partially. What I mean is that the number of lines in file 1 and in the output file (newfile) should be the same, but that's not what I got: there is a difference of 20 lines.
```
awk 'FNR==NR{a[$3]=$0;next}{if($1 in a){p=$1;$1="";print a[p],$0}}' file1 file2 > newfile
```
So I thought I could try Python to do it, but I am a novice at Python. All I know so far is that I would like to make a dictionary from file 1 and file 2 and compare them. I know how to read a file into a dictionary, and then I am blank. Any help and suggestions with the code would be appreciated.
Thanks
|
2014/01/13
|
[
"https://Stackoverflow.com/questions/21083760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2464553/"
] |
```
import sys
# Usage: python SCRIPT.py FILE1 FILE2 > OUTPUT
file1, file2 = sys.argv[1:3]
# Store info from the smaller file in a dict.
d = {}
with open(file2) as fh:
    for line in fh:
        k, v = line.split()
        d[k] = v

# Process the bigger file line-by-line, printing to standard output.
with open(file1) as fh:
    for line in fh:
        line = line.rstrip()
        k = line.split()[2]
        if k in d:
            print line, d[k]
        else:
            print line
```
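Using the sample data from the question, the dict-lookup approach can be sketched entirely in memory (hypothetical lists stand in for the file handles; `split()` handles both tabs and spaces here):

```python
# file 2: map key (column 1) -> value to append (column 2)
d = {}
for line in ["rocky1 highlightpath", "rimper2 darkenpath", "rocky2 greenpath"]:
    k, v = line.split()
    d[k] = v

# file 1: append the mapped value whenever column 3 has a match
result = []
for line in ["a rao rocky1 beta", "b rao buzzy2 beta", "c Rachel rocky2 alpha"]:
    k = line.split()[2]
    result.append(line + " " + d[k] if k in d else line)

print(result[0])  # a rao rocky1 beta highlightpath
```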
|
```
with open('outfile.txt', 'w') as outfile:
    with open('file1.txt', 'r') as f1:
        with open('file2.txt', 'r') as f2:
            for f1line in f1:
                for f2line in f2:
                    ## remove new line character at end of each line
                    f1line = f1line.rstrip()
                    f2line = f2line.rstrip()
                    ## extract column fields
                    f1col3 = f1line.split('\t')[2]
                    f2col1 = f2line.split('\t')[0]
                    ## test if fields are equal
                    if (f1col3 == f2col1):
                        outfile.write('%s\t%s\n' % (f1line, f2line.split('\t')[1]))
                    else:
                        outfile.write('%s\t\n' % (f1line))
                    break
```
* This script compares lines 1 of files 1 & 2, then lines 2 of files 1 & 2, lines 3 ... etc.
* It is good for large files; it shouldn't load everything into memory :)
| 13,037
|
56,504,180
|
I have set up a Raspberry Pi connected to an LED strip which is controllable from my phone via a Node server I have running on the RasPi. It triggers a simple python script that sets a colour.
I'm looking to expand the functionality so that I have a Python script running continuously, to which I can send colours; it will consume the new colour and display both the old and new colours side by side. I.e. the Python script can receive commands and manage state.
I've looked into whether to use a simple loop or a daemon for this, but I don't understand how to both run a script continuously and receive new commands.
Is it better to keep state in the Node server and keep sending a lot of simple commands to a basic python script or to write a more involved python script that can receive few simpler commands and continuously update the lights?
|
2019/06/08
|
[
"https://Stackoverflow.com/questions/56504180",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1635908/"
] |
Just delete keys that must be exclude for invalid user:
```
myReportsHeader = () => {
  const { valid_user } = this.props;
  const { tableHeaders } = this.props.tableHeaders
  const { labels: tableLabels } = tableHeaders
  if (!valid_user) {
    delete tableLabels['tb4']
  }
  return tableLabels
}
```
[Delete operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/delete)
|
You can first construct an object with default values. Then add additional property if the condition is true and finally return it e.g:
```
const obj = {
  tb1: tableLabels.tb1,
  tb2: tableLabels.tb2,
  tb3: tableLabels.tb3,
  tb5: tableLabels.tb5,
  tb6: tableLabels.tb6,
  tb7: tableLabels.tb7,
  tb8: tableLabels.tb8,
  tb9: tableLabels.tb9,
  tb10: tableLabels.tb10
};

if (valid_user) {
  obj.tb4 = tableLabels.tb4; // Add only if valid_user
}

return obj;
```
| 13,041
|
67,899,519
|
I have a JSON file of about 500 MB with a structure like
```
{"user": "abc@xyz.com","contact":[{"name":"Jack "John","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}
```
The issue is that when I parse this string with pandas `read_json`, I get the error
`Unexpected character found when decoding object value`
The issue happens because of the double quote `"` before `John` in the first element of the contact array.
I need help escaping this quote with sed/awk/python so that I can load the file directly into pandas.
**I am fine with ignoring the contact with name Jack as well, but I cannot ignore the complete row.**
|
2021/06/09
|
[
"https://Stackoverflow.com/questions/67899519",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9025131/"
] |
In general there is no robust way to fix this, nor to ignore any part of the row, nor even to ignore the whole row, because if a quoted string can contain quotes and can also contain `:`s and `,`s, then a messed-up string can look exactly like a valid set of fields.
Having said that, if we target only the "name" field and assume that the associated string cannot contain `,`s then with GNU awk for the 3rd arg to `match()` we can do:
```
$ cat tst.awk
{
while ( match($0,/(.*"name":")(([^,"]*"[^,"]*)+)(".*)/,a) ) {
gsub(/"/,"",a[2])
$0 = a[1] a[2] a[4]
}
print
}
```
```
$ awk -f tst.awk file
{"user": "abc@xyz.com","contact":[{"name":"Jack O'Malley","number":"+1 23456789"},{"name":"Jack Jill Smith-Jones","number":"+1 232324789"}]}
{"user": "abc@xyz.com","contact":[{"name":"X Æ A-12 Musk","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}
```
I ran the above on this input file:
```
$ cat file
{"user": "abc@xyz.com","contact":[{"name":"Jack "O'Malley","number":"+1 23456789"},{"name":"Jack "Jill "Smith-Jones","number":"+1 232324789"}]}
{"user": "abc@xyz.com","contact":[{"name":"X Æ A-12 "Musk","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}
```
and you can do the same using any awk:
```
$ cat tst.awk
{
while ( match($0,/"name":"([^,"]*"[^,"]*)+"/) ) {
tgt = substr($0,RSTART+8,RLENGTH-9)
gsub(/"/,"",tgt)
$0 = substr($0,1,RSTART+7) tgt substr($0,RSTART+RLENGTH-1)
}
print
}
```
```
$ awk -f tst.awk file
{"user": "abc@xyz.com","contact":[{"name":"Jack O'Malley","number":"+1 23456789"},{"name":"Jack Jill Smith-Jones","number":"+1 232324789"}]}
{"user": "abc@xyz.com","contact":[{"name":"X Æ A-12 Musk","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}
```
|
You can try removing that extra `"` using the regex `("[\s\w]*)(")([\s\w]*")`.
The regex tries to match any string of spaces/word characters followed by a quote and then followed again by spaces/word characters, removing that additional quote.
This will work for the given problem, but for more complex patterns you may have to tweak the regex.
Complete code:
```
import re
text_json = '{"user": "abc@xyz.com","contact":[{"name":"Jack "John","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}'
text_json = re.sub(r'("[\s\w]*)(")([\s\w]*")', r"\1\3", text_json )
print(text_json)
```
Output:
```
'{"user": "abc@xyz.com","contact":[{"name":"Jack John","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}'
```
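A quick way to check that the substitution actually yields valid JSON is to round-trip it through `json.loads` (same regex and sample data as above):

```python
import json
import re

text_json = '{"user": "abc@xyz.com","contact":[{"name":"Jack "John","number":"+1 23456789"},{"name":"Jack Jill","number":"+1 232324789"}]}'
fixed = re.sub(r'("[\s\w]*)(")([\s\w]*")', r'\1\3', text_json)

parsed = json.loads(fixed)  # raises JSONDecodeError if the string is still broken
print(parsed["contact"][0]["name"])  # Jack John
```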
| 13,045
|
50,848,226
|
Currently I am trying to run Stardew Valley from python by doing this:
```
import subprocess
subprocess.call(['cmd', 'D:\SteamR\steamapps\common\Stardew Valley\Stardew Valley.exe'])
```
However, this fails and only opens a CMD window. I have a basic understanding of how to launch programs from python, but I do not understand how to specifically open a program that is located not only in a different location, but also on a different drive.
Any help would be appreciated. Thanks!
Edit:
This is on windows 10
Stardew Valley version is the beta and is located on the D:/ drive (windows is on C:/ of course)
|
2018/06/14
|
[
"https://Stackoverflow.com/questions/50848226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4848801/"
] |
Try using the Steam command line with the game's appid:
```
subprocess.call(r"C:\Program Files (x86)\Steam\Steam.exe -applaunch 413150")
```
you can find the app id in the "web document tab" from the desktop shortcut properties
(which can be generated by right click and select create desktop shortcut in the steam library).
It will be something like this *steam://rungameid/413150*
|
You don't have to use `cmd`, you can start the `.exe` directly.
Additionally you should be aware that `\` is used to escape characters in Python strings, but should not be interpreted specially in Windows paths. Better use raw strings prefixed with `r` for Windows paths, which disable such escapes:
```
import subprocess
subprocess.call([r'D:\SteamR\steamapps\common\Stardew Valley\Stardew Valley.exe'])
```
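The difference between the two literal forms is easy to demonstrate (a sketch with a made-up path containing `\t` and `\n`):

```python
plain = 'D:\tools\new'   # \t and \n are interpreted as tab and newline characters
raw = r'D:\tools\new'    # raw string: backslashes are kept literally

print(len(plain), len(raw))  # 10 12 — two escape sequences collapsed to one char each
```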
| 13,046
|
11,108,461
|
I'm having a strange problem while trying to install the Python library `zenlib`, using its `setup.py` file. When I run the `setup.py` file, I get an import error, saying
>
> ImportError: No module named Cython.Distutils
>
>
>
but I do have such a module, and I can import it on the python command line without any trouble. Why might I be getting this import error?
I think that the problem may have to do with the fact that I am using [Enthought Python Distribution](https://www.enthought.com/product/enthought-python-distribution/), which I installed right beforehand, rather than using the Python 2.7 that came with Ubuntu 12.04.
More background:
Here's exactly what I get when trying to run setup.py:
```
enwe101@enwe101-PCL:~/zenlib/src$ sudo python setup.py install
Traceback (most recent call last):
File "setup.py", line 4, in <module>
from Cython.Distutils import build_ext
ImportError: No module named Cython.Distutils
```
But it works from the command line:
```
>>> from Cython.Distutils import build_ext
>>>
>>> from fake.package import noexist
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named fake.package
```
Note the first import worked and the second throws an error. Compare this to the first few lines of setup.py:
```
#from distutils.core import setup
from setuptools import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import os.path
```
I made sure that the Enthought Python Distribution and not the python that came with Ubuntu is what is run by default by prepending my bash $PATH environment variable by editing `~/.bashrc`, adding this as the last line:
```
export PATH=/usr/local/epd/bin:$PATH
```
and indeed `which python` spits out `/usr/local/epd/bin/python`... not knowing what else to try, I went into my site-packages directory (`/usr/local/epd/lib/python2.7/site-packages`) and gave full permissions (r,w,x) to `Cython`, `Distutils`, `build_ext.py`, and the `__init__.py` files. Probably silly to try, and it changed nothing.
Can't think of what to try next!? Any ideas?
|
2012/06/19
|
[
"https://Stackoverflow.com/questions/11108461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1467306/"
] |
Install Cython:
```
pip install cython
```
|
In the CLI-python, import sys and look what's inside sys.path
Then try to use `export PYTHONPATH=whatyougot`
| 13,049
|
38,557,849
|
A peak-finding program for a 1-D Python list, which returns a peak with its index: an index x in the list arr is a peak if (arr[x] > arr[x+1] and arr[x] > arr[x-1]).
Special cases:
Case 1: the first element. Only compare it to the second element. If arr[x] > arr[x+1], a peak is found.
Case 2: the last element. Compare it with the previous element. If arr[x] > arr[x-1], a peak is found.
Below is the code.
For some reason it is not working when the peak is at index 0. It works perfectly for a peak in the middle or at the end.
Any help is greatly appreciated.
```
import sys

def find_peak(lst):
    for x in lst:
        if x == 0 and lst[x] > lst[x+1]:
            print "Peak found at index", x
            print "Peak :", lst[x]
            return
        elif x == len(lst)-1 and lst[x] > lst[x-1]:
            print "Peak found at index", x
            print "Peak :", lst[x]
            return
        elif x > 0 and x < len(lst)-1:
            if lst[x] > lst[x+1] and lst[x] > lst[x-1]:
                print "Peak found at index", x
                print "Peak :", lst[x]
                return
    else:
        print "No peak found"

def main():
    lst = []
    for x in sys.argv[1:]:
        lst.append(int(x))
    find_peak(lst)

if __name__ == '__main__':
    main()
Anuvrats-MacBook-Air:Python anuvrattiku$ python peak_finding_1D.py 1 2 3 4
Peak found at index 3
Peak : 4
Anuvrats-MacBook-Air:Python anuvrattiku$ python peak_finding_1D.py 1 2 3 4 3
Peak found at index 3
Peak : 4
Anuvrats-MacBook-Air:Python anuvrattiku$ python peak_finding_1D.py 4 3 2 1
No peak found
```
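One likely culprit: `for x in lst` iterates over the *values* of the list, not its indices, so for input `4 3 2 1` the first `x` is the value 4, not index 0. An index-based sketch for comparison (an illustration of the idea, not a drop-in fix for the script above):

```python
def find_peak(lst):
    for i in range(len(lst)):
        left_ok = i == 0 or lst[i] > lst[i - 1]              # no left neighbour, or bigger than it
        right_ok = i == len(lst) - 1 or lst[i] > lst[i + 1]  # no right neighbour, or bigger than it
        if left_ok and right_ok:
            return i, lst[i]
    return None

print(find_peak([4, 3, 2, 1]))     # (0, 4)
print(find_peak([1, 2, 3, 4, 3]))  # (3, 4)
```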
|
2016/07/24
|
[
"https://Stackoverflow.com/questions/38557849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4254959/"
] |
For example, let us create a middleware function that will handle CORS using:
*[github.com/buaazp/fasthttprouter](https://github.com/buaazp/fasthttprouter)* and *[github.com/valyala/fasthttp](https://github.com/valyala/fasthttp)*
```
var (
	corsAllowHeaders     = "authorization"
	corsAllowMethods     = "HEAD,GET,POST,PUT,DELETE,OPTIONS"
	corsAllowOrigin      = "*"
	corsAllowCredentials = "true"
)

func CORS(next fasthttp.RequestHandler) fasthttp.RequestHandler {
	return func(ctx *fasthttp.RequestCtx) {
		ctx.Response.Header.Set("Access-Control-Allow-Credentials", corsAllowCredentials)
		ctx.Response.Header.Set("Access-Control-Allow-Headers", corsAllowHeaders)
		ctx.Response.Header.Set("Access-Control-Allow-Methods", corsAllowMethods)
		ctx.Response.Header.Set("Access-Control-Allow-Origin", corsAllowOrigin)
		next(ctx)
	}
}
```
Now we chain this middleware function on our Index handler and register it on the router.
```
func Index(ctx *fasthttp.RequestCtx) {
	fmt.Fprint(ctx, "some-api")
}

func main() {
	router := fasthttprouter.New()
	router.GET("/", Index)
	if err := fasthttp.ListenAndServe(":8181", CORS(router.Handler)); err != nil {
		log.Fatalf("Error in ListenAndServe: %s", err)
	}
}
```
|
Example of auth middleware for fasthttp & fasthttprouter (new versions)
```
type Middleware func(h fasthttp.RequestHandler) fasthttp.RequestHandler
type AuthFunc func(ctx *fasthttp.RequestCtx) (bool, error)

func NewAuthMiddleware(authFunc AuthFunc) Middleware {
	return func(h fasthttp.RequestHandler) fasthttp.RequestHandler {
		return func(ctx *fasthttp.RequestCtx) {
			result, err := authFunc(ctx)
			if err == nil && result {
				h(ctx)
			} else {
				ctx.Response.SetStatusCode(fasthttp.StatusUnauthorized)
			}
		}
	}
}

func AuthCheck(ctx *fasthttp.RequestCtx) (bool, error) {
	return false, nil // for example ;)
}

// router
authMiddleware := middleware.NewAuthMiddleware(security.AuthCheck)
...
router.GET("/protected", authMiddleware(handlers.ProtectedHandler))
```
| 13,059
|
12,902,178
|
>
> **Possible Duplicate:**
>
> [Compare two different files line by line and write the difference in third file - Python](https://stackoverflow.com/questions/7757626/compare-two-different-files-line-by-line-and-write-the-difference-in-third-file)
>
>
>
The logic in my head works something like this...
for line in import\_file check to see if it contains any of the items in Existing-user-string-list if it contains any one of the items from that list then delete that line for the file.
```
filenew = open('new-user', 'r')
filexist = open('existing-user', 'r')
fileresult = open('result-file', 'r+')
xlines = filexist.readlines()
newlines = filenew.readlines()
for item in newlines:
if item contains an item from xlines
break
else fileresult.write(item)
filenew.close()
filexist.close()
fileresult.close()
```
I know this code is all jacked up but perhaps you can point me in the right direction.
Thanks!
Edit ----
**Here is an example of what is in my existing user file....**
```
allyson.knanishu
amy.curtiss
amy.hunter
amy.schelker
andrea.vallejo
angel.bender
angie.loebach
```
**Here is an example of what is in my new user file....**
```
aimee.neece,aimee,neece,aimee.neece@faculty.asdf.org,aimee neece,aimee neece,"CN=aimee neece,OU=Imported,dc=Faculty,dc=asdf,dc=org"
alexis.andrews,alexis,andrews,alexis.andrews@faculty.asdf.org,alexis andrews,alexis andrews,"CN=alexis andrews,OU=Imported,dc=Faculty,dc=asdf,dc=org"
alice.lee,alice,lee,alice.lee@faculty.asdf.org,alice lee,alice lee,"CN=alice lee,OU=Imported,dc=Faculty,dc=asdf,dc=org"
allyson.knanishu,allyson,knanishu,allyson.knanishu@faculty.asdf.org,allyson knanishu,allyson knanishu,"CN=allyson knanishu,OU=Imported,dc=Faculty,dc=asdf,dc=org"
```
New code from @mikebabcock ... thanks.
```
outfile = file("result-file.txt", "w")
lines_to_check_for = [ parser(line) for line in file("existing-user.txt", "r") ]
for line in file("new-user.txt", "r"):
    if not parser(line) in lines_to_check_for:
        outfile.write(line)
```
Added an import statement for the parser... I am receiving the following error...
```
C:\temp\ad-import\test-files>python new-script.py
Traceback (most recent call last):
File "new-script.py", line 7, in <module>
lines_to_check_for = [ parser(line) for line in file("existing-user.txt", "r
") ]
TypeError: 'module' object is not callable
```
Thanks!
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12902178",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1747993/"
] |
assuming I understand what you want to do .... use set intersection :)
```
for line in newlines:
    if set(line.split()) & set(xlines):  # set intersection
        print "overlap between xlines and current line"
        break
    else:
        fileresult.write(line)
```
|
I presume this is what you want to do:
```
outfile = file("outfile.txt", "w")
lines_to_check_for = [ line for line in file("list.txt", "r") ]
for line in file("testing.txt", "r"):
    if not line in lines_to_check_for:
        outfile.write(line)
```
This will read all the lines in `list.txt` into an array, and then check each line of `testing.txt` against that array. All the lines that aren't in that array will be written out to `outfile.txt`.
Here's an updated example based on your new question:
```
newusers = file("newusers.txt", "w")
existing_users = [ line for line in file("users.txt", "r") ]
for line in file("testing.txt", "r"):
    # grab each comma-separated user, take the portion left of the @, if any
    new_users = [ entry.split("@")[0] for entry in line.split(",") ]
    for user in new_users:
        if not user in existing_users:
            newusers.write(user)
```
| 13,060
|
11,452,887
|
Based on this example of a line from a file
```
1:alpha:beta
```
I'm trying to get Python to read the file in and then, line by line, print what's after the 2nd `':'`
```
import fileinput

# input file
x = fileinput.input('ids.txt')
strip_char = ":"
for line in x:
    strip_char.join(line.split(strip_char)[2:])
```
This produces no results, however from a console session on a single line it works fine
```
Python 2.7.3rc2 (default, Apr 22 2012, 22:35:38)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
data = '1:alpha:beta'
strip_char = ":"
strip_char.join(data.split(strip_char)[2:])
'beta'
```
What am i doing wrong please? Thanks
|
2012/07/12
|
[
"https://Stackoverflow.com/questions/11452887",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
For the data format given this will work:
```
with open('data.txt') as inf:
    for line in inf:
        line = line.strip()
        line = line.split(':')
        print ':'.join(line[2:])
```
For `'1:alpha:beta'` the output would be `'beta'`
For `'1:alpha:beta:gamma'` the output would be `'beta:gamma'` (Thanks for @JAB pointing this out)
|
Values returned by functions aren't automatically sent to stdout in non-interactive mode; you have to print them explicitly.
So, for Python 2, use `print line.split(strip_char, 2)[2]`. If you ever use Python 3, it'll be `print(line.split(strip_char, 2)[2])`.
(Props to Jon Clements, I forgot you could limit how many times a string will be split.)
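The split limit mentioned above (the `maxsplit` argument) caps how many splits are performed, which keeps any further colons in the tail intact. A quick sketch:

```python
data = '1:alpha:beta:gamma'

# At most 2 splits: everything after the second ':' stays in one piece
print(data.split(':', 2))     # ['1', 'alpha', 'beta:gamma']
print(data.split(':', 2)[2])  # beta:gamma
```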
| 13,062
|
48,753,297
|
I am trying to evaluate istio and trying to deploy the bookinfo example app provided with the istio installation. While doing that, I am facing the following issue.
```
Environment: Non production
1. Server Node - red hat enterprise linux 7 64 bit VM [3.10.0-693.11.6.el7.x86_64]
Server in customer secure vpn with no access to enterprise/public DNS.
2. docker client and server: 1.12.6
3. kubernetes client - 1.9.1, server - 1.8.4
4. kubernetes install method: kubeadm.
5. kubernetes deployment mode: single node with master and slave.
6. Istio install method:
- istio version: 0.5.0
- no SSL, no automatic side car injection, no Helm.
- Instructions followed: https://istio.io/docs/setup/kubernetes/quick-start.html
- Cloned the istio github project - https://github.com/istio/istio.
- Used the istio.yaml and bookinfo.yaml files for the installation and example implementation.
```
Issue:
The installation of istio client and control plane components went through fine.
The control plane also starts up fine.
However, when I launch the bookinfo app, the app's proxy init containers crash with a cryptic "iptables: Chain already exists" log message.
```
ISTIO CONTROL PLANE
--------------------
$ kubectl get deployments,pods,svc,ep -n istio-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/istio-ca 1 1 1 1 2d
deploy/istio-ingress 1 1 1 1 2d
deploy/istio-mixer 1 1 1 1 2d
deploy/istio-pilot 1 1 1 1 2d
NAME READY STATUS RESTARTS AGE
po/istio-ca-5796758d78-md7fl 1/1 Running 0 2d
po/istio-ingress-f7ff9dcfd-fl85s 1/1 Running 0 2d
po/istio-mixer-69f48ddb6c-d4ww2 3/3 Running 0 2d
po/istio-pilot-69cc4dd5cb-fglsg 2/2 Running 0 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/istio-ingress LoadBalancer 10.103.67.68 <pending> 80:31445/TCP,443:30412/TCP 2d
svc/istio-mixer ClusterIP 10.101.47.150 <none> 9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP 2d
svc/istio-pilot ClusterIP 10.110.58.219 <none> 15003/TCP,8080/TCP,9093/TCP,443/TCP 2d
NAME ENDPOINTS AGE
ep/istio-ingress 10.244.0.22:443,10.244.0.22:80 2d
ep/istio-mixer 10.244.0.20:9125,10.244.0.20:9094,10.244.0.20:15004 + 4 more... 2d
ep/istio-pilot 10.244.0.21:443,10.244.0.21:15003,10.244.0.21:8080 + 1 more... 2d
BOOKINFO APP
-------------
$ kubectl get deployments,pods,svc,ep
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/details-v1 1 1 1 1 2d
deploy/productpage-v1 1 1 1 1 2d
deploy/ratings-v1 1 1 1 1 2d
deploy/reviews-v1 1 1 1 1 2d
deploy/reviews-v2 1 1 1 1 2d
deploy/reviews-v3 1 1 1 1 2d
NAME READY STATUS RESTARTS AGE
po/details-v1-df5d6ff55-92jrx 0/2 Init:CrashLoopBackOff 738 2d
po/productpage-v1-85f65888f5-xdkt6 0/2 Init:CrashLoopBackOff 738 2d
po/ratings-v1-668b7f9ddc-9nhcw 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v1-5845b57d57-2cjvn 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v2-678b446795-hkkvv 0/2 Init:CrashLoopBackOff 738 2d
po/reviews-v3-8b796f-64lm8 0/2 Init:CrashLoopBackOff 738 2d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/details ClusterIP 10.104.237.100 <none> 9080/TCP 2d
svc/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 70d
svc/productpage ClusterIP 10.100.136.14 <none> 9080/TCP 2d
svc/ratings ClusterIP 10.105.166.190 <none> 9080/TCP 2d
svc/reviews ClusterIP 10.110.221.19 <none> 9080/TCP 2d
NAME ENDPOINTS AGE
ep/details 10.244.0.24:9080 2d
ep/kubernetes NNN.NN.NN.NNN:6443 70d
ep/productpage 10.244.0.45:9080 2d
ep/ratings 10.244.0.25:9080 2d
ep/reviews 10.244.0.26:9080,10.244.0.28:9080,10.244.0.29:9080 2d
PROXY INIT CRASHED CONTAINERS
------------------------------
$ docker ps -a | grep -i istio | grep -i exit
9109bafcf9e7 docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local
/bin/prepa" 11 seconds ago Exited (1) 10 seconds ago k8s_istio-init_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568
e45b4_740
0ed3b188d7ba docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local
/bin/prepa" 27 seconds ago Exited (1) 26 seconds ago k8s_istio-init_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-005056
8e45b4_740
893fcec0b01e docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local
/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-005056
8e45b4_740
a2a036273402 docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local
/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-00
50568e45b4_740
520beb6779e0 docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local
/bin/prepa" About a minute ago Exited (1) About a minute ago k8s_istio-init_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45
b4_740
91a0f41f5fde docker.io/istio/proxy_init@sha256:0962ff2159796a66b9d243cac82cfccb6730cd5149c91a0f64baa08f065b22f8 "/usr/local
/bin/prepa" 3 minutes ago Exited (1) 3 minutes ago k8s_istio-init_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-005056
8e45b4_740
PROXY PROCESSES FOR EACH ISTIO COMPONENT
-----------------------------------------
$ docker ps | grep -vi exit | grep proxy
4d9b37839e44 docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_0
1c72e3a990cb docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_0
f6ffcaf4b24b docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_0
b66b7ab90a2d docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_0
08bf2370b5be docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_0
0c10d8d594bc docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_0
6134fa756f35 docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_istio-pilot-69cc4dd5cb-fglsg_istio-system_5ecf54b6-0dcd-11e8-8de9-0050568e45b4_0
9a18ea74b6bf docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-proxy_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
5db18d722bb1 docker.io/istio/proxy@sha256:3a9fc8a72faec478a7eca222bbb2ceec688514c95cf06ac12ab6235958c6883c "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_istio-ingress_istio-ingress-f7ff9dcfd-fl85s_istio-system_5ed6333d-0dcd-11e8-8de9-0050568e45b4_0
$ docker ps | egrep -iv "proxy|pause|kube-|etcd|defaultbackend|ingress"
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Docker Containers for the apps (These seem to have started up without issues)
------------------------------
61951f88b83c docker.io/istio/examples-bookinfo-reviews-v2@sha256:e390023aa6180827373293747f1bff8846ffdf19fdcd46ad91549d3277dfd4ea "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v2-678b446795-hkkvv_default_b557b5a5-0dcd-11e8-8de9-0050568e45b4_0
18d2137257c0 docker.io/istio/examples-bookinfo-productpage-v1@sha256:ce983ff8f7563e582a8ff1adaf4c08c66a44db331208e4cfe264ae9ada0c5a48 "/bin/sh -c 'python p" 2 days ago Up 2 days k8s_productpage_productpage-v1-85f65888f5-xdkt6_default_b579277b-0dcd-11e8-8de9-0050568e45b4_0
5ba97591e5c7 docker.io/istio/examples-bookinfo-reviews-v1@sha256:aac2cfc27fad662f7a4473ea549d8980eb00cd72e590749fe4186caf5abc6706 "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v1-5845b57d57-2cjvn_default_b555bb75-0dcd-11e8-8de9-0050568e45b4_0
ed11b00eff22 docker.io/istio/examples-bookinfo-reviews-v3@sha256:6829a5dfa14d10fa359708cf6c11ec9022a3d047a089e73dea3f3bfa41f7ed66 "/bin/sh -c '/opt/ibm" 2 days ago Up 2 days k8s_reviews_reviews-v3-8b796f-64lm8_default_b559d9ef-0dcd-11e8-8de9-0050568e45b4_0
be88278186c2 docker.io/istio/examples-bookinfo-ratings-v1@sha256:b14905701620fc7217c12330771cd426677bc5314661acd1b2c2aeedc5378206 "/bin/sh -c 'node rat" 2 days ago Up 2 days k8s_ratings_ratings-v1-668b7f9ddc-9nhcw_default_b55128a5-0dcd-11e8-8de9-0050568e45b4_0
e1c749eedf3c docker.io/istio/examples-bookinfo-details-v1@sha256:02c863b54d676489c7e006948e254439c63f299290d664e5c0eaf2209ee7865e "/bin/sh -c 'ruby det" 2 days ago Up 2 days k8s_details_details-v1-df5d6ff55-92jrx_default_b54d921c-0dcd-11e8-8de9-0050568e45b4_0
Docker Containers for Control Plane components
-----------------------------------------------
(CA: no ssl setup done)
5847934ca3c6 docker.io/istio/istio-ca@sha256:b3aaa5e5df2c16b13ea641d9f6b21f1fa3fb01b2f36a6df5928f17815aa63307 "/usr/local/bin/istio" 2 days ago Up 2 days k8s_istio-ca_istio-ca-5796758d78-md7fl_istio-system_5ed9a9e4-0dcd-11e8-8de9-0050568e45b4_0
(PILOT:
[1] W0209 19:13:58.364556 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.)
[2] warn AvailabilityZone couldn't find the given cluster node
[3] warn AvailabilityZone unexpected service-node: invalid node type (valid types: ingress, sidecar, router in the service node "mixer~~.~.svc.cluster.local"
[4] warn AvailabilityZone couldn't find the given cluster node
pattern 2, 3, 4 repeats)
f7a7816bd147 docker.io/istio/pilot@sha256:96c2174f30d084e0ed950ea4b9332853f6cd0ace904e731e7086822af726fa2b "/usr/local/bin/pilot" 2 days ago Up 2 days k8s_discovery_istio-pilot-69cc4dd5cb-fglsg_istio-system_5ecf54b6-0dcd-11e8-8de9-0050568e45b4_0
(MIXER: W0209 19:13:57.948480 1 client_config.go:529] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.)
f4c85eb7f652 docker.io/istio/mixer@sha256:a2d5f14fd55198239817b6c1dac85651ac3e124c241feab795d72d2ffa004bda "/usr/local/bin/mixs " 2 days ago Up 2 days k8s_mixer_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
(STATD EXPORTER: No issues/errors)
9fa2865b7e9b docker.io/prom/statsd-exporter@sha256:d08dd0db8eaaf716089d6914ed0236a794d140f4a0fe1fd165cda3e673d1ed4c "/bin/statsd_exporter" 2 days ago Up 2 days k8s_statsd-to-prometheus_istio-mixer-69f48ddb6c-d4ww2_istio-system_5e8801ab-0dcd-11e8-8de9-0050568e45b4_0
```
|
2018/02/12
|
[
"https://Stackoverflow.com/questions/48753297",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3675410/"
] |
For your example string, you could remove the `^` like:
[`(?<=FN=).+?(?=&.+?$)`](https://regex101.com/r/33BiQU/1)
`^` means assert the position at start of the string.
For this example, you could also write this as:
[`(?<=FN=)[^&]+`](https://regex101.com/r/uIORIF/1)
**Explanation**
* `(?<=` Positive lookbehind that asserts what is before
+ `FN=` Match `FN=` literally
* `)` Close lookbehind
* `[^&]+` Negated character set that matches not an ampersand one or more times
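In Python, the second pattern can be applied with `re.search`; the URL below is an illustrative example of the same shape discussed here:

```python
import re

# Example URL of the shape discussed in this answer (illustrative).
url = ("https://portal-gamma.myColgate.com/sites/ENG/Pages/r.aspx?"
       "RT=Modify Support&Element=Business Direct&SE=Chain Supply&"
       "FN=Freight Forwarder Standard Operating Procedure - United Kingdom.pdf&DocID=400")

# The lookbehind asserts 'FN=' precedes; [^&]+ consumes up to the next '&'.
m = re.search(r"(?<=FN=)[^&]+", url)
print(m.group())  # Freight Forwarder Standard Operating Procedure - United Kingdom.pdf
```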
|
Instead of a regex, you could use the "proper" way of parsing the URI and the query string:
```
Imports System.Web
Module Module1
Sub Main()
Dim u = New Uri("https://portal-gamma.myColgate.com/sites/ENG/Pages/r.aspx?RT=Modify Support&Element=Business Direct&SE=Chain Supply&FN=Freight Forwarder Standard Operating Procedure - United Kingdom.pdf&DocID=400")
Dim parts = HttpUtility.ParseQueryString(u.Query)
Console.WriteLine(parts("FN"))
Console.ReadLine()
End Sub
End Module
```
Outputs:
>
> Freight Forwarder Standard Operating Procedure - United Kingdom.pdf
>
>
>
You will need to add a reference in the project to System.Web if there isn't one already.
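The same "proper" query-string parse can be sketched in Python using the standard library's `urllib.parse` (the URL is the example from the code above):

```python
from urllib.parse import urlparse, parse_qs

# Same example URL as in the VB.NET snippet above.
url = ("https://portal-gamma.myColgate.com/sites/ENG/Pages/r.aspx?"
       "RT=Modify Support&Element=Business Direct&SE=Chain Supply&"
       "FN=Freight Forwarder Standard Operating Procedure - United Kingdom.pdf&DocID=400")

# parse_qs maps each key to a list of values.
params = parse_qs(urlparse(url).query)
print(params["FN"][0])  # Freight Forwarder Standard Operating Procedure - United Kingdom.pdf
```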
| 13,067
|
63,461,566
|
Currently using Selenium in Python; I was trying to loop over elements after locating them by the "img" tag across the whole webpage. I am trying to save all the URLs and image names to my two arrays.
```
imgurl = []
imgname = []
allimgtags = browser.find_element_by_tag_name("img")
for a in len(allimgtags):
imgurl.append(wholeimgtags.get_attribute("src"))
imgname.append(wholeimgtags.get_attribute("alt"))
```
But I am getting this error in the terminal. How do I save the URLs and names to my two arrays?
```
Traceback (most recent call last):
File "scrpy_selenium.py", line 31, in <module>
for a in len(wholeimgtags):
TypeError: object of type 'WebElement' has no len()
```
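A hedged sketch of the likely fix: `find_element_by_tag_name` returns a single `WebElement` (which has no `len()`), while the plural `find_elements_by_tag_name` returns a list that can be iterated directly. A minimal stub stands in for Selenium here so the loop itself runs on its own; with a real driver you would use `browser.find_elements_by_tag_name("img")`:

```python
# FakeElement is a stand-in for Selenium's WebElement (illustration only).
class FakeElement:
    def __init__(self, src, alt):
        self._attrs = {"src": src, "alt": alt}

    def get_attribute(self, name):
        return self._attrs.get(name)

# With a real driver: allimgtags = browser.find_elements_by_tag_name("img")
allimgtags = [FakeElement("a.png", "first"), FakeElement("b.png", "second")]

imgurl = []
imgname = []
for img in allimgtags:  # iterate the list itself, not len(...)
    imgurl.append(img.get_attribute("src"))
    imgname.append(img.get_attribute("alt"))

print(imgurl)   # ['a.png', 'b.png']
print(imgname)  # ['first', 'second']
```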
|
2020/08/18
|
[
"https://Stackoverflow.com/questions/63461566",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14082479/"
] |
You can refer to the sample code below:
```
class SampleAdapter(private var list: List<String>,
private val viewmodel: SampleViewModel,
private val lifecycleOwner: LifecycleOwner) : RecyclerView.Adapter<SampleAdapter.ViewHolder>() {
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {
val binding: ItemRowBinding = DataBindingUtil.inflate(
LayoutInflater.from(parent.context),
R.layout.item_row, parent, false)
val holder= ViewHolder(binding, lifecycleOwner)
binding.lifecycleOwner=holder
holder.lifecycleCreate()
return holder
}
override fun onBindViewHolder(holder: ViewHolder, position: Int) {
holder.bind()
}
override fun getItemCount(): Int {
return list.size
}
override fun onViewAttachedToWindow(holder: ViewHolder) {
super.onViewAttachedToWindow(holder)
holder.attachToWindow()
}
inner class ViewHolder(private val binding: ItemRowBinding,
private var lifecycleOwner: LifecycleOwner)
: RecyclerView.ViewHolder(binding.root),LifecycleOwner {
private val lifecycleRegistry = LifecycleRegistry(this)
private var paused: Boolean = false
init {
lifecycleRegistry.currentState = Lifecycle.State.INITIALIZED
}
fun lifecycleCreate() {
lifecycleRegistry.currentState = Lifecycle.State.CREATED
}
override fun getLifecycle(): Lifecycle {
return lifecycleRegistry
}
fun bind() {
lifecycleOwner = this@SampleAdapter.lifecycleOwner
binding.viewmodel = viewmodel
binding.executePendingBindings()
}
fun attachToWindow() {
if (paused) {
lifecycleRegistry.currentState = Lifecycle.State.RESUMED
paused = false
} else {
lifecycleRegistry.currentState = Lifecycle.State.STARTED
}
}
}
fun setList(list: List<String>) {
this.list = list
notifyDataSetChanged()
} }
```
The reason we need to pass a lifecycle owner is that
a ViewHolder is not a lifecycle owner by itself, which is why it cannot observe LiveData.
The entire idea is to make the ViewHolder a lifecycle owner by implementing LifecycleOwner, and after that start its lifecycle.
|
If I understand correctly from [this page](https://medium.com/@stephen.brewer/an-adventure-with-recyclerview-databinding-livedata-and-room-beaae4fc8116), it is best not to pass the `lifeCycleOwner` to a `RecyclerView.Adapter` binding item, since:
>
> When a ViewHolder has been
> detached, meaning it is not currently visible on the screen,
> parentLifecycleOwner is still in the resumed state, so the
> ViewDataBindings are still active and observing the data. This means
> that when a LiveData instance is updated, it triggers the View to be
> updated, but the View is not currently being displayed! Not ideal.
>
>
>
Stephen Brewer seems to suggest some solutions, but I did not test them.
| 13,068
|
32,177,869
|
I have several pods, for example a Python web app and a Redis instance (shared by other apps), so I need to place Redis in a separate pod. But they all use the same subnet from Docker (172.17.0.0/16), or even the same IP address. How can the app pods talk with the Redis pod?
Maybe what I want to ask is: what's the best way to set up multi-host container networking?
---
7 weeks later, I am more familiar with Kubernetes. I know Kubernetes' network model assumes that pods can access each other, so if your app service pod needs to access a shared service (Redis) pod, you need to expose the shared service as a Kubernetes service; then you can get the shared service's endpoint from the app pods' environment variables or via its hostname.
|
2015/08/24
|
[
"https://Stackoverflow.com/questions/32177869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1853876/"
] |
How did you set up Kubernetes? I'm not aware of any installation scripts that put pod IPs into a 172 subnet.
But in general, assuming Kubernetes has been set up properly (ideally using one of the provided scripts), using a [service object](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/services.md) to load balance across your 1 or more redis pods would be the standard approach.
|
I realize maybe the question was a bit vague:
if what you want is for your app to talk to the redis service, then set up a service for redis (with a name of 'redis', for example) and the redis service will be accessible simply by calling it by its hostname 'redis'.
Check the guestbook example that sets up a Redis master + replica slaves and connects the PHP app(s) that way:
<https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook>
| 13,077
|
69,602,778
|
I have installed the library in Base conda environment (the only one I have):
```
(base) C:\Users\44444>conda install graphviz
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
```
Set the path in System -> Environment Variables to both locations (from `where conda` and `where python`):
```
PATHs:
C:\Users\44444\anaconda3\Scripts\
C:\Users\44444\anaconda3\
```
and when I try to import the library I get the error:
```
ModuleNotFoundError: No module named 'graphviz'
```
Am I missing something? How can I import the library?
|
2021/10/17
|
[
"https://Stackoverflow.com/questions/69602778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16339433/"
] |
You have only installed the `graphviz` software, not the Python interface to it. For that you will need [this package](https://anaconda.org/conda-forge/python-graphviz), which you can install with
```
conda install -c conda-forge python-graphviz
```
|
Don't add conda's folders to the system PATH manually. What you need to do to work with Anaconda is to activate the environment via
```
conda activate
```
This will add all the following folders temporarily to the PATH:
```
C:\Users\44444\anaconda3
C:\Users\44444\anaconda3\Library\mingw-w64\bin
C:\Users\44444\anaconda3\Library\usr\bin
C:\Users\44444\anaconda3\Library\bin
C:\Users\44444\anaconda3\Scripts
C:\Users\44444\anaconda3\bin
```
| 13,082
|
52,089,002
|
I was trying to install a plugin for tmux called powerline. I was installing some things with brew, like PyPy and Python.
Now when I try to open a file in vim I get:
```
dyld: Library not loaded: /usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/Python
Referenced from: /usr/local/bin/vim
Reason: image not found
Abort trap: 6
```
and when I try to open tmux i get:
```
exited
```
|
2018/08/30
|
[
"https://Stackoverflow.com/questions/52089002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9470570/"
] |
`=AVERAGE.WEIGHTED(FILTER(A:A,ISNUMBER(A:A)),FILTER(B:B,ISNUMBER(A:A)))`
`=SUM(FILTER(A:A*B:B / sum(FILTER(B:B,ISNUMBER(A:A))),ISNUMBER(A:A)))`
`=SUM(FILTER(A:A*B:B / (sum(B:B) - SUMIF(A:A,"><", B:B)),ISNUMBER(A:A)))`
In case both columns contain strings, add one more condition to each formula:
`=AVERAGE.WEIGHTED(FILTER(A:A,ISNUMBER(A:A),ISNUMBER(B:B)),FILTER(B:B,ISNUMBER(A:A), ISNUMBER(B:B)))`
|
So let us suppose our numbers (we hope) sit in A2:A4 and the weights are in B2:B4. In C2, place
```
=if(and(ISNUMBER(A2),isnumber(B2)),A2,"")
```
and drag that down to C4 to keep only actual numbers for which we have weights.
Similarly in D2 (and drag to D4), use
```
=if(and(ISNUMBER(A2),isnumber(B2)),B2,"")
```
and in E2 (and drag to E4) we need
```
=if(and(ISNUMBER(C2),isnumber(D2)),C2*D2,"")
```
then our weighted average will be:
```
=iferror(sum(E2:E4)/sum(D2:D4))
```
you can hide the working columns or place them in the middle of nowhere if you wish.
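The same guard-against-non-numbers idea can be sketched in plain Python (illustration only; `weighted_average` is a hypothetical helper, not a Sheets function):

```python
# Weighted average over rows where both the value and the weight are
# actual numbers; non-numeric entries are skipped, as in the formulas above.
def weighted_average(values, weights):
    pairs = [(v, w) for v, w in zip(values, weights)
             if isinstance(v, (int, float)) and isinstance(w, (int, float))]
    total_weight = sum(w for _, w in pairs)
    if total_weight == 0:
        return None
    return sum(v * w for v, w in pairs) / total_weight

print(weighted_average([10, "n/a", 20], [1, 5, 3]))  # 17.5
```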
| 13,083
|
18,560,993
|
This may be a well-known question stored in some FAQ, but I can't google the solution. I'm trying to write a scalar function of a scalar argument, but allowing for an ndarray argument. The function should check its argument for domain correctness, because a domain violation may cause an exception. This example demonstrates what I tried to do:
```
import numpy as np
def f(x):
x = np.asarray(x)
y = np.zeros_like(x)
y[x>0.0] = 1.0/x
return y
print f(1.0)
```
On assigning `y[x>0.0]=...` Python says `0-d arrays can't be indexed`.
What's the right way to handle this situation?
|
2013/09/01
|
[
"https://Stackoverflow.com/questions/18560993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1038377/"
] |
This will work fine in NumPy >= 1.9 (not released as of writing this). On previous versions you can work around by an extra `np.asarray` call:
```
x[np.asarray(x > 0)] = 0
```
|
Could you call `f([1.0])` instead?
Otherwise you can do:
```
x = np.asarray(x)
if x.ndim == 0:
x = x[..., None]
```
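A sketch combining both suggestions: `np.atleast_1d` makes a 0-d scalar input indexable, and a boolean mask avoids dividing at non-positive entries (this assumes NumPy is available):

```python
import numpy as np

def f(x):
    # atleast_1d turns a 0-d scalar input into a 1-element array,
    # so boolean-mask assignment works for scalars and arrays alike.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    y = np.zeros_like(x)
    mask = x > 0.0
    y[mask] = 1.0 / x[mask]
    return y

print(f(1.0))
print(f([2.0, -1.0]))
```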
| 13,084
|
32,395,635
|
I'm very new to odoo and python and was wondering if I could get some help getting my module to load. I've been following the odoo 8 documentation very closely and can't get anything to appear in the local modules part. (Yes, I have clicked refresh/update module list).
I have also made sure that I put the correct path in my odoo-server.conf file and ensured there are no clashes.
Below is the code:
```
Models.py
'''
Created on 4 Sep 2015
@author:
'''
# -*- coding: utf-8 -*-
from openerp import models, fields, api
# class test(model.Model):
# _name = 'test.test'
# name = fields.Char()
__init__.py
from . import controllers
from . import models
__openerp__.py file
{
'name': "models",
'version': '1.0',
'depends': ['base'],
'author': "Elliot",
'category': 'Category',
'description': """
My first working module.
""",
'installable': True,
'auto_install': False,
'data': [
'templates.xml',
],
'xml': [
'xml.xml'
],
}
controllers.py
from openerp import http
# class test_mod(http.Controller):
# @http.route('/test_mod/model/', auth='public')
# def index(self, **kw):
# return "Hello, world"
# @http.route('/test_mod/model/objects/', auth='public')
# def list(self, **kw):
# return http.request.render('test_mod.listing', {
# 'root': '/Test_mod/Test_mod',
# 'objects': http.request.env['test_mod.model'].search([]),
# })
# @http.route('/test_mod/model/objects/<model("test_mod.model"):obj>/', auth= 'public')
# def object(self, obj, **kw):
# return http.request.render('test_mod.object', {
# 'object': obj
# })
and templates.xml
<openerp>
<data>
<!-- <template id="listing"> -->
<!-- <ul> -->
<!-- <li t-foreach="objects" t-as="object"> -->
<!-- <a t-attf-href="{{ root }}/objects/{{ object.id }}"> -->
<!-- <t t-esc="object.display_name"/> -->
<!-- </a> -->
<!-- </li> -->
<!-- </ul> -->
<!-- </template> -->
<!-- <template id="object"> -->
<!-- <h1><t t-esc="object.display_name"/></h1> -->
<!-- <dl> -->
<!-- <t t-foreach="object._fields" t-as="field"> -->
<!-- <dt><t t-esc="field"/></dt> -->
<!-- <dd><t t-esc="object[field]"/></dd> -->
<!-- </t> -->
<!-- </dl> -->
<!-- </template> -->
</data>
</openerp>
```
|
2015/09/04
|
[
"https://Stackoverflow.com/questions/32395635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5300262/"
] |
I think you might have missed including the addons directory which contains the custom module.
It can be accomplished via two methods.
1. You can add to the addons\_path directive in openerp-server.conf (separate paths with a comma)
```
eg: addons_path = /opt/openerp/server/openerp/addons,custom_path_here
```
2. You can use
```
--addons='addon_path',
```
if starting your server from the command line.
|
You need to restart your service (odoo-service).
| 13,085
|
58,506,732
|
**Objective**: to extract the first email from an email thread
**Description**: Based on manual inspection of the emails, I realized that the next email in the email thread always starts with a set of From, Sent, To and Subject
**Test Input**:
```
Hello World from: the other side of the first email
from: this
sent: at
to: that
subject: what
second email
from: this
sent: at
to: that
subject: what
third email
from: this
date: at
to: that
subject: what
fourth email
```
**Expected output**:
```
Hello World from: the other side of the first email
```
**Failed Attempts**:
Following breaks when there's a `from:` in the first email
`(.*)((from:[\s\S]+?)(sent:[\s\S]+?)(to:[\s\S]+?)(subject:[\s\S]+))`
Following fails when there are repeated groups of From, Sent, To and Subject
`([\s\S]+)((from:(?:(?!from:)[\s\S])+?sent:(?:(?!sent:)[\s\S])+?to:(?:(?!to:)[\s\S])+?subject:(?:(?!subject:)[\s\S])+))`
The second attempt [works with PCRE(PHP)](https://regex101.com/r/dw05sx/5) when an ungreedy option (flag) is selected. However, this option is not available in python and I couldn't figure out a way to make it work.
[Regex101 demo](https://regex101.com/r/dw05sx/4)
|
2019/10/22
|
[
"https://Stackoverflow.com/questions/58506732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4457330/"
] |
To only get the first match, you could use a capturing group and match exactly what should follow.
```
^(.*)\r?\n\s*\r?\nfrom:.*\r?\nsent:.*\r?\nto:.*\r?\nsubject:
```
* `^` Start of string
* `(.*)` Match any char except a newline 0+ times
* `\r?\n\s*` Match a newline followed by 0+ times a whitespace char using `\s*`
* `\r?\nfrom:.*` Match the next line starting with `from:`
* `\r?\nsent:.*` Match the next line starting with `sent:`
* `\r?\nto:.*` Match the next line starting with `to:`
* `\r?\nsubject:.*` Match the next line starting with `subject:`
Note that in the demo link the global flag `g` at the right top is not enabled.
[Regex demo](https://regex101.com/r/v09eBp/1) | [Python demo](https://ideone.com/iEdbqO)
If the first line can span multiple lines, and if it is acceptable not to cross any of the lines that start with `from:`, `sent:`, `to:` or `subject:`, you could also use a negative lookahead.
```
^(.*(?:\r?\n(?!(?:from|sent|to|subject):).*)*)\r?\n\s*\r?\nfrom:.*\r?\nsent:.*\r?\nto:.*\r?\nsubject:
```
[Regex demo](https://regex101.com/r/p33Sde/1)
If there are spaces between `from`, `sent`, `to` and `subject`, 0+ (`*`) whitespace characters can be matched:
```
^(.*(?:\r?\n(?!(?:from|sent|to|subject):).*)*)\r?\s*\r?\sfrom:.*\r?\s*sent:.*\r?\s*to:.*\r?\s*subject:
```
[Regex demo](https://regex101.com/r/p33Sde/2)
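Applied in Python, the first pattern can be sketched as follows (sample text abridged from the question, `\n` line endings assumed):

```python
import re

# Abridged sample thread from the question.
text = (
    "Hello World from: the other side of the first email\n"
    "\n"
    "from: this\n"
    "sent: at\n"
    "to: that\n"
    "subject: what\n"
    "\n"
    "second email\n"
)

# Capture the first line, then require the from/sent/to/subject block.
pattern = re.compile(r"^(.*)\r?\n\s*\r?\nfrom:.*\r?\nsent:.*\r?\nto:.*\r?\nsubject:")
m = pattern.search(text)
first_email = m.group(1) if m else None
print(first_email)  # Hello World from: the other side of the first email
```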
|
Maybe I am misunderstanding, but why don't you just do this:
```
re.compile(r"^.*from:\s(\w+@\w+\.\w+)")
```
This will find the first string in email form (group 1) after the first "from: " at the beginning of the string.
| 13,086
|
34,998,210
|
How do I pip install the pattern package in Python 3.5?
While in CMD:
```
pip install pattern
syntaxerror: missing parentheses in call to 'print'
```
Shows error:
```
Command "python setup.py egg_info" failed with error
code 1 in temp\pip-build-3uegov4d\pattern
```
`seaborn` and `tweepy` both installed successfully.
How can I solve this problem?
|
2016/01/25
|
[
"https://Stackoverflow.com/questions/34998210",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5818150/"
] |
The short answer at the moment is - you can't. They haven't quite finished the port to python3 yet.
There is alleged compatibility in the development branch but the recommended manual setup didn't work for me (in virtualenv) - it fails in a different way.
<https://github.com/clips/pattern/tree/development>
The porting issue thread is here, spanning 2013 to yesterday:
<https://github.com/clips/pattern/issues/62>
A contributing port repo is here, but it's not finished yet (the readme says there is no Python 3 support).
<https://github.com/pattern3/pattern>
So you could try `pip install pattern3` which does install it - but it has a different package name, so you would have to modify any references to it. For me this is "impossible" as it's required by other 3rd party packages like GenSim.
UPDATE
I did get it working in the end in Python3 with Gensim, by manually installing it from the development branch as suggested and fixing up a few problems both during install and execution. (I removed the mysql-client dependency as the installer isn't working on Mac. I manually downloaded the certs for the NTLK wordnet corpus, to fix an SSL error in the setup. I also fixed a couple of scripts which were erroring, e.g. an empty 'try' clause in tree.py). It has a huge set of dependencies!
After reading more on the port activity, it seems they're almost finished and should release in a few months (perhaps early 2018). Furthermore the pattern3 repository is more of a "friend" than the official Python3 fork. They have already pulled the changes from this contributor into the main repo and they are preparing to get it released.
Therefore it should become available on `pip` in the main `pattern` package (not the pattern3 one which I presume will be removed), and there should be no package name-change problems.
|
Additionally, I was facing :
```
"BadZipFile: File is not a zip file" error while importing from pattern.
```
This is because `sentiwordnet` is out of date in nltk. So comment it out in:
```
C:\Anaconda3\Lib\site-packages\Pattern-2.6-py3.5.egg\pattern\text\en\wordnet\_init.py
```
```
# Make sure the necessary corpora are downloaded to the local drive
for token in ("wordnet", "wordnet_ic"):  # "sentiwordnet" removed
    try:
        nltk.data.find("corpora/" + token)
```
| 13,089
|
8,891,099
|
I seem to have stumbled across a quirk in Django custom model fields. I have the following custom modelfield:
```
class PriceField(models.DecimalField):
__metaclass__ = models.SubfieldBase
def to_python(self, value):
try:
return Price(super(PriceField, self).to_python(value))
except (TypeError, AttributeError):
return None
def get_db_prep_value(self, value):
return super(PriceField, self).get_db_prep_value(value.d if value else value)
def value_to_string(self, instance):
return 'blah'
```
Which should always return the custom Price class in python. This works as expected:
```
>>> PricePoint.objects.all()[0].price
Price('1.00')
```
However when retrieving prices in the form of a values\_list I get decimals back:
```
>>> PricePoint.objects.all().values_list('price')
[(Decimal('1'),)]
```
Then if I change the DB type to foat and try again I get a float:
```
>>> PricePoint.objects.all().values_list('price')
[(1.0,)]
>>> type(PricePoint.objects.all().values_list('price')[0][0])
<type 'float'>
```
This makes me think that values\_list does not rely on to\_python at all and instead just returns the type as defined in the database. Is that correct? Is there any way to return a custom type through values\_list?
|
2012/01/17
|
[
"https://Stackoverflow.com/questions/8891099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/836049/"
] |
Answered my own question: This apparently is a Django bug. values\_list does not deserialize the database data.
It's being tracked here: <https://code.djangoproject.com/ticket/9619> and is pending a design decision.
|
Just to note that this was resolved in Django 1.8 with the addition of the `from_db_value` method on custom fields.
See <https://docs.djangoproject.com/en/1.8/howto/custom-model-fields/#converting-values-to-python-objects>
| 13,099
|
10,354,000
|
I am using Python 2.7 and OpenCV 2.3.1 (Win 7).
I am trying to open a video file:
```
stream = cv.VideoCapture("test1.avi")
if stream.isOpened() == False:
print "Cannot open input video!"
exit()
```
But I have warning:
```
warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl_v2.hpp:394)
```
If I use the video camera (`stream = cv.VideoCapture(0)`), this code works.
Any ideas as to what I'm doing wrong?
Thanks a lot to all!
|
2012/04/27
|
[
"https://Stackoverflow.com/questions/10354000",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1361499/"
] |
Try using `cv.CaptureFromFile()` instead.
Copy this code if you must: [Watch Video in Python with OpenCV](http://web.michaelchughes.com/how-to/watch-video-in-python-with-opencv).
|
You can use the new interface of OpenCV (cv2), the object-oriented one, which is bound from C++.
I find it easier and more readable.
Note: if you open a picture with this, the fps doesn't mean anything, so the picture stays still.
```
import cv2
import sys
try:
vidFile = cv2.VideoCapture(sys.argv[1])
except:
print "problem opening input stream"
sys.exit(1)
if not vidFile.isOpened():
print "capture stream not open"
sys.exit(1)
nFrames = int(vidFile.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT)) # one good way of namespacing legacy openCV: cv2.cv.*
print "frame number: %s" %nFrames
fps = vidFile.get(cv2.cv.CV_CAP_PROP_FPS)
print "FPS value: %s" %fps
ret, frame = vidFile.read() # read first frame, and the return code of the function.
while ret: # note that we don't have to use frame number here, we could read from a live written file.
print "yes"
cv2.imshow("frameWindow", frame)
cv2.waitKey(int(1/fps*1000)) # time to wait between frames, in mSec
ret, frame = vidFile.read() # read next frame, get next return code
```
| 13,100
|
43,044,060
|
There is this similar [question](https://stackoverflow.com/questions/22214086/python-a-program-to-find-the-length-of-the-longest-run-in-a-given-list), but not quite what I am asking.
Let's say I have a list of ones and zeroes:
```
# i.e. [1, 0, 0, 0, 1, 1, 1, 1, 0, 1]
sample = np.random.randint(0, 2, (10,)).tolist()
```
I am trying to find the index of subsequences of the same value, sorted by their length. So here, we would have the following sublists:
```
[1, 1, 1, 1]
[0, 0, 0]
[1]
[0]
[1]
```
So their indices would be `[4, 1, 0, 8, 9]`.
I can get the sorted subsequences doing this:
```
sorted([list(l) for n, l in itertools.groupby(sample)], key=lambda l: -len(l))
```
However, if I get repeated subsequences I won't be able to find the indices right away (I would have to use another loop).
I feel like there is a more straightforward and Pythonic way of doing what I'm after, just like the answer to the previous questions suggests. This is what I'm looking for.
|
2017/03/27
|
[
"https://Stackoverflow.com/questions/43044060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3120489/"
] |
You can first create tuples of indices and values with `enumerate(..)`. Next you `groupby`, but on the second element of the tuple, and finally you map each group back to the index of its first element. Like:
```
map(lambda x:x[0][0],  # obtain the index of the first element
    sorted([list(l) for _,l in itertools.groupby(enumerate(sample),  # create tuples with their indices
                                                 key=lambda x:x[1])],  # group on value, not on index
           key=lambda l: -len(l)))
```
When running (the compressed command) in the console, it produces:
```
>>> map(lambda x:x[0][0],sorted([list(l) for _,l in itertools.groupby(enumerate(sample),key=lambda x:x[1])],key=lambda l: -len(l)))
[4, 1, 0, 8, 9]
```
>
> **N.B. 1**: instead of using `lambda l: -len(l)` as `key` when you sort, you can use `reverse=True` (and `key = len`), which is more
> declarative, like:
>
>
>
> ```
> map(lambda x:x[0][0],
> sorted([list(l) for _,l in itertools.groupby(enumerate(sample),
> key=lambda x:x[1])],
>            key=len, reverse=True))
> ```
>
> **N.B. 2**: In [python-3.x](/questions/tagged/python-3.x "show questions tagged 'python-3.x'") `map` will produce an **iterator** and not a list. You can *materialize* the result by calling `list(..)` on
> the result.
>
>
>
|
You can use `groupby` and the `sorted` function with a generator function to do this efficiently.
```
from itertools import groupby
from operator import itemgetter
data = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1]
def gen(items):
    for _, elements in groupby(enumerate(items), key=itemgetter(1)):
indexes, values = zip(*elements)
yield indexes[0], values
result = sorted(list(gen(data)), key=lambda x: len(x[1]), reverse=True)
```
Printing result yields:
```
[(4, (1, 1, 1, 1)), (1, (0, 0, 0)), (0, (1,)), (8, (0,)), (9, (1,))]
```
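If only the start indices are needed (as in the expected output), a short follow-up step extracts them from pairs like these. A sketch, keying the grouping on the value so consecutive runs are detected correctly:

```python
from itertools import groupby
from operator import itemgetter

data = [1, 0, 0, 0, 1, 1, 1, 1, 0, 1]

def runs(items):
    # group consecutive equal values, keeping the index of each run's first element
    for _, elements in groupby(enumerate(items), key=itemgetter(1)):
        indexes, values = zip(*elements)
        yield indexes[0], values

result = sorted(runs(data), key=lambda p: len(p[1]), reverse=True)
indices = [start for start, _ in result]
print(indices)  # [4, 1, 0, 8, 9]
```

Since `sorted` is stable, equal-length runs keep their original left-to-right order, which matches the expected output.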
| 13,103
|
60,182,910
|
Several PHP websites are running in my droplet. Recently I tried to deploy a Django website that I built, but it doesn't work properly.
I will explain the step that I did.
1, Pointed a Domain name to my droplet.
2, Added Domain name using Plesk Add Domain Option.
3, Uploaded the Django files to httpdocs by Plesk file manager.
4, Connected the server through ssh and type `python manage.py runserver
0:8000`
5, My Django Website is successfully running.
Here is the real issue: we need to type the exact port number to view the website every time, e.g. **xyz.com:8000**.
Also, the **Django webserver** goes down after some time.
I am a newbie to Django; all my experience is in deploying PHP websites. If my procedure is wrong, please guide me to the correct procedure.
Thanks in Advance.
|
2020/02/12
|
[
"https://Stackoverflow.com/questions/60182910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9892045/"
] |
[Django Runserver is not a production one](https://vsupalov.com/django-runserver-in-production/), it should be used only for development. That's why you need to explicitly type in the port and it sometimes go down because of code reload or other triggers.
Check [Gunicorn](https://gunicorn.org/) for example, as a production server for Django applications. (there are other options also)
|
I suggest following this [guide](https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04). Covers initial setup of django with gunicorn and nginx which is essential for deployment, you don't have to add the port to access the site. It doesn't cover how to add the domain but it seems you already know how to add the domain though.
| 13,104
|
3,631,510
|
how to do a zoom in/out with wxpython? what are the very basics for this purpose? I googled this, but could not find much, thanks!!
|
2010/09/02
|
[
"https://Stackoverflow.com/questions/3631510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/427183/"
] |
JavaScript is single threaded. So this wouldn't apply to JavaScript.
However, it is possible to spawn multiple threads through a very **limited** [Worker](http://www.w3.org/TR/workers/#dedicated-workers-and-the-worker-interface) interface introduced in HTML5 and is already available on some browsers. From an [MDC article](https://developer.mozilla.org/en/Using_web_workers),
>
> The Worker interface **spawns real OS-level threads**, and concurrency can cause interesting effects in your code if you aren't careful. However, in the case of web workers, the carefully controlled communication points with other threads means that it's actually very hard to cause concurrency problems. There's no access to non-thread safe components or the DOM and you have to pass specific data in and out of a thread through serialized objects. So you have to work really hard to cause problems in your code.
>
>
>
What do you need this for?
|
For most things in JavaScript there's one thread, so there's no method for this, since it'd invariable by "1" where you could access such information. There are more threads in the background for events and queuing (handled by the browser), but as far as your code's concerned, there's a main thread.
Java != JavaScript, they only share 4 letters :)
| 13,105
|
3,664,124
|
I posted this basic question before, but didn't get an answer I could work with.
I've been writing applications on my Mac, and have been physically making them into .app bundles
(i.e., making the directories and plist files by hand). But when I open a file in the application by right clicking on the file in finder and specifying my app, how do I then reference that file?
I mostly use python, but I'm looking for a way that is fairly universal.
My first guess was as an argument, as were the answers to my previous post, but that is not the case.
Py:
```
>>> print(sys.argv[1:])
'-psn_0_#######'
```
Where is the file reference?
Thanks in advance,
|
2010/09/08
|
[
"https://Stackoverflow.com/questions/3664124",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/428582/"
] |
The file is passed by the Apple Event, see [this Apple document](http://developer.apple.com/mac/library/documentation/cocoa/conceptual/ScriptableCocoaApplications/SApps_handle_AEs/SAppsHandleAEs.html#//apple_ref/doc/uid/20001239-BBCBCIJE). You need to receive that from inside your Python script. If it's a PyObjC script, there should be a standard way to translate what's explained in that Apple document in Objective-C to Python.
If your script is not a GUI app, but if you just want to pass a file to a Python script by clicking it, the easiest way would be to use Automator. There's an action called "Run Shell Script", to which you can specify the interpreter and the code. You can choose whether you receive the file names via `stdin` or the arguments. Automator packages the script into the app for you.
|
Are we referring to the file where per-user binding of file types/extensions are set to point to certain applications?
```
~/Library/Preferences/com.apple.LaunchServices.plist
```
The framework is [launchservices](http://developer.apple.com/library/mac/#documentation/Carbon/Conceptual/LaunchServicesConcepts/LSCIntro/LSCIntro.html), which had received a good amount of scrutiny due to 'murkiness' early in 10.6, and (like all property list files) can be altered via the bridges to ObjectiveC made for Python and Ruby. [Here's](http://wranglingmacs.blogspot.com/2010/05/using-plists-from-python.html) a link with Python code examples for how to associate a given file type with an app.
| 13,110
|
52,079,637
|
I have been trying to scroll the output of a script run via IPython in a separate window/session created by tmux (not in a notebook — I am just using plain IPython, not the usual IPython notebook). While the program is producing output, I can't scroll the window to see what was printed earlier; I can only see the latest results. Honestly, this problem came totally unexpectedly, because I usually use a notebook and have never encountered a similar issue. Could you help me please? Thanks in advance.
|
2018/08/29
|
[
"https://Stackoverflow.com/questions/52079637",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10285919/"
] |
I finally found the answer to my problem here: <https://superuser.com/questions/209437/how-do-i-scroll-in-tmux>. It did not depend on IPython but on tmux, as I was suspecting. To scroll in tmux, press Ctrl+b and then [ to enter copy mode; then use the arrow keys or Page Up/Down to scroll through the window. Press Ctrl+c or Esc to quit copy mode when you have finished scrolling.
|
Use threads for data processing, visualization, and GUI interaction; that is how you can avoid the freeze.
| 13,117
|
38,079,862
|
Sometimes this code works just fine and runs through, but other times it throws the `'int' object is not callable` error. I am not really sure why it does so.
```
for ship in ships:
vert_or_horz = randint(0,100) % 2
for size in range(ship.size):
if size == 0:
ship.location.append((random_row(board),random_col(board)))
else:
# This is the horizontal placing
if vert_or_horz != 0 and ship.size > 1:
ship.location.append((ship.location[0][0], \
ship.location[0][1] + size))
while(ship.location[size][1] > len(board[0])) or \
(ship.location[size][1] < 0):
if ship.location[size][1] > len(board[0]):
ship.location[size][1]((ship.location[0][0], \
ship.location[0][1] - size))
if ship.location[size][1] < 0:
ship.location[size][1]((ship.location[0][0], \
ship.location[0][1] + size))
# This is the vertical placing
if vert_or_horz == 0 and ship.size > 1:
ship.location.append((ship.location[0][0] + size, \
ship.location[0][1]))
while(ship.location[size][1] > len(board[0])) or \
(ship.location[size][1] < 0):
if ship.location[size][1] > len(board[0]):
ship.location[size][1] \
((ship.location[0][0] - size, \
ship.location[0][1]))
if ship.location[size][1] < 0:
ship.location[size][1] \
((ship.location[0][0] + size, \
ship.location[0][1]))
```
Here is the traceback:
```
Traceback (most recent call last):
File "python", line 217, in <module>
File "python", line 124, in create_med_game
ship.location[size][1]((ship.location[0][0], \
TypeError: 'int' object is not callable
```
|
2016/06/28
|
[
"https://Stackoverflow.com/questions/38079862",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2759574/"
] |
You might be able to directly call [TransferHandler#exportAsDrag(...)](http://docs.oracle.com/javase/8/docs/api/javax/swing/TransferHandler.html#exportAsDrag-javax.swing.JComponent-java.awt.event.InputEvent-int-) method in `MouseMotionListener#mouseDragged(...)`:
```
import java.awt.*;
import java.awt.event.*;
import java.util.Optional;
import javax.swing.*;
import javax.swing.table.*;
public class Main2 {
public JComponent makeUI() {
Object columnNames[] = {"Column 1", "Column 2", "Column 3"};
Object data[][] = {
{ "a", "a", "a" },
{ "a", "a", "a" },
{ "a", "a", "a" },
{ "a", "a", "a" },
{ "a", "a", "a" },
};
JTable table = new JTable(new DefaultTableModel(data, columnNames));
table.setDragEnabled(true);
table.setDropMode(DropMode.INSERT_ROWS);
table.addMouseMotionListener(new MouseAdapter() {
@Override public void mouseDragged(MouseEvent e) {
JComponent c = (JComponent) e.getComponent();
Optional.ofNullable(c.getTransferHandler())
.ifPresent(th -> th.exportAsDrag(c, e, TransferHandler.COPY));
}
});
table.setSelectionMode(ListSelectionModel.MULTIPLE_INTERVAL_SELECTION);
table.setColumnSelectionAllowed(true);
table.setRowSelectionAllowed(true);
JPanel p = new JPanel(new BorderLayout());
p.add(new JScrollPane(table));
p.add(new JTextField(), BorderLayout.SOUTH);
return p;
}
public static void main(String... args) {
EventQueue.invokeLater(() -> {
JFrame f = new JFrame();
f.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
f.getContentPane().add(new Main2().makeUI());
f.setSize(320, 240);
f.setLocationRelativeTo(null);
f.setVisible(true);
});
}
}
```
|
>
> Selection mode should only be possible via click and should be extendable via shift + click, not via dragging
>
>
>
You may be able to alter the behavior by overriding JTable's `processMouseEvent` method with some additional logic. When a button down event occurs - and shift is not down - alter the selection of the JTable to the current clicked row before calling the parent method.
```
JTable table = new JTable( rowData, columnNames){
@Override
public void processMouseEvent(MouseEvent e){
Component c = (Component)e.getSource();
if (e.getModifiersEx() == MouseEvent.BUTTON1_DOWN_MASK && ( e.getModifiers() & MouseEvent.SHIFT_DOWN_MASK) == 0 ){
int row = rowAtPoint(e.getPoint());
int col = columnAtPoint(e.getPoint());
if ( !isCellSelected(row, col) ){
clearSelection();
changeSelection(row, col, false, false);
}
}
MouseEvent e1 = new MouseEvent(c, e.getID(), e.getWhen(),
e.getModifiers() | InputEvent.CTRL_MASK,
e.getX(), e.getY(),
e.getClickCount(),
e.isPopupTrigger() ,
e.getButton());
super.processMouseEvent(e1);
}
};
```
| 13,118
|
49,280,016
|
I'm trying to create a dataset from a CSV file with 784-bit long rows. Here's my code:
```
import tensorflow as tf
f = open("test.csv", "r")
csvreader = csv.reader(f)
gen = (row for row in csvreader)
ds = tf.data.Dataset()
ds.from_generator(gen, [tf.uint8]*28**2)
```
I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-4b244ea66c1d> in <module>()
12 gen = (row for row in csvreader_pat_trn)
13 ds = tf.data.Dataset()
---> 14 ds.from_generator(gen, [tf.uint8]*28**2)
~/Documents/Programming/ANN/labs/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py in from_generator(generator, output_types, output_shapes)
317 """
318 if not callable(generator):
--> 319 raise TypeError("`generator` must be callable.")
320 if output_shapes is None:
321 output_shapes = nest.map_structure(
TypeError: `generator` must be callable.
```
The [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) said that I should have a generator passed to `from_generator()`, so that's what I did, `gen` is a generator. But now it's complaining that my generator isn't **callable**. How can I make the generator callable so I can get this to work?
**EDIT:**
I'd like to add that I'm using python 3.6.4. Is this the reason for the error?
|
2018/03/14
|
[
"https://Stackoverflow.com/questions/49280016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3128156/"
] |
The `generator` argument (perhaps confusingly) should not actually be a generator, but a callable returning an iterable (for example, a generator function). Probably the easiest option here is to use a `lambda`. Also, a couple of errors: 1) [`tf.data.Dataset.from_generator`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator) is meant to be called as a class factory method, not from an instance 2) the function (like a few other in TensorFlow) is weirdly picky about parameters, and it wants you to give the sequence of dtypes and each data row as `tuple`s (instead of the `list`s returned by the CSV reader), you can use for example `map` for that:
```
import csv
import tensorflow as tf
with open("test.csv", "r") as f:
csvreader = csv.reader(f)
ds = tf.data.Dataset.from_generator(lambda: map(tuple, csvreader),
(tf.uint8,) * (28 ** 2))
```
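The callable-vs-generator distinction can be demonstrated without TensorFlow at all. A minimal sketch of the contract that `from_generator`-style APIs rely on (the API calls the function anew whenever a fresh iterator is needed):

```python
rows = [("1", "2"), ("3", "4")]

def gen():            # a generator *function*: calling it returns a new generator
    for row in rows:
        yield tuple(row)

it1, it2 = gen(), gen()     # each call produces an independent iterator
print(list(it1))            # both can be consumed in full,
print(list(it2))            # e.g. once per training epoch

g = gen()
list(g)                     # a single generator object, by contrast,
print(list(g))              # is exhausted after one pass
```

This is why passing an already-created generator object (as in the question) fails the `callable(generator)` check, while a `lambda` or a generator function passes it.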
|
[From the docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator), which you linked:
>
> The `generator` argument must be a callable object that returns an
> object that support the `iter()` protocol (e.g. a generator function)
>
>
>
This means you should be able to do something like this:
```
import tensorflow as tf
import csv
with open("test.csv", "r") as f:
csvreader = csv.reader(f)
gen = lambda: (row for row in csvreader)
ds = tf.data.Dataset()
ds.from_generator(gen, [tf.uint8]*28**2)
```
In other words, the function you pass must produce a generator when called. This is easy to achieve when making it an anonymous function (a `lambda`).
Alternatively try this, which is closer to how it is done in the docs:
```
import tensorflow as tf
import csv
def read_csv(file_name="test.csv"):
with open(file_name) as f:
reader = csv.reader(f)
for row in reader:
yield row
ds = tf.data.Dataset.from_generator(read_csv, [tf.uint8]*28**2)
```
(If you need a different file name than whatever default you set, you can use `functools.partial(read_csv, file_name="whatever.csv")`.)
The difference is that the `read_csv` function returns the generator object when called, whereas what you constructed is already the generator object and equivalent to doing:
```
gen = read_csv()
ds = tf.data.Dataset.from_generator(gen, [tf.uint8]*28**2) # does not work
```
| 13,119
|
49,244,935
|
I am trying to execute a Python program as a background process inside a container with `kubectl` as below (`kubectl` issued on local machine):
`kubectl exec -it <container_id> -- bash -c "cd some-dir && (python xxx.py --arg1 abc &)"`
When I log in to the container and check `ps -ef` I do not see this process running. Also, there is no output from `kubectl` command itself.
* Is the `kubectl` command issued correctly?
* Is there a better way to achieve the same?
* How can I see the output/logs printed off the background process being run?
* If I need to stop this background process after some duration, what is the best way to do this?
|
2018/03/12
|
[
"https://Stackoverflow.com/questions/49244935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1650281/"
] |
The [nohup](https://en.wikipedia.org/wiki/Nohup#Overcoming_hanging) Wikipedia page can help; you need to redirect all three IO streams (stdout, stdin and stderr) - an example with `yes`:
```
kubectl exec pod -- bash -c "yes > /dev/null 2> /dev/null &"
```
`nohup` is not required in the above case because I did not allocate a pseudo terminal (no `-t` flag) and the shell was not interactive (no `-i` flag) so no `HUP` signal is sent to the `yes` process on session termination. See [this](https://unix.stackexchange.com/questions/84737/in-which-cases-is-sighup-not-sent-to-a-job-when-you-log-out#answer-85296) answer for more details.
Redirecting `/dev/null` to stdin is not required in the above case since stdin already refers to `/dev/null` (you can see this by running `ls -l /proc/YES_PID/fd` in another shell).
To see the output you can instead redirect stdout to a file.
To stop the process you'd need to identity the PID of the process you want to stop ([pgrep](https://linux.die.net/man/1/pgrep) could be useful for this purpose) and send a fatal signal to it (`kill PID` for example).
If you want to stop the process after a fixed duration, [timeout](https://linux.die.net/man/1/timeout) might be a better option.
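The capture-and-stop pattern can be sketched locally without `kubectl`; the same redirection and signalling work unchanged inside a container shell. Here `sleep` stands in for the real long-running process, and the log path is an arbitrary choice:

```shell
# start a background process with all output captured to a file
sleep 30 > /tmp/bg.log 2>&1 &
pid=$!

# ... later: stop it by PID (pgrep/pkill can find it by name instead)
kill "$pid"
wait "$pid" 2>/dev/null

echo "stopped $pid"
```

Inspecting `/tmp/bg.log` afterwards shows everything the process printed while backgrounded.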
|
Actually, the best way to make this kind of things is adding an entry point to your container and run execute the commands there.
Like:
`entrypoint.sh`:
```
#!/bin/bash
set -e
cd some-dir && (python xxx.py --arg1 abc &)
./somethingelse.sh
exec "$@"
```
You wouldn't need to go manually inside every single container and run the command.
| 13,121
|
21,255,168
|
I am trying to understand why does the recursive function returns 1003 instead of 1005.
```
l = [1,2,3]
def sum(l):
x, *y = l
return x + sum(y) if y else 1000
sum(l)
```
According to [pythontutor](http://pythontutor.com/visualize.html) the last value of `y` list is 5 and that would make return value `1000 + sum([2,3])` 1005, am I correct?

|
2014/01/21
|
[
"https://Stackoverflow.com/questions/21255168",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1984680/"
] |
>
> According to pythontutor the last value of y list is 5 and that would make return value `1000 + sum([2,3])` 1005, am I correct?
>
>
>
No, the last value of `y` is `[]`. It's never anything but a list, and besides, there are no `5`s for it to ever be. On top of that, the recursive return value is always on the right of the `+`, only the `x` is ever on the left.
Let's step through it:
```
sum([1, 2, 3]) = 1 + sum([2, 3])
sum([2, 3]) = 2 + sum([3])
sum([3]) = 1000
```
So, substituting back:
```
sum([2, 3]) = 2 + 1000 = 1002
sum([1, 2, 3] = 1 + 1002 = 1003
```
The problem is that when `y` is empty, you're returning `1000`, not `x + 1000`.
Your confusion may just be a matter of precedence. Maybe you expected this:
```
return x + sum(y) if y else 1000
```
… to mean this:
```
return x + (sum(y) if y else 1000)
```
… but actually, it means this:
```
return (x + sum(y)) if y else 1000
```
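The difference is easy to check by running both variants side by side — a small sketch keeping the question's 1000 base value:

```python
def sum_plain(l):
    x, *y = l
    return x + sum_plain(y) if y else 1000    # == (x + sum_plain(y)) if y else 1000

def sum_paren(l):
    x, *y = l
    return x + (sum_paren(y) if y else 1000)  # x always contributes to the sum

print(sum_plain([1, 2, 3]))   # 1003: the base case returns 1000 instead of 3 + 1000
print(sum_paren([1, 2, 3]))   # 1006: 1 + 2 + 3 + 1000
```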
|
Recursion step by step
1) `x = 1` `y = [2,3]`
2) `x = 2` `y = [3]`
3) `x = 3` `y = []`
Note that step 3) returns `1000` since `not y`. This is because your return statement is equivalent to
```
(x + sum(y)) if y else 1000
```
Thus we have
3) `1000`
2) `1000 + 2`
1) `1002 + 1`
The result is `1003`.
So perhaps what you are looking for is:
```
return x + sum(y) if y else 1000 + x
```
or (copied from ndpu's answer):
```
return x + (sum(y) if y else 1000)
```
(take `x` into account in last step)
| 13,122
|
72,173,762
|
I am trying to speed up my code by splitting the job among several python processes. In the single-threaded version of the code, I am looping through a code that accumulates the result in several matrices of different dimensions. Since there's no data sharing between each iteration, I can divide the task among several processes, each one having its own local set of matrices to accumulate the result. When all the processes are done, I combine the matrices of all the processes.
My idea of solving the issue is to pass a list of the same matrices to each process such that each process writes to this matrix when it's done. My question is, how do I pass this list of numpy array matrices to the processes? This seems like a straightforward thing to do except that it seems I can only pass a 1D array to the processes. Although a temporary solution would be to flatten all the numpy arrays and keep track of where each one begins and ends, is there a way where I simply pass a list of the matrices to the processes?
|
2022/05/09
|
[
"https://Stackoverflow.com/questions/72173762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11342618/"
] |
Sort a Range
------------
```vb
Sub SortData()
'Dim wb As Workbook: Set wb = ThisWorkbook
'Dim ws As Worksheet: Set ws = wb.Worksheets("Sheet1")
Dim ws As Worksheet: Set ws = ActiveSheet
With ws.Range("A1").CurrentRegion
.Sort _
Key1:=.Columns(1), Order1:=xlAscending, _
Key2:=.Columns(2), Order2:=xlAscending, _
Key3:=.Columns(8), Order3:=xlAscending, _
Header:=xlYes
End With
End Sub
```
|
You aren't quite using the `With` statement correctly. Try it like this:
```
Sub Sort()
With ActiveSheet.Sort
Cells.Select
.SortFields.Clear
.SortFields.Add2 key:=Range("A2:A35" _
), SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
.SortFields.Add2 key:=Range("B2:B35" _
), SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
.SortFields.Add2 key:=Range("H2:H35" _
), SortOn:=xlSortOnValues, Order:=xlAscending, DataOption:=xlSortNormal
.SetRange Range("A1:I35")
.Header = xlYes
.MatchCase = False
.Orientation = xlTopToBottom
.SortMethod = xlPinYin
.Apply
End With
End Sub
```
| 13,127
|
20,213,981
|
I have following list of items (key-value pairs):
```
items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
```
What I want to get:
```
{
'A' : 1,
'B' : [1,2]
'C' : 3
}
```
My naive solution:
```
res = {}
for (k,v) in items:
if k in res:
res[k].append(v)
else:
res[k] = [v]
```
I'm looking for some optimised more pythonic solution, anyone?
|
2013/11/26
|
[
"https://Stackoverflow.com/questions/20213981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/940208/"
] |
You could use `defaultdict` here.
```
from collections import defaultdict
res = defaultdict(list)
for (k,v) in items:
res[k].append(v)
# Use as dict(res)
```
**EDIT:**
This is using groupby, but please note, **the above is far cleaner and neater to the eyes**:
```
>>> data = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
>>> dict([(key,list(v[1] for v in group)) for (key,group) in groupby(data, lambda x: x[0])])
{'A': [1], 'C': [3], 'B': [1, 2]}
```
**Downside**: Every element is a list. Change lists to generators as needed.
---
To convert all single item lists to individual items:
```
>>> res = # Array of tuples, not dict
>>> res = [(key,(value[0] if len(value) == 1 else value)) for key,value in res]
>>> res
[('A', 1), ('B', [1, 2]), ('C', 3)]
```
|
This is very easy to do with `defaultdict`, which can be imported from `collections`.
```
>>> from collections import defaultdict
>>> items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
>>> d = defaultdict(list)
>>> for k, v in items:
d[k].append(v)
>>> d
defaultdict(<class 'list'>, {'A': [1], 'C': [3], 'B': [1, 2]})
```
You *can* also use a dictionary comprehension:
```
>>> d = {l: [var for key, var in items if key == l] for l in {v[0] for v in items}}
>>> d
{'A': [1], 'C': [3], 'B': [1, 2]}
```
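`itertools.groupby` is another option when reordering the items is acceptable — note that it only groups *consecutive* equal keys, so the input must be sorted first. A sketch:

```python
from itertools import groupby
from operator import itemgetter

items = [('A', 1), ('B', 1), ('B', 2), ('C', 3)]
d = {k: [v for _, v in grp]
     for k, grp in groupby(sorted(items, key=itemgetter(0)), key=itemgetter(0))}
print(d)  # {'A': [1], 'B': [1, 2], 'C': [3]}
```

For unsorted input of any size, though, the `defaultdict` loop above stays O(n) and is usually the cleaner choice.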
| 13,128
|
12,924,287
|
I have a text file like this:-
```
V1xx AB1
V2xx AC34
V3xx AB1
```
Can we add `;` at each end of line through python script?
```
V1xx AB1;
V2xx AC34;
V3xx AB1;
```
|
2012/10/16
|
[
"https://Stackoverflow.com/questions/12924287",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1750896/"
] |
Here's what you can try. I have `overwritten the same file` though.
You can `try creating a new one`(I leave it to you) - You'll need to modify your `with` statement a little : -
```
lines = ""
with open('D:\File.txt') as file:
for line in file:
lines += line.strip() + ";\n"
file = open('D:\File.txt', "w+")
file.writelines(lines)
file.flush()
```
**UPDATE**: - For **in-place modification** of file, you can use `fileinput` module: -
```
import fileinput
for line in fileinput.input('D:\File.txt', inplace = True):
print line.strip() + ";"
```
|
```
#Open the original file, and create a blank file in write mode
File = open("D:\myfilepath\myfile.txt")
FileCopy = open("D:\myfilepath\myfile_Copy.txt","w")
#For each line in the file, remove the end line character,
#insert a semicolon, and then add a new end line character.
#copy these lines into the blank file
for line in File:
CleanLine=line.strip("\n")
FileCopy.write(CleanLine+";\n")
FileCopy.close()
File.close()
#Replace the original file with the copied file
File = open("D:\myfilepath\myfile.txt","w")
FileCopy = open("D:\myfilepath\myfile_Copy.txt")
for line in FileCopy:
File.write(line)
FileCopy.close()
File.close()
```
Notes: I have left the "copy file" in there as a back up. You can manually delete it or use os.remove() (if you do that don't forget to import the os module)
| 13,137
|
34,839,184
|
Which approach is the best when we want to deploy two websites on the same aws EC2 instance?
* two separate Docker container each consists a django project
* one Docker container consists of two separate django project
if the two of them are basic django-cms projects and we know they wont expand in future (nor python packages dependency neither vertical or horizontal scaling)
My purpose is to deploy 10 low-traffic django-cms websites on the same aws ec2 instance...
P.S.
I'm using Elastic Beanstalk
|
2016/01/17
|
[
"https://Stackoverflow.com/questions/34839184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4566737/"
] |
I am sure the answer is "It depends" but I believe the whole reason for using Docker is to isolate your environment, and gain flexibility.
So what happens if one project uses a bunch of python packages, and the other uses a bunch of other packages. Worse, what if any of them conflict with each other. Forget about a case where a package may require some changes to the OS.
And what happens if you need to scale and maybe move to two separate EC2 instances?
I also believe you want to isolate changes to one project when fixing the other, rather than been forced to always re-deploy both projects together.
In both of this cases, **using two separate docker containers** is safer, and more flexible/powerful.
|
What is easier to maintain for you?
* Handling software updates?
* Redeploying sources?
* Reconfiguring each app?
I like separation and therefore I would define two separate Docker containers, but then you're in need of a load balancer in front, because each container will need a separate port. The load balancer itself will need either subdomains or domains in order to separate the requests for each container.
Examples:
* www.app1.com -> container1
* www.app2.com -> container2
Example with subdomain:
* app1.example.com -> container1
* app2.example.com -> container2
Possible load balancers:
* Apache on same EC2 instance.
* Nginx on same EC2 instance. I have an example container here: [blacklabelops/nginx](https://hub.docker.com/r/blacklabelops/nginx/)
* AmazonWS Load Balancer included in EC2.
Benefit from load balancer in front:
* Minimize Downtime -> Both applications can have an update without any downtime for the other.
* Enables High Availability -> Possible Blue White Deployments of one app. Depends on your app.
You can circumvent this complexity by using your example tutorial where everything is inside the same container. You will also have all the configuration details in the same place.
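For the subdomain routing above, an Nginx configuration might look roughly like this — a sketch only, with hypothetical upstream ports (each container would publish its own):

```nginx
server {
    listen 80;
    server_name app1.example.com;
    location / { proxy_pass http://127.0.0.1:8001; }  # container1's published port
}

server {
    listen 80;
    server_name app2.example.com;
    location / { proxy_pass http://127.0.0.1:8002; }  # container2's published port
}
```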
| 13,140
|
67,404,079
|
I want to read a file and return the words and the spaces in the file with Python,
and I don't want to go through it character by character.
I already used :
```
def openfile(name_file) :
with open(name_file) as f :
l = re.split(' ',re.sub('\n',' ',f.read()))
sentence = []
for i in l :
sentence.append(i)
print(sentence)
```
input :
```
Clustalo O(1.2.4) multiple sequence alignement
id_ref ATGFDFVREF--SFERFSRSFVSRVSVSVRVSFDFVEGREHEHZ
id_iso ADEFZRVSDFVSSVDFSVSEFVDCSZF--ZEVVDSVZRVEFDFV
-------------- ------- ------------- - ---
```
output on my script :
```
['clustal','O(1.2.4)','multiple','sequence','alignement', etc...]
```
expected output :
```
['clustal','','O(1.2.4)','','multiple','','sequence','','alignement', etc...]
```
|
2021/05/05
|
[
"https://Stackoverflow.com/questions/67404079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15844030/"
] |
This is not the best solution for this, but you can do something like this:
```
import re
def openfile(name_file):
with open(name_file) as f:
original_list = []
lines = f.readlines()
for line in lines:
li = line.split(' ')
for item in li:
if item != '':
original_list.append(item.strip('\n'))
original_list.append('')
print(original_list)
```
Output:
```
['clustal', '', 'O(1.2.4)', '', 'multiple', '', 'sequence', '', 'alignement10', '']
```
If you don't need the extra `''` at the end you can simply remove it with
```
original_list.pop()
```
|
Add a parameter at the end of `print`:
```py
def openfile(name_file) :
with open(name_file) as f :
l = re.split(' ',re.sub('\n',' ',f.read()))
for i in l :
print('i :', i, '\ni : ')
```
| 13,143
|
54,879,916
|
I've the following string in python, example:
```
"Peter North / John West"
```
Note that there are two spaces before and after the forward slash.
What should I do such that I can clean it to become
```
"Peter North_John West"
```
I tried using regex but I am not exactly sure how.
Should I use re.sub or pandas.replace?
|
2019/02/26
|
[
"https://Stackoverflow.com/questions/54879916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11117700/"
] |
You can use
```
a = "Peter North / John West"
import re
a = re.sub(' +/ +','_',a)
```
Any number of spaces with slash followed by any number of slashes can be replaced by this pattern.
|
In case of varying number of white spaces before and after `/`:
```
import re
re.sub("\s+/\s+", "_", "Peter North / John West")
# Peter North_John West
```
| 13,146
|
10,184,476
|
I'm making a Django page that has a sidebar with some info that is loaded from external websites(e.g. bus arrival times).
I'm new to web development and I recognize this as a bottleneck. As it is, the page hangs for a fraction of a second as it loads the data from the other sites. It doesn't display anything until it gets this info because it runs python scripts to get the data before baking it into the html.
Ideally, it would display the majority of the page loaded directly off my web server and then have a little "loading" gif or something until it actually manages to grab the data before displaying that.
How can I achieve this? I presume javascript will be useful? How can I get it to integrate with my existing poller scripts?
|
2012/04/17
|
[
"https://Stackoverflow.com/questions/10184476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1337686/"
] |
You probably don't need up-to-the-second information, so have [another process](http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/) load the data into a cache, and have your website read it from the local cache.
|
The easiest but not most beautiful way to integrate something like this would be with iframes. Just make iframes for the secondary stuff, and they will load themselves in due time. No javascript required.
| 13,147
|
48,282,074
|
I'm trying to learn python for data science application and signed up for a course. I am doing the exercises and got stuck even though my answer is the same as the one in the answer key.
I'm basically trying to add two new items to a dictionary with the following piece of code:
```
# Create a new key-value pair for 'Inner London'
location_dict['Inner London'] = location_dict['Camden'] + location_dict['Southwark']
# Create a new key-value pair for 'Outer London'
location_dict['Outer London'] = location_dict['Brent'] + location_dict['Redbridge']
```
but when I run it I am getting a `TypeError`.
|
2018/01/16
|
[
"https://Stackoverflow.com/questions/48282074",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9224360/"
] |
I had a list in there with the sets; I think that was what was throwing the error. I just converted the list to a set and joined them with `.union`. Thank you @Adelin, @Bilkokuya, @Piinthesky, and @Kurast for your quick responses and input!
|
for union of sets you can use `|`
```
location_dict['Camden'] | location_dict['Southwark']
```
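A quick self-contained check (with made-up sets standing in for the borough values) that the `|` operator and `.union` give the same result:

```python
camden = {"flat", "house"}
southwark = {"house", "studio"}

combined = camden | southwark  # union operator, same as camden.union(southwark)
assert combined == camden.union(southwark)
print(sorted(combined))  # ['flat', 'house', 'studio']
```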
| 13,148
|
358,471
|
I am writing a program that requires the use of XMODEM to transfer data from a sensor device. I'd like to avoid having to write my own XMODEM code, so I was wondering if anyone knew if there was a python XMODEM module available anywhere?
|
2008/12/11
|
[
"https://Stackoverflow.com/questions/358471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
There is an XMODEM module on PyPI. It handles both sending and receiving of data with XMODEM. Below is a sample of its usage:
```
import serial
try:
from cStringIO import StringIO
except:
from StringIO import StringIO
from xmodem import XMODEM, NAK
from time import sleep
def readUntil(char = None):
def serialPortReader():
while True:
tmp = port.read(1)
if not tmp or (char and char == tmp):
break
yield tmp
return ''.join(serialPortReader())
def getc(size, timeout=1):
return port.read(size)
def putc(data, timeout=1):
port.write(data)
sleep(0.001) # give device time to prepare new buffer and start sending it
port = serial.Serial(port='COM5',parity=serial.PARITY_NONE,bytesize=serial.EIGHTBITS,stopbits=serial.STOPBITS_ONE,timeout=0,xonxoff=0,rtscts=0,dsrdtr=0,baudrate=115200)
port.write("command that initiates xmodem send from device\r\n")
sleep(0.02) # give device time to handle command and start sending response
readUntil(NAK)
buffer = StringIO()
XMODEM(getc, putc).recv(buffer, crc_mode = 0, quiet = 1)
contents = buffer.getvalue()
buffer.close()
readUntil()
```
|
I think you’re stuck with rolling your own.
You might be able to use [sz](http://linux.about.com/library/cmd/blcmdl1_sz.htm), which implements X/Y/ZMODEM. You could call out to the binary, or port the necessary code to Python.
| 13,149
|
12,243,129
|
For a research project I am trying to boot as many VM's as possible, using python libvirt bindings, in KVM under Ubuntu server 12.04. All the VM's are set to idle after boot, and to use a minimum amount of memory. At the most I was able to boot 1000 VM's on a single host, at which point the kernel (Linux 3x) became unresponsive, even if both CPU- and memory usage is nowhere near the limits (48 cores AMD, 128GB mem.) Before this, the booting process became successively slower, after a couple of hundred VM's.
I assume this must be related to the KVM/Qemu driver, as the linux kernel itself should have no problem handling this few processes. However, I did read that the Qemu driver was now multi-threaded. Any ideas of what the cause of this slowness may be - or at least where I should start looking?
|
2012/09/03
|
[
"https://Stackoverflow.com/questions/12243129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1642856/"
] |
You are booting all the VMs using qemu-kvm, right, and after hundreds of VMs it becomes successively slower. When you notice the slowdown, stop using KVM and boot with plain qemu instead; I expect you will see the same slowness. My guess is that after that many VMs, KVM's hardware support is exhausted, because KVM is essentially a software layer over a few added hardware registers. So KVM might be the culprit here.
Also, what is the purpose of this experiment?
|
The following virtual hardware limits for guests have been tested. We ensure host and VMs install and work successfully, even when reaching the limits and there are no major performance regressions (CPU, memory, disk, network) since the last release (SUSE Linux Enterprise Server 11 SP1).
Max. Guest RAM Size --- 512 GB
Max. Virtual CPUs per Guest --- 64
Max. Virtual Network Devices per Guest --- 8
Max. Block Devices per Guest --- 4 emulated (IDE), 20 para-virtual (using virtio-blk)
Max. Number of VM Guests per VM Host Server --- Limit is defined as the total number of virtual CPUs in all guests being no greater than 8 times the number of CPU cores in the host
for more limitations of KVm please refer [this](http://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/cha.kvm.limits.html) document link
| 13,159
|
13,965,823
|
I'm trying to get NLTK and wordnet working on Heroku. I've already done
```
heroku run python
nltk.download()
wordnet
pip install -r requirements.txt
```
But I get this error:
```
Resource 'corpora/wordnet' not found. Please use the NLTK
Downloader to obtain the resource: >>> nltk.download()
Searched in:
- '/app/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
```
Yet, I've looked at in /app/nltk\_data and it's there, so I'm not sure what's going on.
|
2012/12/20
|
[
"https://Stackoverflow.com/questions/13965823",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1881006/"
] |
I know this is an old question, but since the "right" answer has changed thanks to Heroku offering support for `nltk`, I thought it might be worthwhile to answer.
Heroku now supports `nltk`. If you need to download something for `nltk` (wordnet in this example, or perhaps stopwords or a corpora), you can do so by simply including an `nltk.txt` file in the same root directory where you have your `Procfile` and `requirements.txt`. In your `nltk.txt` file you list each item you would like to download. For a project I just deployed I needed stopwords and wordnet, so my `nltk.txt` looks like this:
```
stopwords
wordnet
```
Pretty straightforward. And, of course, make sure you have the appropriate version of `nltk` specified in your `Pipfile` or `requirements.txt`. For the ground truth, visit <https://devcenter.heroku.com/articles/python-nltk>.
|
I was also facing this issue when I tried to use `lemmatizer.lemmatize('goes')`; it's actually because the required packages have not been downloaded.
So try downloading them using the following code, it may solve many problems like this:
nltk.download('wordnet')
nltk.download('omw-1.4')
Thank you.
| 13,160
|
64,734,118
|
I'm trying to make a discord bot, and when I try to load a .env with load\_dotenv() it doesn't work because it says
```
Traceback (most recent call last):
File "/home/fanjin/Documents/Python Projects/Discord Bot/bot.py", line 15, in <module>
client.run(TOKEN)
File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 708, in run
return future.result()
File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 687, in runner
await self.start(*args, **kwargs)
File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 650, in start
await self.login(*args, bot=bot)
File "/home/fanjin/.local/lib/python3.8/site-packages/discord/client.py", line 499, in login
await self.http.static_login(token.strip(), bot=bot)
AttributeError: 'NoneType' object has no attribute 'strip
```
Here's my code for the bot:
```
import os
import discord
from dotenv import load_dotenv
load_dotenv()
TOKEN = os.getenv('DISCORD_TOKEN')
client = discord.Client()
@client.event
async def on_ready():
print(f'{client.user} has connected to Discord!')
client.run(TOKEN)
```
And the save.env file: (It's a fake token)
```
# .env
DISCORD_TOKEN={Bzc0NjfUH8fEWFjg2NDMyMjY2.X6coqw.JyiOR89JIH7fFFoyOMufK_1A}
```
Both files are in the same directory, and I even tried to explicitly specify the .env's path with
```
env_path = Path('path/to/file') / '.env'
load_dotenv(dotenv_path=env_path)
```
but that also didn't work
|
2020/11/08
|
[
"https://Stackoverflow.com/questions/64734118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9509783/"
] |
I had the same error trying to load my environment configuration on Ubuntu 20.04 and python-dotenv 0.15.0. I was able to rectify this using the Python interpreter, which will log any error encountered while trying to load your environment. Whenever your environment variables are loaded successfully, `load_dotenv()` returns `True`.
For me it was an issue with my configuration file (a syntax error) that broke the loading process. All I needed to do was go to my environment variable config file and fix the broken syntax.
Try passing `verbose=True` when loading your environment variables (from Python's interpreter) to get more info from `load_dotenv`.
|
So this took me a while. My `load_dotenv()` was returning `True`.
I had commas after some records, which is not correct.
Once I removed the commas, the variables worked.
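To see why a stray comma breaks things, here is a rough stdlib-only approximation of what a `.env` loader does (the real python-dotenv parser is more careful, so treat this as an illustration only): the trailing comma simply becomes part of the value, producing a wrong token:

```python
def parse_env_line(line):
    """Very rough approximation of .env parsing: KEY=VALUE, '#' starts a comment."""
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

# The trailing comma stays inside the value, so the token passed to the API is wrong:
print(parse_env_line("DISCORD_TOKEN=abc123,"))  # ('DISCORD_TOKEN', 'abc123,')
```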
| 13,170
|
63,756,673
|
I'm facing weird issue in my Jupyter-notebook.
In my first cell:
```
import sys
!{sys.executable} -m pip install numpy
!{sys.executable} -m pip install Pillow
```
In the second cell:
```
import numpy as np
from PIL import Image
```
But it says : **ModuleNotFoundError: No module named 'numpy'**
[](https://i.stack.imgur.com/UZvdk.png)
I have used this command to install Jupyter notebook :
```
sudo apt install python3-notebook jupyter jupyter-core python-ipykernel
```
Additional information :
```
pip --version
pip 20.2.2 from /home/maifee/.local/lib/python3.7/site-packages/pip (python 3.7)
python --version
Python 3.7.5
```
|
2020/09/05
|
[
"https://Stackoverflow.com/questions/63756673",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10305444/"
] |
Thanks to @suuuehgi.
When Jupyter Notebook isn't opened as root:
```
import sys
!{sys.executable} -m pip install --user numpy
```
|
I've had occasional weird install issues with Jupyter Notebooks as well when I'm running a particular virtual environment. Generally, installing with pip directly in the notebook in this form:
`!pip install numpy`
fixes it. Let me know how it goes.
| 13,173
|
36,481,891
|
Is there a way to call Excel add-ins from python? In my company there are several excel add-ins that are available, they usually provide direct access to some database and make additional calculations.
What is the best way to call those functions directly from python?
To clarify, I'm NOT interested in accessing python from excel. I'm interested in accessing excel-addins from python.
|
2016/04/07
|
[
"https://Stackoverflow.com/questions/36481891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5181181/"
] |
There are at least 3 possible ways to call an Excel add-in, call the COM add-in directly or through automation.
Microsoft provide online documentation for it's Excel interop (<https://learn.microsoft.com/en-us/dotnet/api/microsoft.office.interop.excel>). Whilst it's for .NET, it highlights the main limitations. You cannot directly call a custom ribbon (which contains the add-in). This is intended to protect one add-in from another. (*Although you can install/ uninstall add-in*).
**COM add-in**
You can call the COM add-in directly using win32com.client. However you can only run methods that are visible to COM. This means you may have to alter the add-in. See a C# tutorial <https://learn.microsoft.com/en-us/visualstudio/vsto/walkthrough-calling-code-in-a-vsto-add-in-from-vba?view=vs-2019>.
Once the method is exposed, then it can be called in Python using [win32com.client](https://pypi.org/project/pywin32/). For example:
```
import win32com.client as win32
def excel():
# This may error if Excel is open.
return win32.gencache.EnsureDispatch('Excel.Application')
xl = excel()
helloWorldAddIn = xl.COMAddIns("HelloWorld") # HelloWorld is the name of my AddIn.
#app = helloWorldAddIn.Application
obj = helloWorldAddIn.Object # Note: This will be None (null) if add-in doesn't expose anything.
obj.processData() # processData is the name of my method
```
**Web scraping**
If you're able to upload your add-in to an office 365 account. Then you could preform web scraping with a package like Selenium. I've not attempted it. Also you'll most likely encounter issues calling external resources like connection strings & http calls.
**Automation**
You can run the add-in using an automation package. For example [pyautogui](https://pyautogui.readthedocs.io/en/latest/). You can write code to control the mouse & keyboard to simulate a user running the add-in. This solution will mean you shouldn't need to update existing add-ins.
A very basic example of calling add-in through automation:
```
import os
import pyautogui
import time
def openFile():
path = "C:/Dev/test.xlsx"
path = os.path.realpath(path)
os.startfile(path)
time.sleep(1)
def fullScreen():
pyautogui.hotkey('win', 'up')
time.sleep(1)
def findAndClickRibbon():
pyautogui.moveTo(550, 50)
pyautogui.click()
time.sleep(1)
def runAddIn():
pyautogui.moveTo(15, 100)
pyautogui.click()
time.sleep(1)
def saveFile():
pyautogui.hotkey('ctrl', 's')
time.sleep(1)
def closeFile():
pyautogui.hotkey('alt', 'f4')
time.sleep(1)
openFile()
fullScreen()
findAndClickRibbon()
runAddIn()
saveFile()
closeFile()
```
|
This [link](https://docs.continuum.io/anaconda/excel) list some of the available packages to work with Excel and Excel files. You might find the answer to your question there.
As a summary, here are the name of some of the listed packages:
>
> 1. openpyxl - Read/Write Excel 2007 xlsx/xlsm files
> 2. xlrd - Extract data from Excel spreadsheets (.xls and .xlsx, versions 2.0 onwards) on any platform
> 3. xlsxwriter - Write files in the Excel 2007+ XLSX file format
> 4. xlwt - Generate spreadsheet files that are compatible with Excel 97/2000/XP/2003, OpenOffice.org Calc, and Gnumeric.
>
>
>
And especially this might be interesting for you:
>
> ExPy - ExPy is freely available demonstration software that is simple to install. Once installed, Excel users have access to built-in Excel functions that wrap Python code. Documentation and examples are provided at the site.
>
>
>
| 13,180
|
74,111,833
|
I want to add a new field to a PostgreSQL database.
It's a not null and unique CharField, like
```
dyn = models.CharField(max_length=31, null=False, unique=True)
```
The database already has relevant records, so it's not an option to
* delete the database
* reset the migrations
* wipe the data
* set a default static value.
How to proceed?
---
Edit
----
Tried to add a `default=uuid.uuid4`
```
dyn = models.CharField(max_length=31, null=False, unique=True, default=uuid.uuid4)
```
but then I get
>
> Ensure this value has at most 31 characters (it has 36).
>
>
>
---
Edit 2
------
If I create a function with [.hex](https://docs.python.org/3/library/uuid.html#uuid.UUID.hex) ([as found here](https://stackoverflow.com/a/48438640/5675325))
```
def hex_uuid():
"""
The UUID as a 32-character lowercase hexadecimal string
"""
return uuid.uuid4().hex
```
and use it in the default
```
dyn = models.CharField(max_length=31, null=False, unique=True, default=hex_uuid)
```
I'll get
>
> Ensure this value has at most 31 characters (it has 32).
>
>
>
***Note***: I don't want to simply get a substring of the result, like adjusting the `hex_uuid()` to have `return str(uuid.uuid4())[:30]`, since that'll increase the collision chances.
|
2022/10/18
|
[
"https://Stackoverflow.com/questions/74111833",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5675325/"
] |
I ended up using [one of the methods](https://stackoverflow.com/a/56398787/5675325) shown by Oleg
```
def dynamic_default_value():
"""
This function will be called when a default value is needed.
It'll return a 31 length string with a-z, 0-9.
"""
alphabet = string.ascii_lowercase + string.digits
return ''.join(random.choices(alphabet, k=31)) # 31 is the length of the string
```
with
```
dyn = models.CharField(max_length=31, null=False, unique=True, default=dynamic_default_value)
```
---
If the field was max\_length of 32 characters then I'd have used the `hex_uuid()` present in the question.
---
If I wanted to make the dynamic field the same as the `id` of another existing unique field in the same model, then [I'd go through the following steps](https://stackoverflow.com/a/66785579/5675325).
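A quick sanity check of the approach above (a self-contained copy of the helper), confirming the generated defaults are 31 characters long and drawn only from a-z and 0-9:

```python
import random
import string

def dynamic_default_value():
    """31-character random string over a-z, 0-9 (copy of the helper above)."""
    alphabet = string.ascii_lowercase + string.digits
    return ''.join(random.choices(alphabet, k=31))

value = dynamic_default_value()
print(value, len(value))
assert len(value) == 31
assert set(value) <= set(string.ascii_lowercase + string.digits)
```

With 36 possible characters per position there are 36**31 possible values, so accidental collisions are vanishingly unlikely in practice (the `unique=True` constraint still guards against them at the database level).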
|
you can reset the migrations or edit it or create new one like:
```
python manage.py makemigrations name_you_want
```
after that:
```
python manage.py migrate same_name
```
edit:
example for funcation:
```
def generate_default_data():
return datetime.now()
class MyModel(models.Model):
field = models.DateTimeField(default=generate_default_data)
```
| 13,181
|
70,133,541
|
I have a list of 4 binary numbers and i want to check if they are divisible by 5, and if it's the case, i print them.
I've tried something but i'm stuck with an error, showing you the error and the code i made.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-5-8c92562788a5> in <module>()
1 bin_liste = ['0100','0110','1010','1001']
2 for element in bin_liste:
----> 3 if element%5 != 0:
4 print(element)
TypeError: not all arguments converted during string formatting
```
my code:
```
bin_liste = ['0100','0110','1010','1001']
for element in bin_liste:
    if element%5 != 0:
        print(element)
```
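For reference, a corrected sketch: the strings need to be converted to integers first, e.g. with `int(element, 2)`, before `%` can be applied (shown here printing the ones that *are* divisible by 5, as the question text describes):

```python
bin_liste = ['0100', '0110', '1010', '1001']
for element in bin_liste:
    if int(element, 2) % 5 == 0:  # int(s, 2) parses a binary string into an integer
        print(element)            # prints 1010 (decimal 10)
```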
|
2021/11/27
|
[
"https://Stackoverflow.com/questions/70133541",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17522853/"
] |
EDIT: Originally answered [here](https://discussions.apple.com/thread/253425286?answerId=256408798022#256408798022).
If you are using any package manager (e.g. homebrew), then you need the command line tools.
Here is my current SDK for macOS Monterey 12.0.1:
[](https://i.stack.imgur.com/h3MGt.png)
|
If you uninstall the command line tools, and some software you are using is dependent on them, then they (or the latest version) can just be re-installed.
| 13,182
|
62,562,064
|
One Population Proportion
Research Question: In previous years 52% of parents believed that electronics and social media was the cause of their teenager’s lack of sleep. Do more parents today believe that their teenager’s lack of sleep is caused due to electronics and social media?
Population: Parents with a teenager (age 13-18) Parameter of Interest: p Null Hypothesis: p = 0.52 Alternative Hypothesis: p > 0.52 (note that this is a one-sided test)
1018 Parents
56% believe that their teenager’s lack of sleep is caused due to electronics and social media
this is a one tailed test and according to the professor the p-value should be 0.0053, but when i calculate the p-value for z-statistic=2.5545334262132955 in python :
`p_value=stats.distributions.norm.cdf(1-z_statistic)`
this code gives 0.06 as output
i know that
`stats.distributions.norm.cdf`
gives the probability to the left hand side of the statistic but the above code is giving wrong p value
but when I type :
`stats.distributions.norm.cdf(-z_statistic)`
it gives output as 0.0053,
how is this possible,please help!!!
|
2020/06/24
|
[
"https://Stackoverflow.com/questions/62562064",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13807934/"
] |
You approximate the binomial distribution with the normal since n\*p > 30 and the zscore for a [proportion test](https://online.stat.psu.edu/statprogram/reviews/statistical-concepts/proportions) is:
[](https://i.stack.imgur.com/Iue0em.png)
So the calculation is:
```
import numpy as np
from scipy import stats
p0 = 0.52
p = 0.56
n = 1018
Z = (p-p0)/np.sqrt(p0*(1-p0)/n)
Z
2.5545334262132955
```
Your Z is correct, `stats.norm.cdf(Z)` gives you the cumulative probability up till Z, and since you need the probability of observing something more extreme than this, it is:
```
1-stats.norm.cdf(Z)
0.0053165109918223985
```
The probability density function of the normal distribution is symmetric, so `1-stats.norm.cdf(Z)` is the same as `stats.norm.cdf(-Z)`
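If scipy isn't available, the same one-sided p-value can be reproduced with just the stdlib, using the identity 1 - Φ(z) = ½·erfc(z/√2):

```python
import math

p0, p, n = 0.52, 0.56, 1018
z = (p - p0) / math.sqrt(p0 * (1 - p0) / n)

# Upper-tail probability of the standard normal via the complementary error function
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(z, p_value)  # z ≈ 2.5545, p_value ≈ 0.005317
```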
|
The question is formulated as a binomial problem: 1018 people take a yes/no decision with constant probability. In your case 570 out of 1018 people hold that belief and that probability is to be compared to 52 %
I do not know about Python, but I can confirm your teacher's result in R:
```
> binom.test(570, 1018, p = .52, alternative = "greater")
Exact binomial test
data: 570 and 1018
number of successes = 570, number of trials =
1018, p-value = 0.005843
alternative hypothesis: true probability of success is greater than 0.52
95 percent confidence interval:
0.533735 1.000000
sample estimates:
probability of success
0.5599214
```
The fact that you handle z-values leads me to believe that you had no Python problem but were using the wrong test, which is why I believe I can answer using R.
You can find a binomial test implemented in Python here:
<https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom_test.html>
| 13,183
|
18,487,171
|
I am having a direcotry structure as :
```
D:\testfolder\folder_to_tar:
|---folder1
|--- file1.txt
|---folder2
|--- file2.txt
|---file3.txt
```
I want to create a tarball using Python at the same directory level. However, I am observing that in the tarball python is including the parent directory as well i.e. `testfolder` in my example.
```
Expected Output :
D:\testfolder:
|---folder_to_tar.tar
|---folder_to_tar
|--folder1
.....
Actual Output :
D:\testfolder:
|---folder_to_tar.tar
|---testfolder
|---folder_to_tar
|--folder1
.....
```
Code :
```
import tarfile
tarname = "D:\\testfolder\\folder_to_tar"
tarfile1 = "D:\\testfolder\\folder_to_tar.tar"
tarout = tarfile.open(tarfile1,mode="w")
try:
tarout.add(tarname,arcname=tarname)
finally:
tarout.close()
```
Can some one please help me on how to achieve it.
|
2013/08/28
|
[
"https://Stackoverflow.com/questions/18487171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1514773/"
] |
Try replacing the tarout.add line with:
```
tarout.add(tarname,arcname=os.path.basename(tarname))
```
Note: you also need to `import os`
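A self-contained demonstration (using a temporary directory instead of the `D:\testfolder` paths) that `arcname=os.path.basename(...)` drops the parent directories from the archive, so the tarball starts at `folder_to_tar`:

```python
import os
import tarfile
import tempfile

# Build a small tree mirroring the question's layout
root = tempfile.mkdtemp()
src = os.path.join(root, "folder_to_tar")
os.makedirs(os.path.join(src, "folder1"))
with open(os.path.join(src, "folder1", "file1.txt"), "w") as f:
    f.write("hello")

tar_path = os.path.join(root, "folder_to_tar.tar")
with tarfile.open(tar_path, "w") as tar:
    tar.add(src, arcname=os.path.basename(src))  # archive paths start at folder_to_tar

with tarfile.open(tar_path) as tar:
    names = tar.getnames()
print(names)  # every entry starts with 'folder_to_tar', no parent dirs included
```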
|
Have you tried adding `\` at the end of `tarname`?
| 13,184
|
21,820,104
|
I am attempting to use the Python [requests](http://requests.readthedocs.org/en/latest/) library to login to a website called surfline.com, then `get` a webpage once logged in (presumably within a persisting [session](http://docs.python-requests.org:8000/en/latest/user/advanced/)).
The login form that surfline.com uses throughout its pages uses `onclick()` to map to a JS function called `verifyLogin()`, which in turn posts to a `.cfm` file. On `success`, it refreshes the page, and the user is now kept logged in so long as the cookies persist.
I am using the [requests](http://requests.readthedocs.org/en/latest/) library for the first time, and am unsure how to:
1. login successfully,
2. stay logged-in throughout the session,
3. then print out the homepage as a logged-in user to the terminal.
Here is the login form (HTML):
```
<form id="loginForm">
<label for="name">Email Address:</label><br>
<input type="text" name="username" id="username">
<label for="mail">Password:</label><br>
<input type="password" name="password" id="password">
<input type="hidden" name="top_login" id="top_login" value="true">
<button class="surfline-button blue1" type="button" onclick="verifyLogin();">Log In</button>
<button class="surfline-button grey1" type="button" onclick="jQuery('#dialog-login').dialog('close');">Cancel</button>
<div id="forgot">Forgot Password? <a href="/myaccount/?action=forgot_password">Click Here</a></div>
<div class="clear"></div>
<div><input style="float:left; width:18px; height:18px; margin-right:6px; border:none" type="checkbox" name="rememberMe" id="rememberMe" value="true" checked="checked"><div id="remember-me-text" style="float:left; width:200px; text-align:left; margin-top:1px;">Remember me on this computer?</div></div>
<div class="clear"></div>
<p>Not a Premium member? <a href="https://www.surfline.com/subscribe_vindicia/index.cfm?mkt=login&slintcid=LOGIN&slcmpname=LOGIN-MODAL">TRY PREMIUM FREE NOW</a></p>
</form>
```
Here is the `verifyLogin()` function in the [/05222013\_slmenu.js](http://www.surfline.com/global_includes/scripts/05222013_slmenu.js) file:
```
function verifyLogin(){
//var username = jQuery("#username").val();
//var password = jQuery("#password").val();
var usernameVal = jQuery("#username").val();
var passwordVal = jQuery("#password").val();
var rememberMeVal = jQuery("#rememberMe").val();
var top_loginVal = jQuery("#top_login").val();
if(usernameVal.length === 0 || passwordVal.length === 0 ){
jQuery("#login-note").addClass("warning");
if(usernameVal.length === 0){ jQuery("#username").addClass('warning'); jQuery("#login-note").html("Email Field is Blank"); }else{ jQuery("#username").removeClass('warning');}
if(passwordVal.length === 0){ jQuery("#password").addClass('warning'); jQuery("#login-note").html("Password Field is Blank"); }else{ jQuery("#password").removeClass('warning'); }
if(usernameVal.length === 0 && passwordVal.length === 0){jQuery("#login-note").html("Email and Password Fields are Blank"); }
}else{
jQuery("#inner-dialog").fadeOut("slow",function(){
var htmlData = "<center><div style='padding-top:60px;'><h1>Verifying Login</h1><img src='/global_includes/images/ajax-loader-snake-295284.gif' style='margin-top:24px;'></div></center>";
jQuery("#verifying").html(htmlData).fadeIn("slow", function(){
//var loginData = jQuery('#loginForm').serialize();
var loginData = 'username=' + escape(usernameVal) + '&password=' + escape(passwordVal) + '&rememberMe=' + rememberMeVal + '&top_login=' + top_loginVal;
jQuery.ajax({
type:'POST',
url: '/myaccount/inc_login_handler.cfm',
data:loginData,
cache:false,
success: function(response){
var responseTrimmed = response.replace(/^\s+||\s+$/g,'');
if(responseTrimmed != true){
jQuery("#verifying").fadeOut("slow",function(){ jQuery("#inner-dialog").fadeIn("slow"); jQuery("#login-note").html("Unable to find your login information. Please Try Again...").addClass("warning"); jQuery("#username").addClass('warning'); jQuery("#password").addClass('warning'); });
}else{
var successData = "<center><div style='padding-top:60px;'><h1>Success!</h1><img src='/global_includes/images/checkmark.png' style='margin-top:24px; margin-bottom:20px;'><br /> Please Wait, Site Reloading...</div></center>";
jQuery("#verifying").fadeOut("slow",function(){ jQuery('#verifying').html(successData).fadeIn("slow"); });
setTimeout(function(){ window.location.reload(); }, 1200 );
}
},
error:function (xhr, ajaxOptions, thrownError){
jQuery("#verifying").fadeOut("slow",function(){ jQuery("#inner-dialog").fadeIn("slow"); jQuery("#login-note").html("There was an error finding your login infomation. Please Try Again...").addClass("warning"); });
}
});
});
})
}
}
```
I've tried this code, but can't seem to get the homepage as a logged-in user; it still returns as it would to someone who isn't logged in (my full name should appear in the header, etc.):
```
>>> import requests, json
>>> s = requests.Session()
>>> page_signed_out = s.get('http://www.surfline.com/home/index.cfm')
>>> form_data = {'type':'POST',
... 'url':'/myaccount/inc_login_handler.cfm',
... 'data':"username=myemail@example.com&password=mypassword&rememberMe=true&top_login=true",
... 'cache':False}
>>> s.post(url,
... data=json.dumps(form_data),
... headers= {'content-type': 'application/json'})
<Response [200]>
>>> page_signed_in = s.get('http://www.surfline.com/home/index.cfm')
```
Basically, how can I log into surfline.com from a python file, and get a page as a logged in user? I don't mind using a different library, if it is not possible with the `requests` library. Thank you.
**EDIT:**
Here are the cookies after a POST has been made, as suggested by @André
<[Cookie(version=0, name='CRYPTOPASS', value='%296%25%2A%2521B%3FO%2A%3C%28', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CRYPTOUSER', value='23%252600%2E3Z%5FM%5EEZV1IK%27TFIW%3E', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='LOGGED\_OUT', value='true', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='USER\_ID', value='259829', port=None, port\_specified=False, domain='.surfline.com', domain\_specified=True, domain\_initial\_dot=True, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CFID', value='437349255', port=None, port\_specified=False, domain='www.surfline.com', domain\_specified=False, domain\_initial\_dot=False, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False), Cookie(version=0, name='CFTOKEN', value='1082d1233da237c-3E4C2D43-FFC6-4ECC-1E80D9B505E495CE', port=None, port\_specified=False, domain='www.surfline.com', domain\_specified=False, domain\_initial\_dot=False, path='/', path\_specified=True, secure=False, expires=2338694583, discard=False, comment=None, comment\_url=None, rest={}, rfc2109=False)]>
|
2014/02/17
|
[
"https://Stackoverflow.com/questions/21820104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1337422/"
] |
Consider using WPF's built in validation techniques. See this MSDN documentation on the [`ValidationRule`](http://msdn.microsoft.com/en-us/library/system.windows.controls.validationrule%28v=vs.110%29.aspx) class, and this [how-to](http://msdn.microsoft.com/en-us/library/ms753962%28v=vs.110%29.aspx).
|
Based on your clarification, you want to limit user input to be a number with decimal points.
You also mentioned you are creating the TextBox programmatically.
Use the TextBox.PreviewTextInput event to determine the type of characters and validate the string inside the TextBox, and then use e.Handled to cancel the user input where appropriate.
This will do the trick:
```
public MainWindow()
{
InitializeComponent();
TextBox textBox = new TextBox();
textBox.PreviewTextInput += TextBox_PreviewTextInput;
this.SomeCanvas.Children.Add(textBox);
}
```
Meat and potatoes that does the validation:
```
void TextBox_PreviewTextInput(object sender, TextCompositionEventArgs e)
{
// change this for more decimal places after the period
const int maxDecimalLength = 2;
// Let's first make sure the new letter is not illegal
char newChar = char.Parse(e.Text);
if (newChar != '.' && !Char.IsNumber(newChar))
{
e.Handled = true;
return;
}
// combine TextBox current Text with the new character being added
// and split by the period
string text = (sender as TextBox).Text + e.Text;
string[] textParts = text.Split(new char[] { '.' });
// If more than one period, the number is invalid
if (textParts.Length > 2) e.Handled = true;
// validate if period has more than two digits after it
if (textParts.Length == 2 && textParts[1].Length > maxDecimalLength) e.Handled = true;
}
```
| 13,185
|
66,200,173
|
I have the following question, why is `myf(x)` giving less accurate results than `myf2(x)`. Here is my python code:
```
from math import e, log
def Q1():
n = 15
for i in range(1, n):
#print(myf(10**(-i)))
#print(myf2(10**(-i)))
return
def myf(x):
return ((e**x - 1)/x)
def myf2(x):
return ((e**x - 1)/(log(e**x)))
```
Here is the output for myf(x):
```
1.0517091807564771
1.005016708416795
1.0005001667083846
1.000050001667141
1.000005000006965
1.0000004999621837
1.0000000494336803
0.999999993922529
1.000000082740371
1.000000082740371
1.000000082740371
1.000088900582341
0.9992007221626409
0.9992007221626409
```
myf2(x):
```
1.0517091807564762
1.0050167084168058
1.0005001667083415
1.0000500016667082
1.0000050000166667
1.0000005000001666
1.0000000500000017
1.000000005
1.0000000005
1.00000000005
1.000000000005
1.0000000000005
1.00000000000005
1.000000000000005
```
I believe it has something to do with the floating-point number system in Python in combination with my machine. The natural log of Euler's number produces a number with more digits of precision than its equivalent number x as an integer.
|
2021/02/14
|
[
"https://Stackoverflow.com/questions/66200173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14858513/"
] |
Let's start with the difference between `x` and `log(exp(x))`, because the rest of the computation is the same.
```py
>>> for i in range(10):
... x = 10**-i
... y = exp(x)
... print(x, log(y))
...
1 1.0
0.1 0.10000000000000007
0.01 0.009999999999999893
0.001 0.001000000000000043
0.0001 0.00010000000000004326
9.999999999999999e-06 9.999999999902983e-06
1e-06 9.99999999962017e-07
1e-07 9.999999994336786e-08
1e-08 9.999999889225291e-09
1e-09 1.000000082240371e-09
```
If you look closely, you might notice that there are errors creeping in.
When i = 0, there's no error.
When i = 1, it prints `0.1` for x and `0.10000000000000007` for `log(y)`, which is wrong only after 16 digits.
By the time i = 9, half of the digits of `log(y)` are wrong.
Since we know what the true answer is (x itself), we can easily compute what the relative error of the approximation is:
```py
>>> for i in range(10):
... x = 10**-i
... y = exp(x)
... z = log(y)
... print(i, abs((x - z)/z))
...
0 0.0
1 6.938893903907223e-16
2 1.0755285551056319e-14
3 4.293440603042413e-14
4 4.325966668215291e-13
5 9.701576564765975e-12
6 3.798286318045685e-11
7 5.663213319457187e-10
8 1.1077471033430869e-08
9 8.224036409872509e-08
```
Each step loses us about a digit of accuracy!
Why?
Each of the operations `10**-i`, `exp(x)`, and `log(y)` only introduces a tiny relative error into the result, less than 10^−15.
Suppose `exp(x)` introduces a relative error of δ, returning the number e^x·(1 + δ) instead of e^x (which, after all, is a transcendental number that can't be represented by a finite string of digits).
We know that |δ| < 10^−15, but what happens when we then try to compute log(e^x·(1 + δ)) as an approximation to log(e^x) = x?
We might hope to get x·(1 + ε) where ε is very small.
But log(e^x·(1 + δ)) = log(e^x) + log(1 + δ) = x + log(1 + δ) = x·(1 + log(1 + δ)/x), so ε = log(1 + δ)/x.
And even if δ is small, x ≈ 10^−i gets closer and closer to zero as i increases, so the error log(1 + δ)/x ≈ δ/x can get worse and worse as i increases because 1/x → ∞.
**We say that the logarithm function is *ill-conditioned* near 1: if you evaluate it at an approximation to an input near 1, it can turn a very small input error into an arbitrarily large output error.** In fact, you can only go a few more steps before `exp(x)` is rounded to 1 and so `log(y)` returns zero exactly.
This is not because of anything in particular about floating-point—any kind of approximation would have the same effect with log!
The condition number of a function is a property of the mathematical function itself, not of the floating-point arithmetic system.
If the inputs came from physical measurements, you could run into the same problem.
---
This is related to why the functions `expm1` and `log1p` exist.
Although the function log(y) is ill-conditioned near y = 1, the function y ↦ log(1 + y) is well-conditioned near y = 0, so `log1p(y)` computes it more accurately than evaluating `log(1 + y)` can.
Similarly, the subtraction in `exp(x) - 1` is subject to [catastrophic cancellation](https://en.wikipedia.org/wiki/Catastrophic_cancellation) when e^x ≈ 1, so `expm1(x)` computes e^x − 1 more accurately than evaluating `exp(x) - 1` can.
`expm1` and `log1p` are not the same functions as `exp` and `log`, of course, but sometimes you can rewrite subexpressions in terms of them to avoid ill-conditioned domains.
In this case, for example, if you rewrite log(e^x) as log(1 + [e^x − 1]), and use `expm1` and `log1p` to compute it, the round-trip is often computed exactly:
```py
>>> for i in range(10):
... x = 10**-i
... y = expm1(x)
... z = log1p(y)
... print(i, x, z, abs((x - z)/z))
...
0 1 1.0 0.0
1 0.1 0.1 0.0
2 0.01 0.01 0.0
3 0.001 0.001 0.0
4 0.0001 0.0001 0.0
5 9.999999999999999e-06 9.999999999999999e-06 0.0
6 1e-06 1e-06 0.0
7 1e-07 1e-07 0.0
8 1e-08 1e-08 0.0
9 1e-09 1e-09 0.0
```
**For similar reasons, you might want to rewrite `(exp(x) - 1)/x` as `expm1(x)/x`.**
If you don't, then when `exp(x)` returns e^x·(1 + δ) rather than e^x, you will end up with (e^x·(1 + δ) − 1)/x = (e^x − 1 + e^x·δ)/x = (e^x − 1)·[1 + e^x·δ/(e^x − 1)]/x, which again may blow up because the error e^x·δ/(e^x − 1) ≈ δ/x.
---
**However, it is *not* simply luck that the second definition seems to produce the correct results!**
This happens because the compounded errors (δ/x from `exp(x) - 1` in the numerator, δ/x from `log(exp(x))` in the denominator) cancel each other out.
The first definition computes the numerator badly and the denominator accurately, but the second definition computes them both *about equally badly*!
In particular, when x ≈ 0, we have
log(e^x·(1 + δ)) = x + log(1 + δ) ≈ x + δ
and
e^x·(1 + δ) − 1 = e^x + e^x·δ − 1 ≈ 1 + x + δ − 1 = x + δ.
Note that δ is the *same* in both cases, because it's the error of using `exp(x)` to approximate e^x.
You can test this experimentally by comparing against `expm1(x)/x` (which is an expression guaranteed to have low relative error, since division never makes errors much worse):
```py
>>> for i in range(10):
... x = 10**-i
... u = (exp(x) - 1)/log(exp(x))
... v = expm1(x)/x
... print(u, v, abs((u - v)/v))
...
1.718281828459045 1.718281828459045 0.0
1.0517091807564762 1.0517091807564762 0.0
1.0050167084168058 1.0050167084168058 0.0
1.0005001667083415 1.0005001667083417 2.2193360112628554e-16
1.0000500016667082 1.0000500016667084 2.220335028798222e-16
1.0000050000166667 1.0000050000166667 0.0
1.0000005000001666 1.0000005000001666 0.0
1.0000000500000017 1.0000000500000017 0.0
1.000000005 1.0000000050000002 2.2204460381480824e-16
1.0000000005 1.0000000005 0.0
```
This approximation x + δ to the numerator and the denominator is best for x closest to zero and worst for x farthest from zero, but as x gets farther from zero, the error magnification in `exp(x) - 1` (from catastrophic cancellation) and in `log(exp(x))` (from ill-conditioning of the logarithm near 1) both diminish anyway, so the answer remains accurate.
---
However, the benign cancellation in the second definition only works until x is so close to zero that `exp(x)` is simply rounded to 1; at that point, `exp(x) - 1` and `log(exp(x))` both give zero, and you will end up trying to compute 0/0, which yields NaN and a floating-point exception.
**So you should use `expm1(x)/x` in practice, but it is a happy accident that *outside* the edge case where `exp(x)` is 1, two wrongs make a right in `(exp(x) - 1)/log(exp(x))` giving good accuracy even as `(exp(x) - 1)/x` and (for similar reasons) `expm1(x)/log(exp(x))` both give bad accuracy.**
|
>
> Why is this result more accurate than another result by equivalent functions
>
>
>
Bad luck. Neither function is reliable past about `(17-n)/2` digits.
The difference in results comes down to the particular implementations of `exp` and `log`, which are not specified by the language.
---
For a given `x` equal to a negative `n`th power of 10, the extended-precision math result is `1.(n zeros)5(n-1 zeros)166666...`.
With `exp(x) - 1`, both functions lose about half of their significance as `n` becomes greater, due to severe cancellation between `exp(small_value)` and `1`.
`exp(x)-1` retains a semblance of the math value only as far as the typical float's 17-ish digits of precision allow.
When `n` == `7`, `exp(x) = 1.00000010000000494...` and `exp(x)-1` is `1.0000000494...e-07`.
`n = 8`: the wheels fell off.
When `n` == `8`, `exp(x) = 1.00000000999999994...` and `exp(x)-1` is `9.99999993922529029...e-09`.
`9.99999993922529029...e-09` is not near enough to `1.000000005...e-08`.
At that point, both functions suffer a loss of precision in the `1.00000000x` place.
---
Let `n` go up to 16 or so and then it all breaks down.
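The breakdown both answers describe is easy to check directly. This sketch (not part of either answer) shows that once x drops below about half an ulp of 1.0 (roughly 1.1e-16), `exp(x)` rounds to exactly 1.0, so `exp(x) - 1` collapses to zero while `expm1(x)` still carries full information:

```python
import math

x = 1e-16                    # below half an ulp of 1.0 (about 1.1e-16)
rounded = math.exp(x)        # rounds to exactly 1.0
naive = math.exp(x) - 1      # 0.0: every significant digit is lost
accurate = math.expm1(x)     # still a tiny positive value near 1e-16
print(rounded, naive, accurate)
```

At this point `(exp(x) - 1)/log(exp(x))` computes 0/0, while `expm1(x)/x` sails through.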
| 13,187
|
5,591,557
|
I have some python code that does a certain task. I need to call this code from C# without converting the python file as an .exe, since the whole application is built on C#.
How can I do this?
|
2011/04/08
|
[
"https://Stackoverflow.com/questions/5591557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/692856/"
] |
If your python code can be executed via [IronPython](http://www.ironpython.net/) then this is definitely the way to go - it offers the best interop and means that you will be able to use .Net objects in your scripts.
There are many ways to invoke IronPython scripts from C#, ranging from compiling the script up as an executable, to executing a single script file, to even dynamically executing expressions as they are typed in by the user - check the documentation and post another question if you are still having problems.
If your script is a CPython script and can't be adapted to work with IronPython then your options are more limited. I believe that some CPython / C# interop exists, but I couldn't find it after a quick Google search. The best thing I can think of would be to just invoke the script directly using `python.exe` and the `Process` class.
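On the Python side of that `Process` approach, it helps to have the script write a machine-readable result to stdout so the C# host can parse it. A minimal sketch (the function name and JSON shape here are illustrative, not from the answer):

```python
import json
import sys

def run_task(args):
    # Stand-in for whatever work the real script does.
    return {"args": list(args), "status": "ok"}

if __name__ == "__main__":
    # The C# host launches python.exe and reads this single
    # JSON line from the process's standard output.
    print(json.dumps(run_task(sys.argv[1:])))
```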
|
Have a look at [IronPython](http://www.ironpython.net/).
Based on your answer and comments, I believe that the best thing you can do is to embed IronPython in your application. As always, there is a relevant SO [question](https://stackoverflow.com/questions/208393/how-to-embed-ironpython-in-a-net-application) about this. As Kragen said, it is important not to rely on a CPython module.
| 13,188
|
52,943,850
|
Example I have this two csv, how can overwrite the value of column `type` in a.csv or replace if it matched both the string in column `fruit` in a.csv and b.csv
```
a.csv
fruit,name,type
apple,anna,A
banana,lisa,A
orange,red,A
pine,tin,A
b.csv
fruit,type
banana,B
apple,B
```
**How to output this:** OR how to overwrite
```
fruit,name,type
apple,anna,B
banana,lisa,B
orange,red,A
pine,tin,A
```
I'm trying this using pandas but I don't know what's next
```
df1=pd.read_csv("sha1_vsdt.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df2=pd.read_csv("final.csv",delimiter=",",error_bad_lines=False,engine = 'python',quoting=3)
df = pd.merge(df1, df2, on='SHA-1', how='outer')
```
|
2018/10/23
|
[
"https://Stackoverflow.com/questions/52943850",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
As per the input you have given:
```
import pandas as pd
df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df['type_x'] = df['type_y'].combine_first(df['type_x'])
del df["type_y"]
df = df[pd.notnull(df['name'])]
```
input df1
```
fruit name type
0 apple anna A
1 banana lisa A
2 orange red A
3 pine tin A
```
input df2
```
fruit type
0 banana B
1 lemon B
```
output
```
fruit name type_x
0 apple anna A
1 banana lisa B
2 orange red A
3 pine tin A
```
if you have different files with different column names
```
import pandas as pd
df1=pd.read_csv("a.csv")
df2=pd.read_csv("b.csv")
df = pd.merge(df1, df2, on='fruit', how='outer')
df[df.columns[2]] = df[df.columns[3]].combine_first(df[df.columns[2]])
del df[df.columns[3]]
df = df[pd.notnull(df[df.columns[1]])]
```
|
Use [`map`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html) by `Series` created by [`set_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html) and then rewrite missing unmatched values by original column values by [`fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html):
```
#if possible duplicated fruit column
s = df2.drop_duplicates('fruit').set_index('fruit')['type']
df1['type'] = df1['fruit'].map(s).fillna(df1['type'])
print (df1)
fruit name type
0 apple anna B
1 banana lisa B
2 orange red A
3 pine tin A
```
| 13,191
|
56,923,131
|
I have a string converted to an MD5 hash using a python script with the following command:
```py
admininfo['password'] = hashlib.md5(admininfo['password'].encode("utf-8")).hexdigest()
```
This value is now stored in an online database.
Now I'm creating a C++ script to do a login on this database. During the login, I ask for the password and I convert it to an MD5 hash to compare it with the value from the online database.
But giving the same string, I obtain a different MD5 hash value every time.
How can I fix it?
```cpp
cin >> Admin_pwd;
cout << endl;
unsigned char digest[MD5_DIGEST_LENGTH];
const char* string = Admin_pwd.c_str();
MD5((unsigned char*)&string, strlen(string), (unsigned char*)&digest);
char mdString[33];
for(int i = 0; i < 16; i++)
sprintf(&mdString[i*2], "%02x", (unsigned int)digest[i]);
printf("md5 digest: %s\n", mdString);
```
First try:
`md5 digest: dcbb3e6add7fb94b98c56d7f70b7c46e`
Second try:
`md5 digest: 2870f4de491ad17d53d6d6e9dae19ca9`
Third try:
`md5 digest: 84656428baf461093e9fca2c8b05a296`
|
2019/07/07
|
[
"https://Stackoverflow.com/questions/56923131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10006055/"
] |
```
(unsigned char*)&string
```
You're hashing the pointer itself (and unspecified data after it), not the string that it points to.
And it changes on every execution (maybe).
You meant just `(unsigned char*)string`.
|
Make it `MD5((unsigned char*)string, ...)`; drop the ampersand. You are not passing the character data to `MD5` - you are passing the value of the `string` pointer itself (namely, the address of the first character of the password), plus whatever garbage happens to be on the stack after it.
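Once the ampersand is removed, a quick way to confirm the two implementations agree is to compare both against a known test vector; here is the Python half (a sketch, using the RFC 1321 vector for "abc" and mirroring the hashing line from the question):

```python
import hashlib

# MD5("abc") from the RFC 1321 test suite.
digest = hashlib.md5("abc".encode("utf-8")).hexdigest()
print(digest)  # 900150983cd24fb0d6963f7d28e17f72
```

The fixed C++ code should print the same hex string for the input "abc".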
| 13,194
|
65,544,809
|
I'm trying to create several class instances of graphs and initialize each one with an empty set, so that I can add in-neighbors/out-neighbors to each instance:
```
class Graphs:
def __init__(self, name, in_neighbors=None, out_neighbors=None):
self.name = name
if in_neighbors is None:
self.in_neighbors = set()
if out_neighbors is None:
self.out_neighbors = set()
def add_in_neighbors(self, neighbor):
in_neighbors.add(neighbor)
def add_out_neighbors(self, neighbor):
out_neighbors.add(neighbor)
def print_in_neighbors(self):
print(list(in_neighbors))
graph_names = [1,2,3,33]
graph_instances = {}
for graph in graph_names:
graph_instances[graph] = Graphs(graph)
```
However, when I try to add an in-neighbor:
```
graph_instances[1].add_in_neighbors('1')
```
I get the following error:
```
NameError: name 'in_neighbors' is not defined
```
I was following [this](https://stackoverflow.com/questions/4535667/python-list-should-be-empty-on-class-instance-initialisation-but-its-not-why) SO question that has a class instance initialized with a `list`, but I couldn't figure out where I'm mistaken
|
2021/01/02
|
[
"https://Stackoverflow.com/questions/65544809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14735451/"
] |
```
Dog dog1 = new Dog(1, "Dog1", "Cheese");
Dog dog2 = new Dog(1, "Dog1", "Meat");
Dog dog3 = new Dog(2, "Dog2", "Fish");
Dog dog4 = new Dog(2, "Dog2", "Milk");
List<Dog> dogList = List.of(dog1, dog2, dog3, dog4);
//insert dog objects into dog list
//Creating HashMap that will have id as the key and dog objects as the values
Map<Integer, List<Dog>> map = new HashMap<>();
for (Dog dog : dogList) {
map.computeIfAbsent(dog.id, k -> new ArrayList<>())
.add(dog);
}
map.entrySet().forEach(System.out::println);
```
Prints
```
1=[{1, Dog1, Cheese}, {1, Dog1, Meat}]
2=[{2, Dog2, Fish}, {2, Dog2, Milk}]
```
The dog class
```
static class Dog {
    int id;
    String name;
    String foodEaten;
    public Dog(int id, String name, String foodEaten) {
        this.id = id;
        this.name = name;
        this.foodEaten = foodEaten;
    }
    public String toString() {
        return String.format("{%s, %s, %s}", id, name, foodEaten);
}
}
```
|
Hi Tosh and welcome to Stackoverflow.
The problem is in your if statement when you check for IDs of your two objects.
```
for (Dog dog : dogList) {
if(dog.getId() == dog.getId()){
crMap.put(cr.getIndividualId(), clientReceivables.);
}
}
```
Here you check the ID of the same object named "dog" against itself, so you will always get **true** in this if statement. That's why it fills your map with all the values for both IDs.
Another problem you have there is that your dog4 has an ID equal to 1, which is the ID of dog1 and dog2. With that in mind you still won't achieve what you want, so check that too.
Now for the solution. If you want to go through the list and compare every dog to the next one, then you need to write that differently. I am not sure if you are working with plain Java or with Android, but there is a solution to this and a cleaner version of your code.
With Java 8 you get the Stream API, which can help you with this. Check it [here](https://www.oracle.com/technical-resources/articles/java/ma14-java-se-8-streams.html)
Also for android you got android-linq which you can check [here](https://github.com/zbra-solutions/android-linq)
| 13,195
|
68,319,575
|
I see that the example python code from Intel offers a way to change the resolution as below:
```
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# Start streaming
pipeline.start(config)
```
<https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/opencv_viewer_example.py>
I am trying to do the same thing in MATLAB as below, but get an error:
```
config = realsense.config();
config.enable_stream(realsense.stream.depth,1280, 720, realsense.format.z16, 30);
pipeline = realsense.pipeline();
% Start streaming on an arbitrary camera with default settings
profile = pipeline.start(config);
```
Error is below:
```
Error using librealsense_mex
Couldn't resolve requests
Error in realsense.pipeline/start (line 31)
out = realsense.librealsense_mex('rs2::pipeline',
'start', this.objectHandle, varargin{1}.objectHandle);
Error in pointcloudRecordCfg (line 15)
profile = pipeline.start(config);
```
Any suggestions?
|
2021/07/09
|
[
"https://Stackoverflow.com/questions/68319575",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1231714/"
] |
You were close, but you need to set the revised object to a new variable. Also, you probably want to aggregate arrays since there are multiple 'completed'. This first creates the base object and then populates it using `reduce()` for both actions
```
let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}),
revised = todos.reduce((b, todo) => {
b[todo.status].push(todo);
return b;
}, keys);
```
```js
const todos = [{
id: 'a',
name: 'Buy dog',
action: 'a',
status: 'deleted',
},
{
id: 'b',
name: 'Buy food',
tooltip: null,
status: 'completed',
},
{
id: 'c',
name: 'Heal dog',
tooltip: null,
status: 'completed',
},
{
id: 'd',
name: 'Todo this',
action: 'd',
status: 'completed',
},
{
id: 'e',
name: 'Todo that',
action: 'e',
status: 'todo',
},
];
let keys=todos.reduce((b,a) => ({...b, [a.status]:[]}),{}),
revised = todos.reduce((b, todo) => {
b[todo.status].push(todo);
return b;
}, keys);
console.log(revised);
```
|
```
function getByValue(arr, value) {
  var result = arr.filter(function(o){ return o.status == value; });
  return result.length ? result[0] : null; // filter returns an array; an empty array is truthy, so test its length
}
todo_obj = getByValue(arr, 'todo')
deleted_obj = getByValue(arr, 'deleted')
completed_obj = getByValue(arr, 'completed')
```
| 13,197
|
34,708,302
|
I've got a list of dictionaries in a JSON file that looks like this:
```
[{"url": "http://www.URL1.com", "date": "2001-01-01"},
{"url": "http://www.URL2.com", "date": "2001-01-02"}, ...]
```
but I'm struggling to import it into a pandas data frame — this should be pretty easy, but I'm blanking on it. Anyone able to set me straight here?
Likewise, what's the best way to simply read it into a list of dictionaries to use w/in python?
|
2016/01/10
|
[
"https://Stackoverflow.com/questions/34708302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4309943/"
] |
You can use [`from_dict`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html):
```
import pandas as pd
lis = [{"url": "http://www.URL1.com", "date": "2001-01-01"},
{"url": "http://www.URL2.com", "date": "2001-01-02"}]
print pd.DataFrame.from_dict(lis)
date url
0 2001-01-01 http://www.URL1.com
1 2001-01-02 http://www.URL2.com
```
Or you can use `DataFrame` constructor:
```
import pandas as pd
lis = [{"url": "http://www.URL1.com", "date": "2001-01-01"}, {"url": "http://www.URL2.com", "date": "2001-01-02"}]
print pd.DataFrame(lis)
date url
0 2001-01-01 http://www.URL1.com
1 2001-01-02 http://www.URL2.com
```
|
While `from_dict` will work here, the prescribed way would be to use [`pd.read_json`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html) with `orient='records'`. This parses an input that is
>
> list-like `[{column -> value}, ... , {column -> value}]`
>
>
>
Example: say this is the text of `lis.json`:
```
[{"url": "http://www.URL1.com", "date": "2001-01-01"},
{"url": "http://www.URL2.com", "date": "2001-01-02"}]
```
To pass the file path itself as input rather than a list as in @jezrael's answer:
```
print(pd.read_json('lis.json', orient='records'))
date url
0 2001-01-01 http://www.URL1.com
1 2001-01-02 http://www.URL2.com
```
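For the second part of the question (reading the file into a plain list of dicts to use within Python, without pandas), the standard `json` module is all you need; a minimal sketch:

```python
import json

# Same records as above, inline; for a file, use:
#   with open('lis.json') as f: records = json.load(f)
text = ('[{"url": "http://www.URL1.com", "date": "2001-01-01"},'
        ' {"url": "http://www.URL2.com", "date": "2001-01-02"}]')
records = json.loads(text)
print(records[0]["url"])
```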
| 13,207
|
43,259,717
|
I am trying to use a progress bar in a python script that I have since I have a for loop that takes quite a bit of time to process. I have looked at other explanations on here already but I am still confused. Here is what my for loop looks like in my script:
```
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
I have read a bit about using the `progressbar` library but my confusion lies in where I have to put the code to support a progress bar relative to my for loop I have included here.
|
2017/04/06
|
[
"https://Stackoverflow.com/questions/43259717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681184/"
] |
To show the progress bar:
```
from tqdm import tqdm
for x in tqdm(my_list):
# do something with x
#### In case using with enumerate:
for i, x in enumerate( tqdm(my_list) ):
# do something with i and x
```
[](https://i.stack.imgur.com/H9sUC.png)
*Some notes on the attached picture*:
`49%`: It already finished 49% of the whole process
`979/2000`: Working on the 979th element/iteration, out of 2000 elements/iterations
`01:50`: It's been running for 1 minute and 50 seconds
`01:55`: Estimated time left to run
`8.81 it/s`: On average, it processes 8.81 elements per second
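If you want to see what those fields boil down to without a dependency, a bare-bones manual progress line can wrap any loop the same way `tqdm(...)` does (a sketch, not part of the answer above):

```python
import sys

def with_progress(items):
    """Yield items while rewriting a single progress line on stderr-style output."""
    total = len(items)
    for done, item in enumerate(items, start=1):
        # \r returns the cursor to column 0, so the line updates in place,
        # much like tqdm's single-line bar
        sys.stdout.write("\r%d/%d (%3.0f%%)" % (done, total, 100.0 * done / total))
        sys.stdout.flush()
        yield item
    sys.stdout.write("\n")

results = [x * 2 for x in with_progress([1, 2, 3, 4])]
```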
|
Or you can use this (can be used for any situation):
```
from tqdm import tqdm

for i in tqdm(range(1), desc="Loading..."):
for member in members:
url = "http://api.wiki123.com/v1.11/member?id="+str(member)
header = {"Authorization": authorization_code}
api_response = requests.get(url, headers=header)
member_check = json.loads(api_response.text)
member_status = member_check.get("response")
```
| 13,208
|
72,216,546
|
Used the standard python installation file (3.9.12) for windows.
The installation file have a built in option for pip installation:
[](https://i.stack.imgur.com/CEH51.png)
The resulting python is without pip.
Then I try to install pip directly by using "get-pip.py" file.
This action resulted in:
>
> WARNING: pip is configured with locations that require TLS/SSL,
> however the ssl module in Python is not available. WARNING: Retrying
> (Retry(total=4, connect=None, read=None, redirect=None, status=None))
> after connection broken by 'SSLError("Can't connect to HTTPS URL
> because the SSL module is not available.")': /simple/pip/ WARNING:
> Retrying (Retry(total=3, connect=None, read=None, redirect=None,
> status=None)) after connection broken by 'SSLError("Can't connect to
> HTTPS URL because the SSL module is not available.")': /simple/pip/
> WARNING: Retrying (Retry(total=2, connect=None, read=None,
> redirect=None, status=None)) after connection broken by
> 'SSLError("Can't connect to HTTPS URL because the SSL module is not
> available.")': /simple/pip/ ERROR: Operation cancelled by user
>
>
>
What can I do?
|
2022/05/12
|
[
"https://Stackoverflow.com/questions/72216546",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8812406/"
] |
One should add `min-h-0` to `<main>`. I swear I tried everything.
|
Add `overflow-y-scroll h-3/5` to the div containing the paragraphs.
Please find below the solution code and watch result in full screen.
```html
<script src="https://cdn.tailwindcss.com" ></script>
<div class="flex h-full flex-col">
<mark class="bg-red-900 p-2 font-semibold text-white"> Warning message </mark>
<main class="flex flex-1 bg-green-100">
<div class="flex flex-[4] bg-slate-100">
<div class="flex flex-col bg-slate-200">
<button variant="ghost">Filter</button>
<button variant="ghost">Filter</button>
<button variant="ghost">Filter</button>
<button variant="ghost">Filter</button>
</div>
<div class="flex flex-1 flex-col bg-slate-400">
<div class="flex flex-row-reverse border-y p-2">
<h2 class="flex items-center font-black">F0KLM</h2>
<div class="flex flex-1 flex-col">
<span class="text-sm">User</span>
<span class="text-sm">11:30</span>
</div>
</div>
</div>
</div>
<div class="flex min-h-0 flex-[7] flex-col bg-red-100">
<div class="flex items-center justify-between p-4">
<div class="flex gap-4">
<h1 class="font-black">F0KLM</h1>
<BadgeStatus status={"new"} />
</div>
<div class="rounded-md bg-black p-2 text-white">11:24</div>
</div>
<div class="mx-4 flex flex-1 flex-col gap-2 overflow-y-auto bg-white">
<div class="rounded-md border p-2">Header</div>
<div class="rounded-md border p-2 overflow-y-scroll h-3/5">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut
labore et dolore magna aliqua. At erat pellentesque adipiscing commodo elit at imperdiet.
Vulputate mi sit amet mauris commodo quis imperdiet massa tincidunt. Feugiat in fermentum
posuere urna nec tincidunt praesent semper feugiat. Arcu felis bibendum ut tristique et
egestas quis ipsum.</p>
Pharetra
<p>sit amet aliquam id diam. Tellus mauris a diam maecenas sed enim ut. Sed faucibus turpis in
eu. Neque ornare aenean euismod elementum nisi quis eleifend. In nulla posuere sollicitudin
aliquam ultrices sagittis orci a. Non quam lacus suspendisse faucibus interdum posuere lorem
ipsum dolor. Eget magna fermentum iaculis eu. Elementum eu facilisis sed odio. Turpis egestas
sed tempus urna et pharetra pharetra massa massa. Nunc lobortis mattis aliquam faucibus. Leo
integer malesuada nunc vel risus commodo.</p>
<p>sit amet aliquam id diam. Tellus mauris a diam maecenas sed enim ut. Sed faucibus turpis in
eu. Neque ornare aenean euismod elementum nisi quis eleifend. In nulla posuere sollicitudin
aliquam ultrices sagittis orci a. Non quam lacus suspendisse faucibus interdum posuere lorem
ipsum dolor. Eget magna fermentum iaculis eu. Elementum eu facilisis sed odio. Turpis egestas
sed tempus urna et pharetra pharetra massa massa. Nunc lobortis mattis aliquam faucibus. Leo
integer malesuada nunc vel risus commodo.</p>
<p>sit amet aliquam id diam. Tellus mauris a diam maecenas sed enim ut. Sed faucibus turpis in
eu. Neque ornare aenean euismod elementum nisi quis eleifend. In nulla posuere sollicitudin
aliquam ultrices sagittis orci a. Non quam lacus suspendisse faucibus interdum posuere lorem
ipsum dolor. Eget magna fermentum iaculis eu. Elementum eu facilisis sed odio. Turpis egestas
sed tempus urna et pharetra pharetra massa massa. Nunc lobortis mattis aliquam faucibus. Leo
integer malesuada nunc vel risus commodo.</p>
</div>
<div class="rounded-md border p-2">Info</div>
</div>
<div class="flex justify-end gap-4 p-4">
<button variant="ghost">Ok</button>
<button variant="ghost">Reject</button>
</div>
</div>
</main>
<nav class="flex justify-around bg-white py-2">
<button variant="ghost">Button</button>
<button variant="ghost">Button</button>
<button variant="ghost">Button</button>
<button variant="ghost">Button</button>
</nav>
</div>
```
| 13,218
|
55,436,896
|
I have a table of IDs and dates of quarterly data and i would like to reindex this to daily (weekdays).
Example table:
[](https://i.stack.imgur.com/bLvFf.png)
I'm trying to figure out a pythonic or pandas way to reindex to a higher frequency date range e.g. daily and forward fill any NaNs.
so far have tried:
```
df = pd.read_sql('select date, id, type, value from db_table' con=conn, index_col=['date', 'id', 'type'])
dates = pd.bdate_range(start, end)
new_idx = pd.MultiIndex.from_product([dates, df.index.get_level_values(1), df.index.get_level_values(2)]
new_df = df.reindex(new_idx)
#this just hangs
new_df = new_df.groupby(level=1).fillna(method='ffill')
```
to no avail. I either get a
`Exception: cannot handle a non-unique multi-index!`
Or, if the dates are consistent between ids and types the individual dates are reproduced multiple times (which sounds like a bug?)
Ultimately I would just like to group the table by date, id and type and have a consistent date index across ids and types.
Is there a way to do this in pandas?
|
2019/03/31
|
[
"https://Stackoverflow.com/questions/55436896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5615327/"
] |
Yes you can do with `merge`
```
new_idx_frame=new_idx.to_frame()
new_idx_frame.columns=['date', 'id', 'type']
Yourdf=df.reset_index().merge(new_idx_frame,how='right',sort =True).groupby('id').ffill()# here I am using toy data
Out[408]:
id date type value
0 1 1 1 NaN
1 1 1 2 NaN
2 2 1 1 666666.0
3 2 1 2 99999.0
4 1 2 1 -1.0
5 1 2 1 -1.0
6 1 2 2 -1.0
7 2 2 1 99999.0
8 2 2 2 99999.0
```
---
Sample data
```
df=pd.DataFrame({'date':[1,1,2,2],'id':[2,2,1,1],'type':[2,1,1,1],'value':[99999,666666,-1,-1]})
df=df.set_index(['date', 'id', 'type'])
new_idx = pd.MultiIndex.from_product([[1,2], [1,2],[1,2]])
```
|
Wen-Ben's answer is almost there - thank you for that. The only thing missing is grouping by ['id', 'type'] when doing the forward fill.
Further, when creating the new multindex in my use case should have unique values:
```
new_idx = pd.MultiIndex.from_product([dates, df.index.get_level_values(1).unique(), df.index.get_level_values(2).unique()])
```
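Putting the two answers together on a variant of Wen-Ben's toy data (with the duplicate row removed, since `reindex` requires a unique index; a duplicate is exactly what triggers the OP's "non-unique multi-index" error), the full pattern looks like this sketch:

```python
import pandas as pd

# Toy quarterly data: unique (date, id, type) combinations only.
df = pd.DataFrame({'date':  [1, 1, 2, 2],
                   'id':    [2, 2, 1, 1],
                   'type':  [2, 1, 1, 2],
                   'value': [99999, 666666, -1, -2]}).set_index(['date', 'id', 'type'])

dates = [1, 2]  # stands in for pd.bdate_range(start, end)
new_idx = pd.MultiIndex.from_product(
    [dates,
     df.index.get_level_values('id').unique(),
     df.index.get_level_values('type').unique()],
    names=['date', 'id', 'type'])

# Reindex to the full date grid, then forward-fill within each (id, type) series.
out = df.reindex(new_idx).groupby(level=['id', 'type']).ffill()
```

Each (id, type) pair now has a row for every date, with missing values carried forward only from its own history.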
| 13,219
|
66,491,254
|
I have created python desktop software. Now I want to market that as a product. But my problem is, anyone can decompile my exe file and they will get the actual code.
So is there any way to encrypt my code and convert it to exe before deployment. I have tried different ways.
But nothing is working. Is there any way to do that?.Thanks in advance
|
2021/03/05
|
[
"https://Stackoverflow.com/questions/66491254",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14155439/"
] |
This [link](https://wiki.python.org/moin/Asking%20for%20Help/How%20do%20you%20protect%20Python%20source%20code%3F) has most of the info you need.
But since links are discouraged here:
There is py2exe, which compiles your code into an .exe file, but afaik it's not difficult to reverse-engineer the code from the exe file.
You can of course make your code more difficult to understand. Rename your classes and functions to be nonsensical (e.g. rename print(s) to delete(s) or to a()); people will have a difficult time then.
You can also avoid all of that by using SaaS (Software as a Service), where you can host your code online on a server and get paid by people using it.
Or consider open-sourcing it :)
|
You can install pyinstaller with `pip install pyinstaller` (make sure to also add it to your environment variables), then open a shell in the folder where your file is (shift+right-click somewhere where no file is and "open PowerShell here") and run `pyinstaller --onefile YOUR_FILE`.
If a `dist` folder is created, take the exe file out of it; the `build` folder and the `.spec` file can be deleted.
And there you go with your standalone exe file.
| 13,220
|
4,331,348
|
If I have a python module that has a bunch of functions, say like this:
```
#funcs.py
def foo() :
print "foo!"
def bar() :
print "bar!"
```
And I have another module that is designed to parse a list of functions from a string and run those functions:
```
#parser.py
from funcs import *
def execute(command):
command = command.split()
for c in command:
function = globals()[c]
function()
```
Then I can open up python and do the following:
```
>>> import parser
>>> parser.execute("foo bar bar foo")
foo!
bar!
bar!
foo!
```
I want to add a convenience function to `funcs.py` that allows a list of functions to be called as a function itself:
```
#funcs.py (new version)
import parser
def foo() :
print "foo!"
def bar() :
print "bar!"
def parse(commands="foo foo") :
parser.execute(commands)
```
Now I can recursively parse from the parser itself:
```
>>> import parser
>>> parser.execute("parse")
foo!
foo!
>>> parser.execute("parse bar parse")
foo!
foo!
bar!
foo!
foo!
```
But for some reason I can't just run `parse` from `funcs`, as I get a key error:
```
>>> import funcs
>>> funcs.parse("foo bar")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "funcs.py", line 11, in parse
parser.execute(commands)
File "parser.py", line 6, in execute
function = globals()[c]
KeyError: 'foo'
```
So even though `foo` should be imported into `parser.py` through the `from funcs import *` line, I'm not finding `foo` in the `globals()` of `parser.py` when it is used through `funcs.py`. How could this happen?
I should finally point out that importing `parser` and then `funcs` (but only in that order) allows it to work as expected:
```
>>> import parser
>>> import funcs
>>> funcs.parse("foo bar")
foo!
bar!
```
|
2010/12/02
|
[
"https://Stackoverflow.com/questions/4331348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/67829/"
] |
`import module_name` does something fundamentally different from what `from module_name import *` does.
The former creates a global named `module_name`, which is of type `module` and which contains the module's names, accessed as attributes. The latter creates a global for each of those names within `module_name`, but not for `module_name` itself.
Thus, when you `import funcs`, `foo` and `bar` are not put into `globals()`, and therefore are not found when `execute` looks for them.
Cyclic dependencies like this (trying to have `parser` import names from `funcs` while `funcs` also imports `parser`) are bad. Explicit is better than implicit. Don't try to create this much magic. Tell `parse()` what functions are available.
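A hedged sketch of the "tell `parse()` what functions are available" advice: pass an explicit registry instead of relying on `globals()`, which sidesteps the import-order problem entirely.

```python
# Sketch only: an explicit function registry replaces the globals() lookup.
def foo():
    return "foo!"

def bar():
    return "bar!"

REGISTRY = {'foo': foo, 'bar': bar}

def execute(command, registry=REGISTRY):
    # Look each name up in the explicit mapping, not in globals().
    return [registry[name]() for name in command.split()]

print(execute("foo bar bar foo"))  # ['foo!', 'bar!', 'bar!', 'foo!']
```

Because the mapping is built where the functions are defined, it works the same no matter which module is imported first.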
|
* Print the globals after you import `parser` to see what it did.
* `parser` is also a built-in module, and usually the built-in parser will be loaded instead of yours.
I would change the name so you do not have problems.
* You're importing `funcs`, but `parser` imports \* from `funcs`?
I would think carefully about the order in which you are importing modules and where you need them.
| 13,221
|
73,066,303
|
I have python 3.10, selenium 4.3.0 and i would like to click on the following element:
```
<a href="#" onclick="fireLoginOrRegisterModalRequest('sign_in');ga('send', 'event', 'main_navigation', 'login', '1st_level');"> Mein Konto </a>
```
Unfortunately,
`driver.find_element(By.CSS_SELECTOR,"a[onclick^='fireLoginOrRegisterModalRequest'][onclick*='login']").click()`
does not work. See the following error message. The other options don't work either, and I get a similar error message.
```
"C:\Users\...\AppData\Local\Programs\Python\Python310\python.exe" "C:/Users/.../PycharmProjects/bot/main.py"
C:\Users\...\PycharmProjects\bot\main.py:7: DeprecationWarning: executable_path has been deprecated, please pass in a Service object
driver = webdriver.Chrome('./chromedriver.exe')
Traceback (most recent call last):
File "C:\Users\...\PycharmProjects\bot\main.py", line 15, in <module>
mkButton = driver.find_element(By.CSS_SELECTOR, "a[onclick^='fireLoginOrRegisterModalRequest'][onclick*='login']").click()
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webelement.py", line 88, in click
self._execute(Command.CLICK_ELEMENT)
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webelement.py", line 396, in _execute
return self._parent.execute(command, params)
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 435, in execute
self.error_handler.check_response(response)
File "C:\Users\...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 247, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
(Session info: chrome=103.0.5060.114)
Stacktrace:
Backtrace:
Ordinal0 [0x00D35FD3+2187219]
Ordinal0 [0x00CCE6D1+1763025]
Ordinal0 [0x00BE3D40+802112]
Ordinal0 [0x00C12C03+994307]
Ordinal0 [0x00C089B3+952755]
Ordinal0 [0x00C2CB8C+1100684]
Ordinal0 [0x00C08394+951188]
Ordinal0 [0x00C2CDA4+1101220]
Ordinal0 [0x00C3CFC2+1167298]
Ordinal0 [0x00C2C9A6+1100198]
Ordinal0 [0x00C06F80+946048]
Ordinal0 [0x00C07E76+949878]
GetHandleVerifier [0x00FD90C2+2721218]
GetHandleVerifier [0x00FCAAF0+2662384]
GetHandleVerifier [0x00DC137A+526458]
GetHandleVerifier [0x00DC0416+522518]
Ordinal0 [0x00CD4EAB+1789611]
Ordinal0 [0x00CD97A8+1808296]
Ordinal0 [0x00CD9895+1808533]
Ordinal0 [0x00CE26C1+1844929]
BaseThreadInitThunk [0x7688FA29+25]
RtlGetAppContainerNamedObjectPath [0x77D57A9E+286]
RtlGetAppContainerNamedObjectPath [0x77D57A6E+238]
Process finished with exit code 1
```
|
2022/07/21
|
[
"https://Stackoverflow.com/questions/73066303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19163184/"
] |
It turns out that despite opening up to all IP-addresses, the networkpolicy does not allow egress to the DNS pod, which is in another namespace.
```
# Identifying DNS pod
kubectl get pods -A | grep dns
# Identifying DNS pod label
kubectl describe pods -n kube-system coredns-64cfd66f7-rzgwk
```
Next I add the dns label to the egress policy:
```
# network_policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: allow-all
namespace: mytestnamespace
spec:
podSelector: {}
policyTypes:
- Egress
- Ingress
egress:
- to:
- ipBlock:
cidr: "0.0.0.0/0"
- to:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "kube-system"
- podSelector:
matchLabels:
k8s-app: "kube-dns"
```
I apply the network policy and test the curl calls:
```
# Setting up policy
kubectl apply -f network_policy.yaml
# Testing curl call
kubectl -n mytestnamespace exec service-c-78f784b475-qsdqg -- bin/bash -c 'curl www.google.com'
```
SUCCESS! Now I can make egress calls, next I just have to block the appropriate IP-addresses in the private network.
|
The reason is, `curl` attempts to form a 2-way TCP connection with the HTTP server at `www.google.com`. This means, *both* egress and ingress traffic need to be allowed on your policy. Currently, only out-bound traffic is allowed. Maybe, you'll be able to see this in more detail if you ran curl in verbose mode:
```
kubectl -n mytestnamespace exec service-c-78f784b475-qsdqg -- bin/bash -c 'curl www.google.com' -v
```
You can then see the communication back and forth marked by arrows `>` (out-going) and `<` (in-coming). You'll notice only `>` arrows will be listed and no `<` traffic will be shown.
>
> Note: If you do something simpler like `ping google.com` it might work, since this is a simple state-less communication.
>
>
>
In order to have this, you can simply add an allow-all ingress rule to your policy like:
```
ingress:
- {}
```
Also, there's a simpler way to allow all egress traffic simply as:
```
egress:
- {}
```
I hope this helps. You can read more about policies [here](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
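Putting both shorthands together, a hypothetical allow-everything policy for the namespace from the question could look like this (sketch only, not tested against a cluster):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-everything
  namespace: mytestnamespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
    - Ingress
  # An empty rule matches all traffic in that direction.
  egress:
    - {}
  ingress:
    - {}
```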
| 13,223
|
61,243,561
|
I wrote some classes for a shopping cart — Customer, Order — and some functions for discounts on the orders. These are the final lines of my code.
```
anu = Customer('anu', 4500) # 1
user_cart = [Lineitem('apple', 7, 7), Lineitem('orange', 5, 6)] # 2
order1 = Order(anu, user_cart) # 2
print(type(joe)) # 4
print(order1) # 5
format() # 6
```
Guys, I know that format throws an error for not passing any argument. What I am asking is why the error comes first, and if it comes first, how the rest of the code executes fine. I think the Python interpreter keeps executing code and when it finds a bug it deletes all the output and throws that error.
This is my output:
```
Traceback (most recent call last):
File "C:\Users\User\PyCharm Community Edition with Anaconda plugin 2019.3.1\plugins\python-ce\helpers\pydev\pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Users\User\PyCharm Community Edition with Anaconda plugin 2019.3.1\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/programming/python/progs/my_programs/abc_module/abstract_classes.py", line 97, in <module>
format()
TypeError: format expected at least 1 argument, got 0
<class '__main__.Customer'>
Congratulations! your order was successfully processed.
Customer name: anu
Fidelity points: 4500 points
Order :-
product count: 2
product : apple
amount : 7 apple(s)
price per unit : 7$ per 1 apple
total price : 49$
product : orange
amount : 5 orange(s)
price per unit : 6$ per 1 orange
total price : 30$
subtotal : 79$
promotion : Fidelity Deal
discount amount : 24.88$
total : 54.115$
```
|
2020/04/16
|
[
"https://Stackoverflow.com/questions/61243561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12857878/"
] |
You can do this by awaiting the listen event, wrapping it in a promise, and passing the promise's resolve as the callback to the server's listen:
```js
const app = express();
let server;
await new Promise(resolve => server = app.listen(0, "127.0.0.1", resolve));
this.global.server = server;
```
You could also put a custom callback that will just call the promise resolver as the third argument to the `app.listen()` and it should run that code then call resolve if you need some sort of diagnostics.
|
Extending [Robert Mennell](https://stackoverflow.com/users/5377773) answer:
>
> You could also put a custom callback that will just call the promise resolver as the third argument to the `app.listen()` and it should run that code then call resolve if you need some sort of diagnostics.
>
>
>
```js
let server;
const app = express();
await new Promise(function(resolve) {
server = app.listen(0, "127.0.0.1", function() {
console.log(`Running express server on '${JSON.stringify(server.address())}'...`);
resolve();
});
});
this.global.server = server;
```
Then, you can access the `this.global.server` in your tests files to get the server port/address: [Is it possible to create an Express.js server in my Jest test suite?](https://stackoverflow.com/a/61260044/4934640)
| 13,224
|
57,122,160
|
I am setting up a django rest api and need to integrate social login feature.I followed the following link
[Simple Facebook social Login using Django Rest Framework.](https://medium.com/@katherinekimetto/simple-facebook-social-login-using-django-rest-framework-e2ac10266be1)
After setting up everything, when I migrate (7th step: `python manage.py migrate`),
I get the error 'ModuleNotFoundError: No module named 'rest\_frameworkoauth2\_provider''.
Is there any package I missed? I can't figure out the error.
|
2019/07/20
|
[
"https://Stackoverflow.com/questions/57122160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5113584/"
] |
Firefox uses different flags. I am not sure exactly what your aim is but I am assuming you are trying to avoid some website detecting that you are using selenium.
There are different methods to avoid websites detecting the use of Selenium.
1) The value of navigator.webdriver is set to true by default when using Selenium. This variable will be present in Chrome as well as Firefox. This variable should be set to "undefined" to avoid detection.
2) A proxy server can also be used to avoid detection.
3) Some websites are able to use the state of your browser to determine if you are using Selenium. You can set Selenium to use a custom browser profile to avoid this.
The code below uses all three of these approaches.
```
profile = webdriver.FirefoxProfile('C:\\Users\\You\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\something.default-release')
PROXY_HOST = "12.12.12.123"
PROXY_PORT = "1234"
profile.set_preference("network.proxy.type", 1)
profile.set_preference("network.proxy.http", PROXY_HOST)
profile.set_preference("network.proxy.http_port", int(PROXY_PORT))
profile.set_preference("dom.webdriver.enabled", False)
profile.set_preference('useAutomationExtension', False)
profile.update_preferences()
desired = DesiredCapabilities.FIREFOX
driver = webdriver.Firefox(firefox_profile=profile, desired_capabilities=desired)
```
|
you may try:
```
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
from selenium.webdriver import DesiredCapabilities
from selenium.webdriver import Firefox
profile = FirefoxProfile()
profile.set_preference('devtools.jsonview.enabled', False)
profile.update_preferences()
desired = DesiredCapabilities.FIREFOX
driver = Firefox(firefox_profile=profile, desired_capabilities=desired)
```
| 13,225
|
3,633,288
|
I'll try to explain this right:
I'm in an environment where I can't use python built-in functions (like 'sorted', 'set'), can't declare methods, can't make conditions (if), and can't make loops, except for:
* can call methods (but just one each time, and saving returns on another variable
foo python:item.sort(); #foo variable takes the value that item.sort() returns
bar python:foo.index(x);
* and can do list comprehension
[item['bla'] for item in foo]
...which I don't think will help with this question.
I have a 'correct\_order' list, with this values:
```
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```
and I have a 'messed\_order' list, with this values:
```
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]
```
Well, I have to reorder the 'messed\_order' list, using the index of 'correct\_order' as base. The order of the rest of items not included in correct\_order doesn't matter.
Something like this would solve (again, except that I can't use loops):
```
for item in correct_order:
messed_order[messed_order.index(item)], messed_order[correct_order.index(item)] = messed_order[correct_order.index(item)], messed_order[messed_order.index(item)]
```
And would result on the 'ordered\_list' that I want:
```
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 66, 44]
```
So, how can I do this?
For those who know zope/plone, I'm on a skin page (.pt), that doesn't have a helper python script (what I think that is not possible for skin pages, only for browser pages. If it is, show me how and I'll do it).
|
2010/09/03
|
[
"https://Stackoverflow.com/questions/3633288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/334872/"
] |
It's hard to answer, not knowing exactly what's allowed and what's not. But how about this O(N^2) solution?
```
[x for x in correct_order if x in messed_order] + [x for x in messed_order if x not in correct_order]
```
|
Does the exact order of the 55/66/44 items matter, or do they just need to be listed at the end? If the order doesn't matter you could do this:
```
[i for i in correct_order if i in messed_order] +
list(set(messed_order) - set(correct_order))
```
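For completeness, a minimal runnable sketch of the one-liner above with the question's data — items from `correct_order` first, then the leftovers in the order they appear in `messed_order` (no loops or conditionals outside the comprehensions):

```python
correct_order = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
messed_order = [55, 1, 44, 3, 66, 5, 4, 7, 2, 9, 0, 10, 6, 8]

# Known items in their correct order, then everything else as-is.
ordered = ([x for x in correct_order if x in messed_order]
           + [x for x in messed_order if x not in correct_order])
print(ordered)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 55, 44, 66]
```

Note the trailing extras come out as `55, 44, 66` (their order in `messed_order`), which satisfies the "order of the rest doesn't matter" requirement.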
| 13,226
|
41,954,262
|
I am running through a very large data set with a json object for each row. And after passing through almost 1 billion rows I am suddenly getting an error with my transaction.
I am using Python 3.x and psycopg2 to perform my transactions.
The json object I am trying to write is as follows:
```
{"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
and psycopg2 is reporting the following:
```
syntax error at or near "t"
LINE 3: ...": "ConfigManager: Config if valid JSON but doesn't seem to ...
```
I am aware that the problem is the single quote; however, I have no idea how to escape it so that the json object can be passed to my database.
My json column is of type `jsonb` and is named `first_row`
So if I try:
```
INSERT INTO "356002062376054" (first_row) VALUES {"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."}
```
I still get an error. And nothing I seem to do will fix this problem. I have tried escaping the `t` with a `\` and changing the double quotes to single qoutes, and removing the curly brackets as well. Nothing works.
I believe it's quite important that I am remembering to do

```
json.dumps({"message": "ConfigManager: Config if valid JSON but doesn't seem to have the correct schema. Adopting new config aborted."})
```

in my python code, and yet I am getting this problem.
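A quick, self-contained sketch showing the apostrophe is not a JSON problem at all, and that parameter binding — rather than pasting the string into the SQL — is the usual fix (the `cur.execute` line is illustrative only, assuming an open psycopg2 connection; the table name is taken from the question):

```python
import json

# The apostrophe is perfectly legal inside a JSON string, so
# json.dumps/json.loads round-trips it without any escaping.
payload = {"message": "ConfigManager: Config if valid JSON but doesn't "
                      "seem to have the correct schema."}
encoded = json.dumps(payload)
assert json.loads(encoded) == payload

# Usual fix (sketch, not run here): let the driver bind the value so
# all SQL quote escaping is handled for you.
#   cur.execute('INSERT INTO "356002062376054" (first_row) VALUES (%s)',
#               (encoded,))
print(encoded)
```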
|
2017/01/31
|
[
"https://Stackoverflow.com/questions/41954262",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2302843/"
] |
You're trying to write a `tuple` to your file.
While that would work for `print`, since `print` knows how to print several arguments, `write` is more strict.
Just format your `var` properly:
```
var = "\n{}\n".format(test)
```
|
I assume you want to concatenate the strings, so use `+` instead of `,`:
```
import pandas as pd
import os
#path = '/test'
#os.chdir(path)
def writeScores(test):
with open('output.txt', 'w') as f:
var = "\n" + test + "\n"
f.write(var)
writeScores("asd")
```
| 13,233
|
38,589,830
|
I have the following:
1. ```
python manage.py crontab show # this command give following
Currently active jobs in crontab:
12151a7f59f3f0be816fa30b31e7cc4d -> ('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
```
2. My app is in virtual environment (env) active
3. In my media\_api\_server/cron.py I have following function:
```
def cronSendEmail():
    print("Hello")
    return True
```
4. In settings.py module:
```
INSTALLED_APPS = (
......
'django_crontab',
)
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail')
]
```
To me everything is defined in place, but when I run `python manage.py runserver` in the virtual environment, the console doesn't print anything.
```
System check identified no issues (0 silenced).
July 26, 2016 - 12:12:52
Django version 1.8.1, using settings 'mediaserver.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
```
The 'django-crontab' module is not working. I followed its documentation here: <https://pypi.python.org/pypi/django-crontab>
|
2016/07/26
|
[
"https://Stackoverflow.com/questions/38589830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5287027/"
] |
Your code actually works. You may think that `print("Hello")` should appear in stdout, but it doesn't work that way, because cron doesn't use `stdout` and `stderr` for its output. To see actual results you should point to some log file in the `CRONJOBS` list: just put `'>> /path/to/log/file.log'` as the last argument, e.g.:
```
CRONJOBS = [
('*/1 * * * *', 'media_api_server.cron.cronSendEmail', '>> /path/to/log/file.log')
]
```
Also it might be helpful to redirect your errors to stdout too. For this you need to add `CRONTAB_COMMAND_SUFFIX = '2>&1'` to your `settings.py`.
|
Try to change the crontab with a first line like:
```
SHELL=/bin/bash
```
Create the new line at crontab with:
```
./manage.py crontab add
```
And change the line created by crontab library with the command:
>
> crontab -e
>
>
>
```
*/1 * * * * source /home/app/env/bin/activate && /home/app/manage.py crontab run 123HASHOFTASK123
```
| 13,238
|
55,476,131
|
I'm trying to implement a fastai pretrained language model, and it requires torch to work. After running the code, I got a problem with the `torch._C` import.
I run it on Linux, Python 3.7.1, via pip: torch 1.0.1.post2, CUDA V7.5.17. I'm getting this error:
```
Traceback (most recent call last):
File "pretrain_lm.py", line 7, in <module>
import fastai
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/__init__.py", line 1, in <module>
from .basic_train import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/basic_train.py", line 2, in <module>
from .torch_core import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/torch_core.py", line 2, in <module>
from .imports.torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/__init__.py", line 2, in <module>
from .torch import *
File "/home/andira/anaconda3/lib/python3.7/site-packages/fastai/imports/torch.py", line 1, in <module>
import torch, torch.nn.functional as F
File "/home/andira/anaconda3/lib/python3.7/site-packages/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
```
So I tried to run this line:
```
from torch._C import *
```
and got same result
```
ImportError: libtorch_python.so: cannot open shared object file: No such file or directory
```
I checked `/home/andira/anaconda3/lib/python3.7/site-packages/torch/lib` and there are only the `libcaffe2_gpu.so` and `libshm.so` files; I can't find libtorch\_python.so either. My question is: what actually is libtorch\_python.so? I've read some articles, and most of them talked about *undefined symbol*, not *cannot open shared object file: No such file or directory* like mine. I'm new to python and torch, so I really appreciate your answer.
|
2019/04/02
|
[
"https://Stackoverflow.com/questions/55476131",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6332850/"
] |
My problem is solved. I uninstalled torch twice:
```
pip uninstall torch
pip uninstall torch
```
and then reinstalled it:
```
pip install torch==1.0.1.post2
```
|
Try using a newer `pytorch`. Upgrade the library using the following command,
```
pip install -U torch==1.5
```
If you are working on Colab then use the following command,
```
!pip install -U torch==1.5
```
Still facing challenges on the library then install `detectron2` library also.
```
!pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.9/index.html
```
| 13,243
|
60,693,395
|
I created a small app in Django and runserver and admin works fine.
I wrote some tests which can call with `python manage.py test` and the tests pass.
Now I would like to call one particular test via PyCharm.
This fails like this:
```
/home/guettli/x/venv/bin/python
/snap/pycharm-community/179/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py
--path /home/guettli/x/xyz/tests.py
Launching pytest with arguments /home/guettli/x/xyz/tests.py in /home/guettli/x
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 --
cachedir: .pytest_cache
rootdir: /home/guettli/x
collecting ...
xyz/tests.py:None (xyz/tests.py)
xyz/tests.py:6: in <module>
from . import views
xyz/views.py:5: in <module>
from xyz.models import Term, SearchLog, GlobalConfig
xyz/models.py:1: in <module>
from django.contrib.auth.models import User
venv/lib/python3.6/site-packages/django/contrib/auth/models.py:2: in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
venv/lib/python3.6/site-packages/django/contrib/auth/base_user.py:47: in <module>
class AbstractBaseUser(models.Model):
venv/lib/python3.6/site-packages/django/db/models/base.py:107: in __new__
app_config = apps.get_containing_app_config(module)
venv/lib/python3.6/site-packages/django/apps/registry.py:252: in get_containing_app_config
self.check_apps_ready()
venv/lib/python3.6/site-packages/django/apps/registry.py:134: in check_apps_ready
settings.INSTALLED_APPS
venv/lib/python3.6/site-packages/django/conf/__init__.py:76: in __getattr__
self._setup(name)
venv/lib/python3.6/site-packages/django/conf/__init__.py:61: in _setup
% (desc, ENVIRONMENT_VARIABLE))
E django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS,
but settings are not configured. You must either define the environment variable
DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Assertion failed
collected 0 items / 1 error
```
I understand the background: My app `xyz` is reusable. It does not contain any settings.
The app does not know (and should not know) my project. But the settings are in my project.
How to solve this?
I read the great django docs, but could not find a solution.
How to set `DJANGO_SETTINGS_MODULE` if you execute one particular test directly from PyCharm with "Run" (ctrl-shift-F10)?
|
2020/03/15
|
[
"https://Stackoverflow.com/questions/60693395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633961/"
] |
You can add DJANGO\_SETTINGS\_MODULE as an environment variable:
In the menu: Run -> Edit Configurations -> Templates -> Python Tests -> Unittests
[](https://i.stack.imgur.com/XKKTG.png)
And delete old "Unittests for tests...." entries.
|
You can specify the settings in your test command
Assuming your in the xyz directory, and the structure is
```
/xyz
- manage.py
- xyz/
- settings.py
```
The following command should work
```
python manage.py test --settings=xyz.settings
```
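If you'd rather not touch the run configuration or command line, a hypothetical `conftest.py` sketch can export the variable before anything imports Django models — the `xyz.settings` path is the assumed layout from this answer:

```python
import os

# Hypothetical conftest.py sketch: export the settings module first;
# in a real project, django.setup() or the test runner would then
# pick this up before any model import.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'xyz.settings')
print(os.environ['DJANGO_SETTINGS_MODULE'])
```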
| 13,245
|
34,116,682
|
I am trying to save a Base64-encoded image with Python. The string is too large to post, but here is the image:
[](https://i.stack.imgur.com/JYHLV.jpg)
When received by Python, the last 2 characters are `==`, although the string is not formatted, so I do this:
```
import base64
data = "data:image/png;base64," + photo_base64.replace(" ", "+")
```
And then I do this
```
imgdata = base64.b64decode(data)
filename = 'some_image.jpg' # I assume you have a way of picking unique filenames
with open(filename, 'wb') as f:
f.write(imgdata)
```
But this causes this error
```
Traceback (most recent call last):
File "/var/www/cgi-bin/save_info.py", line 83, in <module>
imgdata = base64.b64decode(data)
File "/usr/lib64/python2.7/base64.py", line 76, in b64decode
raise TypeError(msg)
TypeError: Incorrect padding
```
I also printed out the length of the string once `data:image/png;base64,` has been added and the spaces replaced with `+`s; it has a length of 34354. I have tried a bunch of different images, but for all of them, when I try to open the saved file, it says the file is damaged.
What is happening, and why is the file corrupt?
Thanks
**EDIT**
Here is some base64 that also failed
```
iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAADBQTFRFA6b1q Ci5/f2lt/9yu3 Y8v2cMpb1/DSJbz5i9R2NLwfLrWbw m T8I8////////SvMAbAAAABB0Uk5T////////////////////AOAjXRkAAACYSURBVHjaLI8JDgMgCAQ5BVG3//9t0XYTE2Y5BPq0IGpwtxtTP4G5IFNMnmEKuCopPKUN8VTNpEylNgmCxjZa2c1kafpHSvMkX6sWe7PTkwRX1dY7gdyMRHZdZ98CF6NZT2ecMVaL9tmzTtMYcwbP y3XeTgZkF5s1OSHwRzo1fkILgWC5R0X4BHYu7t/136wO71DbvwVYADUkQegpokSjwAAAABJRU5ErkJggg==
```
This is what I receive in my python script from the POST Request
Note I have not replace the spaces with +'s
|
2015/12/06
|
[
"https://Stackoverflow.com/questions/34116682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3630528/"
] |
There is no need to prepend `data:image/png;base64,`. I tried the code below and it works fine.
```
import base64
data = 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAADBQTFRFA6b1q Ci5/f2lt/9yu3 Y8v2cMpb1/DSJbz5i9R2NLwfLrWbw m T8I8////////SvMAbAAAABB0Uk5T////////////////////AOAjXRkAAACYSURBVHjaLI8JDgMgCAQ5BVG3//9t0XYTE2Y5BPq0IGpwtxtTP4G5IFNMnmEKuCopPKUN8VTNpEylNgmCxjZa2c1kafpHSvMkX6sWe7PTkwRX1dY7gdyMRHZdZ98CF6NZT2ecMVaL9tmzTtMYcwbP y3XeTgZkF5s1OSHwRzo1fkILgWC5R0X4BHYu7t/136wO71DbvwVYADUkQegpokSjwAAAABJRU5ErkJggg=='.replace(' ', '+')
imgdata = base64.b64decode(data)
filename = 'some_image.jpg' # I assume you have a way of picking unique filenames
with open(filename, 'wb') as f:
f.write(imgdata)
```
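A small hedged sketch of the two usual repairs behind this answer: form/URL transport turns `+` into spaces, and a transport may also drop the trailing `=` padding, so restore both before decoding (the helper name is mine, for illustration):

```python
import base64

def decode_b64(data: str) -> bytes:
    # Undo the space-for-plus substitution from form/URL transport.
    data = data.replace(' ', '+')
    # Pad back to a multiple of 4 if the trailing '=' were stripped.
    data += '=' * (-len(data) % 4)
    return base64.b64decode(data)

print(decode_b64('aGVsbG8'))   # unpadded input decodes fine
print(decode_b64('aGVsbG8='))  # already-padded input is untouched
```

Both calls decode the same bytes; only the padding differs.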
|
If you append **data:image/png;base64,** to the data, you get an error. If your input has this prefix, you must strip it first:
```
new_data = initial_data.replace('data:image/png;base64,', '')
```
| 13,251
|
56,838,851
|
I am getting an error when execute robot scripts through CMD (windows)
>
> 'robot' is not recognized as an internal or external command, operable
> program or batch file"
>
>
>
My team installed Python in C:\Python27 folder and I installed ROBOT framework and all required libraries with the following command
```
"python -m pip install -U setuptools --user, python -m pip install -U robotframework --user"
```
We are not authorized to install anything on the C drive, and all libraries were installed successfully. But when I try to execute the scripts through CMD, I get the error.
Note:
1. All Robot Libaries are installed in "C:\Users\bab\AppData\Roaming\Python\Python27\site-packages"
2. I did set up the Env variables with above path
3. Scripts are working through ECLIPSE and using the command below
Command
```
C:\Python27\python.exe -m robot.run --listener C:\Users\bab\AppData \Local\Temp\RobotTempDir2497069958731862237\TestRunnerAgent.py:61106 --argumentfile C:\Users\bab\AppData\Local\Temp\RobotTempDir2497069958731862237\args_c4fe2372.arg C:\Users\bab\Robot_Sframe\E2Automation
```
Please help me, as this step is key to integrating my scripts with Jenkins.
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56838851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9717530/"
] |
I installed Robot Framework on the Jenkins server using `sudo pip3 install robotframework`, and now my Jenkins pipeline script can run my robot scripts.
|
I'm not very comfortable with the Windows environment, so let me give my two cents:
1) Try to set the PATH or PYTHONPATH to the location where your robot file is
2) Try to run robot from a Python script. I saw that you tried it above, but take a look at the RF User Guide and see if you are doing something wrong:
<https://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#using-robot-and-rebot-scripts>
maybe just
```
python -m robot ....
```
is fine
| 13,252
|
56,646,940
|
I load a saved h5 model and want to save the model as pb.
The model is saved during training with the `tf.keras.callbacks.ModelCheckpoint` callback function.
TF version: 2.0.0a
**edit**: same issue also with 2.0.0-beta1
My steps to save a pb:
1. I first set `K.set_learning_phase(0)`
2. then I load the model with `tf.keras.models.load_model`
3. Then, I define the `freeze_session()` function.
4. (optional I compile the model)
5. Then using the `freeze_session()` function with `tf.keras.backend.get_session`
**The error** I get, with and without compiling:
>
> AttributeError: module 'tensorflow.python.keras.api.\_v2.keras.backend'
> has no attribute 'get\_session'
>
>
>
**My Question:**
1. Does TF2 not have the `get_session` anymore?
(I know that `tf.contrib.saved_model.save_keras_model` does not exist anymore and I also tried `tf.saved_model.save` which not really worked)
2. Or does `get_session` only work when I actually train the model and just loading the h5 does not work
**Edit**: Also with a freshly trained session, no get\_session is available.
* If so, how would I go about to convert the h5 without training to pb? Is there a good tutorial?
Thank you for your help
---
**update**:
Since the official release of TF 2.x, the graph/session concept has changed, so the SavedModel API should be used.
You can use `tf.compat.v1.disable_eager_execution()` with TF 2.x and it will result in a pb file. However, I am not sure what kind of pb file it is, as the saved-model composition changed from TF1 to TF2. I will keep digging.
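As a rough sketch of the SavedModel route (assuming TF 2.x is installed; the toy model and export directory below are invented, not from the original post):

```python
import tensorflow as tf

# Toy stand-in model; in practice you would load yours, e.g.
# model = tf.keras.models.load_model("model.h5")
inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

# Writes a directory containing saved_model.pb plus a variables/ folder
tf.saved_model.save(model, "exported_model")
```

Note this produces a SavedModel-style `saved_model.pb` (graph plus separate variables), not a single TF1-style frozen graph pb.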
|
2019/06/18
|
[
"https://Stackoverflow.com/questions/56646940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6510273/"
] |
This is how I save the `h5` model as a `pb`:
```py
import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras
# necessary !!!
tf.compat.v1.disable_eager_execution()
h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()
# save pb
with K.get_session() as sess:
output_names = [out.op.name for out in model.outputs]
input_graph_def = sess.graph.as_graph_def()
for node in input_graph_def.node:
node.device = ""
graph = graph_util.remove_training_nodes(input_graph_def)
graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
tf.io.write_graph(graph_frozen, '/path/to/pb/model.pb', as_text=False)
logging.info("save pb successfully!")
```
I use TF2 to convert the model like this:
1. pass `keras.callbacks.ModelCheckpoint(save_weights_only=True)` to `model.fit` and save `checkpoint` while training;
2. After training, `self.model.load_weights(self.checkpoint_path)` load `checkpoint`;
3. `self.model.save(h5_path, overwrite=True, include_optimizer=False)` save as `h5`;
4. convert `h5` to `pb` just like above;
|
I'm wondering the same thing, as I'm trying to use get\_session() and set\_session() to free up GPU memory. These functions seem to be missing and [aren't in the TF2.0 Keras documentation](http://faroit.com/keras-docs/2.0.0/backend/). I imagine it has something to do with TensorFlow's switch to eager execution, as direct session access is no longer required.
| 13,261
|
62,655,911
|
I'm looking at using Pandas UDFs in PySpark (v3). For a number of reasons, I understand iterating and UDFs in general are bad, and I understand that the simple examples I show here can be done in PySpark using SQL functions - all of that is beside the point!
I've been following this guide: <https://databricks.com/blog/2020/05/20/new-pandas-udfs-and-python-type-hints-in-the-upcoming-release-of-apache-spark-3-0.html>
I have a simple example working from the docs:
```
import pandas as pd
from typing import Iterator, Tuple
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, pandas_udf
spark = SparkSession.builder.getOrCreate()
pdf = pd.DataFrame(([1, 2, 3], [4, 5, 6], [8, 9, 0]), columns=["x", "y", "z"])
df = spark.createDataFrame(pdf)
@pandas_udf('long')
def test1(x: pd.Series, y: pd.Series) -> pd.Series:
return x + y
df.select(test1(col("x"), col("y"))).show()
```
And this works well for performing basic arithmetic - if I want to add, multiply etc. this is straightforward (but it is also straightforward in PySpark without pandas UDFs).
I want to do a comparison between the values for example:
```
@pandas_udf('long')
def test2(x: pd.Series, y: pd.Series) -> pd.Series:
return x if x > y else y
df.select(test2(col("x"), col("y"))).show()
```
This will error with `ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().`. I understand that it is evaluating the series rather than the row value.
So there is an iterator example. Again this works fine for the basic arithmetic example they provide. But if I try to apply logic:
```
@pandas_udf("long")
def test3(batch_iter: Iterator[Tuple[pd.Series, pd.Series]]) -> Iterator[pd.Series]:
for x, y in batch_iter:
yield x if x > y else y
df.select(test3(col("x"), col("y"))).show()
```
I get the same ValueError as before.
So my question is how should I perform row by row comparisons like this? Is it possible in a vectorised function? And if not then what are the use cases for them?
|
2020/06/30
|
[
"https://Stackoverflow.com/questions/62655911",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802510/"
] |
I figured this out. So simple after you write it down and publish the problem to the world.
All that needs to happen is to return an array and then convert to a Pandas Series:
```
@pandas_udf('long')
def test4(x: pd.Series, y: pd.Series) -> pd.Series:
return pd.Series([a if a > b else b for a, b in zip(x, y)])
df.select(test4(col("x"),col("y"))).show()
```
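For what it's worth, the comparison can also stay fully vectorised instead of looping in Python: `Series.where` keeps values where the condition holds and fills in from the other series elsewhere. The sample data here is made up:

```python
import pandas as pd

x = pd.Series([1, 5, 8])
y = pd.Series([2, 4, 9])

# Element-wise max without a Python-level loop:
# keep x where x > y, otherwise take the value from y.
result = x.where(x > y, y)
print(result.tolist())  # → [2, 5, 9]
```

Since `x.where(x > y, y)` is already a `pd.Series`, the same expression should be returnable directly from the `pandas_udf` body.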
|
I've spent the last two days looking for this answer, thank you simon\_dmorias!
I needed a slightly modified example here. I'm breaking out the single pandas\_udf into multiple components for easier management. Here is an example of what I'm using for others to reference:
```
xdf = pd.DataFrame(([1, 2, 3,'Fixed'], [4, 5, 6,'Variable'], [8, 9, 0,'Adjustable']), columns=["x", "y", "z", "Description"])
df = spark.createDataFrame(xdf)
def fnRate(x):
    # Categorise each description; iterate over the Series values directly
    # (zip(x) would wrap each value in a 1-tuple)
    return pd.Series(['Fixed' if 'Fixed' in str(v) else 'Variable' if 'Variable' in str(v) else 'Other' for v in x])
@pandas_udf('string')
def fnRateRecommended(Description: pd.Series) -> pd.Series:
varProduct = fnRate(Description)
return varProduct
# call the function (assumes: import pyspark.sql.functions as sf)
df.withColumn("Recommendation", fnRateRecommended(sf.col("Description"))).show()
```
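Because the helper only touches pandas objects, it can be unit-tested without Spark at all. A quick sketch with invented sample values:

```python
import pandas as pd

def fn_rate(descriptions):
    # Same categorisation logic as fnRate, applied to a plain pandas Series
    return pd.Series(
        ['Fixed' if 'Fixed' in str(v)
         else 'Variable' if 'Variable' in str(v)
         else 'Other'
         for v in descriptions]
    )

result = fn_rate(pd.Series(['Fixed', 'Variable', 'Adjustable']))
print(result.tolist())  # → ['Fixed', 'Variable', 'Other']
```

Keeping the pandas logic in plain functions like this, and only wrapping them with `@pandas_udf` at the edge, makes the batch logic much easier to debug.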
| 13,263
|