| commit (stringlengths 40-40) | old_file (stringlengths 4-118) | new_file (stringlengths 4-118) | old_contents (stringlengths 0-2.94k) | new_contents (stringlengths 1-4.43k) | subject (stringlengths 15-444) | message (stringlengths 16-3.45k) | lang (stringclasses, 1 value) | license (stringclasses, 13 values) | repos (stringlengths 5-43.2k) | prompt (stringlengths 17-4.58k) | response (stringlengths 1-4.43k) | prompt_tagged (stringlengths 58-4.62k) | response_tagged (stringlengths 1-4.43k) | text (stringlengths 132-7.29k) | text_tagged (stringlengths 173-7.33k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
aee32ebbd0aa13a03d6c69eef8505f922c274af1
|
tools/cvs2svn/profile-cvs2svn.py
|
tools/cvs2svn/profile-cvs2svn.py
|
#!/usr/bin/env python
#
# Use this script to profile cvs2svn.py using Python's hotshot profiler.
#
# The profile data is stored in cvs2svn.hotshot. To view the data using
# hotshot, run the following in python:
#
# import hotshot.stats
# stats = hotshot.stats.load('cvs2svn.hotshot')
# stats.strip_dirs()
# stats.sort_stats('time', 'calls')
# stats.print_stats(20)
#
# It is also possible (and a lot better) to use kcachegrind to view the data.
# To do so, you must first convert the data to the cachegrind format using
# hotshot2cachegrind, which you can download from the following URL:
#
# http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindContribPython
#
# Convert the data using the following command:
#
# hotshot2cachegrind -o cachegrind.out cvs2svn.hotshot
#
# Depending on the size of the repository, this can take a long time. When
# the conversion is done, simply open cachegrind.out in kcachegrind.
import cvs2svn, hotshot
prof = hotshot.Profile('cvs2svn.hotshot')
prof.runcall(cvs2svn.main)
prof.close()
|
Add a new script to simplify profiling of cvs2svn.py. Document in the script how to use kcachegrind to view the results.
|
Add a new script to simplify profiling of cvs2svn.py. Document in the
script how to use kcachegrind to view the results.
* tools/cvs2svn/profile-cvs2svn.py: New script.
|
Python
|
apache-2.0
|
jmckaskill/subversion,jmckaskill/subversion,jmckaskill/subversion,jmckaskill/subversion,jmckaskill/subversion,jmckaskill/subversion,jmckaskill/subversion,jmckaskill/subversion
|
Add a new script to simplify profiling of cvs2svn.py. Document in the
script how to use kcachegrind to view the results.
* tools/cvs2svn/profile-cvs2svn.py: New script.
|
#!/usr/bin/env python
#
# Use this script to profile cvs2svn.py using Python's hotshot profiler.
#
# The profile data is stored in cvs2svn.hotshot. To view the data using
# hotshot, run the following in python:
#
# import hotshot.stats
# stats = hotshot.stats.load('cvs2svn.hotshot')
# stats.strip_dirs()
# stats.sort_stats('time', 'calls')
# stats.print_stats(20)
#
# It is also possible (and a lot better) to use kcachegrind to view the data.
# To do so, you must first convert the data to the cachegrind format using
# hotshot2cachegrind, which you can download from the following URL:
#
# http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindContribPython
#
# Convert the data using the following command:
#
# hotshot2cachegrind -o cachegrind.out cvs2svn.hotshot
#
# Depending on the size of the repository, this can take a long time. When
# the conversion is done, simply open cachegrind.out in kcachegrind.
import cvs2svn, hotshot
prof = hotshot.Profile('cvs2svn.hotshot')
prof.runcall(cvs2svn.main)
prof.close()
|
<commit_before><commit_msg>Add a new script to simplify profiling of cvs2svn.py. Document in the
script how to use kcachegrind to view the results.
* tools/cvs2svn/profile-cvs2svn.py: New script.<commit_after>
|
#!/usr/bin/env python
#
# Use this script to profile cvs2svn.py using Python's hotshot profiler.
#
# The profile data is stored in cvs2svn.hotshot. To view the data using
# hotshot, run the following in python:
#
# import hotshot.stats
# stats = hotshot.stats.load('cvs2svn.hotshot')
# stats.strip_dirs()
# stats.sort_stats('time', 'calls')
# stats.print_stats(20)
#
# It is also possible (and a lot better) to use kcachegrind to view the data.
# To do so, you must first convert the data to the cachegrind format using
# hotshot2cachegrind, which you can download from the following URL:
#
# http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindContribPython
#
# Convert the data using the following command:
#
# hotshot2cachegrind -o cachegrind.out cvs2svn.hotshot
#
# Depending on the size of the repository, this can take a long time. When
# the conversion is done, simply open cachegrind.out in kcachegrind.
import cvs2svn, hotshot
prof = hotshot.Profile('cvs2svn.hotshot')
prof.runcall(cvs2svn.main)
prof.close()
|
Add a new script to simplify profiling of cvs2svn.py. Document in the
script how to use kcachegrind to view the results.
* tools/cvs2svn/profile-cvs2svn.py: New script.#!/usr/bin/env python
#
# Use this script to profile cvs2svn.py using Python's hotshot profiler.
#
# The profile data is stored in cvs2svn.hotshot. To view the data using
# hotshot, run the following in python:
#
# import hotshot.stats
# stats = hotshot.stats.load('cvs2svn.hotshot')
# stats.strip_dirs()
# stats.sort_stats('time', 'calls')
# stats.print_stats(20)
#
# It is also possible (and a lot better) to use kcachegrind to view the data.
# To do so, you must first convert the data to the cachegrind format using
# hotshot2cachegrind, which you can download from the following URL:
#
# http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindContribPython
#
# Convert the data using the following command:
#
# hotshot2cachegrind -o cachegrind.out cvs2svn.hotshot
#
# Depending on the size of the repository, this can take a long time. When
# the conversion is done, simply open cachegrind.out in kcachegrind.
import cvs2svn, hotshot
prof = hotshot.Profile('cvs2svn.hotshot')
prof.runcall(cvs2svn.main)
prof.close()
|
<commit_before><commit_msg>Add a new script to simplify profiling of cvs2svn.py. Document in the
script how to use kcachegrind to view the results.
* tools/cvs2svn/profile-cvs2svn.py: New script.<commit_after>#!/usr/bin/env python
#
# Use this script to profile cvs2svn.py using Python's hotshot profiler.
#
# The profile data is stored in cvs2svn.hotshot. To view the data using
# hotshot, run the following in python:
#
# import hotshot.stats
# stats = hotshot.stats.load('cvs2svn.hotshot')
# stats.strip_dirs()
# stats.sort_stats('time', 'calls')
# stats.print_stats(20)
#
# It is also possible (and a lot better) to use kcachegrind to view the data.
# To do so, you must first convert the data to the cachegrind format using
# hotshot2cachegrind, which you can download from the following URL:
#
# http://kcachegrind.sourceforge.net/cgi-bin/show.cgi/KcacheGrindContribPython
#
# Convert the data using the following command:
#
# hotshot2cachegrind -o cachegrind.out cvs2svn.hotshot
#
# Depending on the size of the repository, this can take a long time. When
# the conversion is done, simply open cachegrind.out in kcachegrind.
import cvs2svn, hotshot
prof = hotshot.Profile('cvs2svn.hotshot')
prof.runcall(cvs2svn.main)
prof.close()
|
|
410b447f54838e4a28b28aa1a027bd058520d9b0
|
Python/HARPS-e2ds-to-order.py
|
Python/HARPS-e2ds-to-order.py
|
#!/usr/bin/env python
# encoding: utf-8
"""
HARPS-e2ds-to-order.py
Created by Jonathan Whitmore on 2011-10-14.
Copyright (c) 2011. All rights reserved.
"""
import sys
import os
import argparse
import pyfits as pf
import numpy as np
help_message = '''
Takes reduced HARPS***e2ds_A.fits data and reads the header to output a fits file that has the wavelength per pixel per order.
'''
class Usage(Exception):
def __init__(self, msg):
self.msg = msg
def main(argv=None):
parser = argparse.ArgumentParser(description='Process input file.')
parser.add_argument('f', type=str, help='input a filename')
args = parser.parse_args()
inputfile = vars(args)['f']
outputFITSfile = str('order_' + inputfile)
hdu = pf.open(inputfile)
print len(hdu[0].data)
polynomialOrder = hdu[0].header['HIERARCH ESO DRS CAL TH DEG LL']
orders = hdu[0].header['HIERARCH ESO DRS CAL LOC NBO']
coefficients = (polynomialOrder + 1) * orders # number of coefficients in the file
whatever = []
for x in xrange(coefficients):
whatever.append(hdu[0].header['HIERARCH ESO DRS CAL TH COEFF LL' + str(x)])
A = {}
for y in xrange(0,len(whatever),polynomialOrder+1):
A[y/(polynomialOrder+1)] = whatever[y:y+polynomialOrder+1]
for order in range(orders):
print "order: ", order
wavelength = []
for pixel in xrange(len(hdu[0].data[order])):
temp = 0.0
for x in range(polynomialOrder):
temp += A[order][x] * pixel ** x
wavelength.append(temp)
pf.append(outputFITSfile, np.array([np.array(wavelength), hdu[0].data[order], np.sqrt(np.abs(hdu[0].data[order]))]))
if __name__ == "__main__":
sys.exit(main())
|
Order by order of HARPS data
|
Order by order of HARPS data
|
Python
|
mit
|
jbwhit/CaliCompari
|
Order by order of HARPS data
|
#!/usr/bin/env python
# encoding: utf-8
"""
HARPS-e2ds-to-order.py
Created by Jonathan Whitmore on 2011-10-14.
Copyright (c) 2011. All rights reserved.
"""
import sys
import os
import argparse
import pyfits as pf
import numpy as np
help_message = '''
Takes reduced HARPS***e2ds_A.fits data and reads the header to output a fits file that has the wavelength per pixel per order.
'''
class Usage(Exception):
def __init__(self, msg):
self.msg = msg
def main(argv=None):
parser = argparse.ArgumentParser(description='Process input file.')
parser.add_argument('f', type=str, help='input a filename')
args = parser.parse_args()
inputfile = vars(args)['f']
outputFITSfile = str('order_' + inputfile)
hdu = pf.open(inputfile)
print len(hdu[0].data)
polynomialOrder = hdu[0].header['HIERARCH ESO DRS CAL TH DEG LL']
orders = hdu[0].header['HIERARCH ESO DRS CAL LOC NBO']
coefficients = (polynomialOrder + 1) * orders # number of coefficients in the file
whatever = []
for x in xrange(coefficients):
whatever.append(hdu[0].header['HIERARCH ESO DRS CAL TH COEFF LL' + str(x)])
A = {}
for y in xrange(0,len(whatever),polynomialOrder+1):
A[y/(polynomialOrder+1)] = whatever[y:y+polynomialOrder+1]
for order in range(orders):
print "order: ", order
wavelength = []
for pixel in xrange(len(hdu[0].data[order])):
temp = 0.0
for x in range(polynomialOrder):
temp += A[order][x] * pixel ** x
wavelength.append(temp)
pf.append(outputFITSfile, np.array([np.array(wavelength), hdu[0].data[order], np.sqrt(np.abs(hdu[0].data[order]))]))
if __name__ == "__main__":
sys.exit(main())
|
<commit_before><commit_msg>Order by order of HARPS data<commit_after>
|
#!/usr/bin/env python
# encoding: utf-8
"""
HARPS-e2ds-to-order.py
Created by Jonathan Whitmore on 2011-10-14.
Copyright (c) 2011. All rights reserved.
"""
import sys
import os
import argparse
import pyfits as pf
import numpy as np
help_message = '''
Takes reduced HARPS***e2ds_A.fits data and reads the header to output a fits file that has the wavelength per pixel per order.
'''
class Usage(Exception):
def __init__(self, msg):
self.msg = msg
def main(argv=None):
parser = argparse.ArgumentParser(description='Process input file.')
parser.add_argument('f', type=str, help='input a filename')
args = parser.parse_args()
inputfile = vars(args)['f']
outputFITSfile = str('order_' + inputfile)
hdu = pf.open(inputfile)
print len(hdu[0].data)
polynomialOrder = hdu[0].header['HIERARCH ESO DRS CAL TH DEG LL']
orders = hdu[0].header['HIERARCH ESO DRS CAL LOC NBO']
coefficients = (polynomialOrder + 1) * orders # number of coefficients in the file
whatever = []
for x in xrange(coefficients):
whatever.append(hdu[0].header['HIERARCH ESO DRS CAL TH COEFF LL' + str(x)])
A = {}
for y in xrange(0,len(whatever),polynomialOrder+1):
A[y/(polynomialOrder+1)] = whatever[y:y+polynomialOrder+1]
for order in range(orders):
print "order: ", order
wavelength = []
for pixel in xrange(len(hdu[0].data[order])):
temp = 0.0
for x in range(polynomialOrder):
temp += A[order][x] * pixel ** x
wavelength.append(temp)
pf.append(outputFITSfile, np.array([np.array(wavelength), hdu[0].data[order], np.sqrt(np.abs(hdu[0].data[order]))]))
if __name__ == "__main__":
sys.exit(main())
|
Order by order of HARPS data#!/usr/bin/env python
# encoding: utf-8
"""
HARPS-e2ds-to-order.py
Created by Jonathan Whitmore on 2011-10-14.
Copyright (c) 2011. All rights reserved.
"""
import sys
import os
import argparse
import pyfits as pf
import numpy as np
help_message = '''
Takes reduced HARPS***e2ds_A.fits data and reads the header to output a fits file that has the wavelength per pixel per order.
'''
class Usage(Exception):
def __init__(self, msg):
self.msg = msg
def main(argv=None):
parser = argparse.ArgumentParser(description='Process input file.')
parser.add_argument('f', type=str, help='input a filename')
args = parser.parse_args()
inputfile = vars(args)['f']
outputFITSfile = str('order_' + inputfile)
hdu = pf.open(inputfile)
print len(hdu[0].data)
polynomialOrder = hdu[0].header['HIERARCH ESO DRS CAL TH DEG LL']
orders = hdu[0].header['HIERARCH ESO DRS CAL LOC NBO']
coefficients = (polynomialOrder + 1) * orders # number of coefficients in the file
whatever = []
for x in xrange(coefficients):
whatever.append(hdu[0].header['HIERARCH ESO DRS CAL TH COEFF LL' + str(x)])
A = {}
for y in xrange(0,len(whatever),polynomialOrder+1):
A[y/(polynomialOrder+1)] = whatever[y:y+polynomialOrder+1]
for order in range(orders):
print "order: ", order
wavelength = []
for pixel in xrange(len(hdu[0].data[order])):
temp = 0.0
for x in range(polynomialOrder):
temp += A[order][x] * pixel ** x
wavelength.append(temp)
pf.append(outputFITSfile, np.array([np.array(wavelength), hdu[0].data[order], np.sqrt(np.abs(hdu[0].data[order]))]))
if __name__ == "__main__":
sys.exit(main())
|
<commit_before><commit_msg>Order by order of HARPS data<commit_after>#!/usr/bin/env python
# encoding: utf-8
"""
HARPS-e2ds-to-order.py
Created by Jonathan Whitmore on 2011-10-14.
Copyright (c) 2011. All rights reserved.
"""
import sys
import os
import argparse
import pyfits as pf
import numpy as np
help_message = '''
Takes reduced HARPS***e2ds_A.fits data and reads the header to output a fits file that has the wavelength per pixel per order.
'''
class Usage(Exception):
def __init__(self, msg):
self.msg = msg
def main(argv=None):
parser = argparse.ArgumentParser(description='Process input file.')
parser.add_argument('f', type=str, help='input a filename')
args = parser.parse_args()
inputfile = vars(args)['f']
outputFITSfile = str('order_' + inputfile)
hdu = pf.open(inputfile)
print len(hdu[0].data)
polynomialOrder = hdu[0].header['HIERARCH ESO DRS CAL TH DEG LL']
orders = hdu[0].header['HIERARCH ESO DRS CAL LOC NBO']
coefficients = (polynomialOrder + 1) * orders # number of coefficients in the file
whatever = []
for x in xrange(coefficients):
whatever.append(hdu[0].header['HIERARCH ESO DRS CAL TH COEFF LL' + str(x)])
A = {}
for y in xrange(0,len(whatever),polynomialOrder+1):
A[y/(polynomialOrder+1)] = whatever[y:y+polynomialOrder+1]
for order in range(orders):
print "order: ", order
wavelength = []
for pixel in xrange(len(hdu[0].data[order])):
temp = 0.0
for x in range(polynomialOrder):
temp += A[order][x] * pixel ** x
wavelength.append(temp)
pf.append(outputFITSfile, np.array([np.array(wavelength), hdu[0].data[order], np.sqrt(np.abs(hdu[0].data[order]))]))
if __name__ == "__main__":
sys.exit(main())
|
|
7881a0561dd74cfb792c06f824fde22e9764ea4c
|
CodeFights/arrayMaxConsecutiveSum.py
|
CodeFights/arrayMaxConsecutiveSum.py
|
#!/usr/local/bin/python
# Code Fights Array Max Consecutive Sum Problem
def arrayMaxConsecutiveSum(inputArray, k):
rolling_sum = sum(inputArray[:k])
max_sum = rolling_sum
i = 0
for j in range(k, len(inputArray)):
rolling_sum = rolling_sum + inputArray[j] - inputArray[i]
max_sum = max(rolling_sum, max_sum)
i += 1
return max_sum
def main():
tests = [
[[2, 3, 5, 1, 6], 2, 8],
[[2, 4, 10, 1], 2, 14],
[[1, 3, 2, 4], 3, 9],
[[3, 2, 1, 1], 1, 3]
]
for t in tests:
res = arrayMaxConsecutiveSum(t[0], t[1])
ans = t[2]
if ans == res:
print("PASSED: arrayMaxConsecutiveSum({}, {}) returned {}"
.format(t[0], t[1], res))
else:
print(("FAILED: arrayMaxConsecutiveSum({}, {}) returned {},"
"answer: {}").format(t[0], t[1], res, ans))
if __name__ == '__main__':
main()
|
Solve Code Fights array max consecutive sum problem
|
Solve Code Fights array max consecutive sum problem
|
Python
|
mit
|
HKuz/Test_Code
|
Solve Code Fights array max consecutive sum problem
|
#!/usr/local/bin/python
# Code Fights Array Max Consecutive Sum Problem
def arrayMaxConsecutiveSum(inputArray, k):
rolling_sum = sum(inputArray[:k])
max_sum = rolling_sum
i = 0
for j in range(k, len(inputArray)):
rolling_sum = rolling_sum + inputArray[j] - inputArray[i]
max_sum = max(rolling_sum, max_sum)
i += 1
return max_sum
def main():
tests = [
[[2, 3, 5, 1, 6], 2, 8],
[[2, 4, 10, 1], 2, 14],
[[1, 3, 2, 4], 3, 9],
[[3, 2, 1, 1], 1, 3]
]
for t in tests:
res = arrayMaxConsecutiveSum(t[0], t[1])
ans = t[2]
if ans == res:
print("PASSED: arrayMaxConsecutiveSum({}, {}) returned {}"
.format(t[0], t[1], res))
else:
print(("FAILED: arrayMaxConsecutiveSum({}, {}) returned {},"
"answer: {}").format(t[0], t[1], res, ans))
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Solve Code Fights array max consecutive sum problem<commit_after>
|
#!/usr/local/bin/python
# Code Fights Array Max Consecutive Sum Problem
def arrayMaxConsecutiveSum(inputArray, k):
rolling_sum = sum(inputArray[:k])
max_sum = rolling_sum
i = 0
for j in range(k, len(inputArray)):
rolling_sum = rolling_sum + inputArray[j] - inputArray[i]
max_sum = max(rolling_sum, max_sum)
i += 1
return max_sum
def main():
tests = [
[[2, 3, 5, 1, 6], 2, 8],
[[2, 4, 10, 1], 2, 14],
[[1, 3, 2, 4], 3, 9],
[[3, 2, 1, 1], 1, 3]
]
for t in tests:
res = arrayMaxConsecutiveSum(t[0], t[1])
ans = t[2]
if ans == res:
print("PASSED: arrayMaxConsecutiveSum({}, {}) returned {}"
.format(t[0], t[1], res))
else:
print(("FAILED: arrayMaxConsecutiveSum({}, {}) returned {},"
"answer: {}").format(t[0], t[1], res, ans))
if __name__ == '__main__':
main()
|
Solve Code Fights array max consecutive sum problem#!/usr/local/bin/python
# Code Fights Array Max Consecutive Sum Problem
def arrayMaxConsecutiveSum(inputArray, k):
rolling_sum = sum(inputArray[:k])
max_sum = rolling_sum
i = 0
for j in range(k, len(inputArray)):
rolling_sum = rolling_sum + inputArray[j] - inputArray[i]
max_sum = max(rolling_sum, max_sum)
i += 1
return max_sum
def main():
tests = [
[[2, 3, 5, 1, 6], 2, 8],
[[2, 4, 10, 1], 2, 14],
[[1, 3, 2, 4], 3, 9],
[[3, 2, 1, 1], 1, 3]
]
for t in tests:
res = arrayMaxConsecutiveSum(t[0], t[1])
ans = t[2]
if ans == res:
print("PASSED: arrayMaxConsecutiveSum({}, {}) returned {}"
.format(t[0], t[1], res))
else:
print(("FAILED: arrayMaxConsecutiveSum({}, {}) returned {},"
"answer: {}").format(t[0], t[1], res, ans))
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Solve Code Fights array max consecutive sum problem<commit_after>#!/usr/local/bin/python
# Code Fights Array Max Consecutive Sum Problem
def arrayMaxConsecutiveSum(inputArray, k):
rolling_sum = sum(inputArray[:k])
max_sum = rolling_sum
i = 0
for j in range(k, len(inputArray)):
rolling_sum = rolling_sum + inputArray[j] - inputArray[i]
max_sum = max(rolling_sum, max_sum)
i += 1
return max_sum
def main():
tests = [
[[2, 3, 5, 1, 6], 2, 8],
[[2, 4, 10, 1], 2, 14],
[[1, 3, 2, 4], 3, 9],
[[3, 2, 1, 1], 1, 3]
]
for t in tests:
res = arrayMaxConsecutiveSum(t[0], t[1])
ans = t[2]
if ans == res:
print("PASSED: arrayMaxConsecutiveSum({}, {}) returned {}"
.format(t[0], t[1], res))
else:
print(("FAILED: arrayMaxConsecutiveSum({}, {}) returned {},"
"answer: {}").format(t[0], t[1], res, ans))
if __name__ == '__main__':
main()
|
|
a18151a4675ef014680a4e74daba3b19670ab4f1
|
src/files_set_utime.py
|
src/files_set_utime.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Find name patterns in a tree"""
import KmdCmd
import KmdFiles
import os, sys, re, time
import logging
class KmdFindPatterns(KmdCmd.KmdCommand):
def extendParser(self):
super(KmdFindPatterns, self).extendParser()
#Extend parser
self.parser.add_argument('srctree', metavar='</path/to/srctree>', nargs=1, help='Root of a src tree')
def run(self):
patterns = {} #store patterns and matching file names
knownpatterns = [
r'^(\d\d\d\d)(\d\d)(\d\d)_.*?$',
]
for root, dirs, files in os.walk(self.args.srctree[0]):
for name in files:
#for each file in the folder
p = os.path.join(root, name)
m = None
for k in knownpatterns :
m = re.compile(k, re.I).match(name)
if m :
break
if not m :
logging.debug("not matching file : %s", p)
continue
logging.debug("Groups %s", m.groups())
logging.debug("Matching file : %s", p)
olddt = KmdFiles.get_modification_date(p)
logging.debug("OLDDT : %s", olddt)
newdt = olddt.replace(
year = int(m.groups()[0]),
month = int(m.groups()[1]),
day = int(m.groups()[2])
)
logging.debug("NEWDT : %s", newdt)
ts = time.mktime(newdt.timetuple())
logging.debug("TS : %s", ts)
logging.info("Changing time for %s to %s", p, ts)
if self.args.doit :
KmdFiles.set_modification_date(p, ts)
if __name__ == "__main__":
cmd = KmdFindPatterns(__doc__)
cmd.run()
|
Set time according to date in filename
|
Set time according to date in filename
|
Python
|
mit
|
pzia/keepmydatas
|
Set time according to date in filename
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Find name patterns in a tree"""
import KmdCmd
import KmdFiles
import os, sys, re, time
import logging
class KmdFindPatterns(KmdCmd.KmdCommand):
def extendParser(self):
super(KmdFindPatterns, self).extendParser()
#Extend parser
self.parser.add_argument('srctree', metavar='</path/to/srctree>', nargs=1, help='Root of a src tree')
def run(self):
patterns = {} #store patterns and matching file names
knownpatterns = [
r'^(\d\d\d\d)(\d\d)(\d\d)_.*?$',
]
for root, dirs, files in os.walk(self.args.srctree[0]):
for name in files:
#for each file in the folder
p = os.path.join(root, name)
m = None
for k in knownpatterns :
m = re.compile(k, re.I).match(name)
if m :
break
if not m :
logging.debug("not matching file : %s", p)
continue
logging.debug("Groups %s", m.groups())
logging.debug("Matching file : %s", p)
olddt = KmdFiles.get_modification_date(p)
logging.debug("OLDDT : %s", olddt)
newdt = olddt.replace(
year = int(m.groups()[0]),
month = int(m.groups()[1]),
day = int(m.groups()[2])
)
logging.debug("NEWDT : %s", newdt)
ts = time.mktime(newdt.timetuple())
logging.debug("TS : %s", ts)
logging.info("Changing time for %s to %s", p, ts)
if self.args.doit :
KmdFiles.set_modification_date(p, ts)
if __name__ == "__main__":
cmd = KmdFindPatterns(__doc__)
cmd.run()
|
<commit_before><commit_msg>Set time according to date in filename<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Find name patterns in a tree"""
import KmdCmd
import KmdFiles
import os, sys, re, time
import logging
class KmdFindPatterns(KmdCmd.KmdCommand):
def extendParser(self):
super(KmdFindPatterns, self).extendParser()
#Extend parser
self.parser.add_argument('srctree', metavar='</path/to/srctree>', nargs=1, help='Root of a src tree')
def run(self):
patterns = {} #store patterns and matching file names
knownpatterns = [
r'^(\d\d\d\d)(\d\d)(\d\d)_.*?$',
]
for root, dirs, files in os.walk(self.args.srctree[0]):
for name in files:
#for each file in the folder
p = os.path.join(root, name)
m = None
for k in knownpatterns :
m = re.compile(k, re.I).match(name)
if m :
break
if not m :
logging.debug("not matching file : %s", p)
continue
logging.debug("Groups %s", m.groups())
logging.debug("Matching file : %s", p)
olddt = KmdFiles.get_modification_date(p)
logging.debug("OLDDT : %s", olddt)
newdt = olddt.replace(
year = int(m.groups()[0]),
month = int(m.groups()[1]),
day = int(m.groups()[2])
)
logging.debug("NEWDT : %s", newdt)
ts = time.mktime(newdt.timetuple())
logging.debug("TS : %s", ts)
logging.info("Changing time for %s to %s", p, ts)
if self.args.doit :
KmdFiles.set_modification_date(p, ts)
if __name__ == "__main__":
cmd = KmdFindPatterns(__doc__)
cmd.run()
|
Set time according to date in filename#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Find name patterns in a tree"""
import KmdCmd
import KmdFiles
import os, sys, re, time
import logging
class KmdFindPatterns(KmdCmd.KmdCommand):
def extendParser(self):
super(KmdFindPatterns, self).extendParser()
#Extend parser
self.parser.add_argument('srctree', metavar='</path/to/srctree>', nargs=1, help='Root of a src tree')
def run(self):
patterns = {} #store patterns and matching file names
knownpatterns = [
r'^(\d\d\d\d)(\d\d)(\d\d)_.*?$',
]
for root, dirs, files in os.walk(self.args.srctree[0]):
for name in files:
#for each file in the folder
p = os.path.join(root, name)
m = None
for k in knownpatterns :
m = re.compile(k, re.I).match(name)
if m :
break
if not m :
logging.debug("not matching file : %s", p)
continue
logging.debug("Groups %s", m.groups())
logging.debug("Matching file : %s", p)
olddt = KmdFiles.get_modification_date(p)
logging.debug("OLDDT : %s", olddt)
newdt = olddt.replace(
year = int(m.groups()[0]),
month = int(m.groups()[1]),
day = int(m.groups()[2])
)
logging.debug("NEWDT : %s", newdt)
ts = time.mktime(newdt.timetuple())
logging.debug("TS : %s", ts)
logging.info("Changing time for %s to %s", p, ts)
if self.args.doit :
KmdFiles.set_modification_date(p, ts)
if __name__ == "__main__":
cmd = KmdFindPatterns(__doc__)
cmd.run()
|
<commit_before><commit_msg>Set time according to date in filename<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Find name patterns in a tree"""
import KmdCmd
import KmdFiles
import os, sys, re, time
import logging
class KmdFindPatterns(KmdCmd.KmdCommand):
def extendParser(self):
super(KmdFindPatterns, self).extendParser()
#Extend parser
self.parser.add_argument('srctree', metavar='</path/to/srctree>', nargs=1, help='Root of a src tree')
def run(self):
patterns = {} #store patterns and matching file names
knownpatterns = [
r'^(\d\d\d\d)(\d\d)(\d\d)_.*?$',
]
for root, dirs, files in os.walk(self.args.srctree[0]):
for name in files:
#for each file in the folder
p = os.path.join(root, name)
m = None
for k in knownpatterns :
m = re.compile(k, re.I).match(name)
if m :
break
if not m :
logging.debug("not matching file : %s", p)
continue
logging.debug("Groups %s", m.groups())
logging.debug("Matching file : %s", p)
olddt = KmdFiles.get_modification_date(p)
logging.debug("OLDDT : %s", olddt)
newdt = olddt.replace(
year = int(m.groups()[0]),
month = int(m.groups()[1]),
day = int(m.groups()[2])
)
logging.debug("NEWDT : %s", newdt)
ts = time.mktime(newdt.timetuple())
logging.debug("TS : %s", ts)
logging.info("Changing time for %s to %s", p, ts)
if self.args.doit :
KmdFiles.set_modification_date(p, ts)
if __name__ == "__main__":
cmd = KmdFindPatterns(__doc__)
cmd.run()
|
|
b6cafc70f43dbe98991d00f9413c389f908cbb38
|
science/physics_python/standalone_modules/pointmass_spring.py
|
science/physics_python/standalone_modules/pointmass_spring.py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
import numpy as np
class State:
def __init__(self, ndim):
self.ndim = ndim
self.position = np.zeros(self.ndim)
self.velocity = np.zeros(self.ndim)
class Model:
def __init__(self, mass=1., stiffness=1.):
self.mass = mass
self.stiffness = stiffness
def compute_acceleration(self, state):
# F = -k.x
self.stretch = state.position[0] # 0 is the "rest" point
total_external_force = -1. * self.stiffness * self.stretch
# a = f/m
acceleration = total_external_force / self.mass
return acceleration
def compute_new_state(state, acceleration, delta_time):
"Compute the forward kinematics with finite difference method."
new_state = State(state.ndim)
# Velocity (m/s) at time_n+1
new_state.velocity = state.velocity + acceleration * delta_time
# Position (m) at time_n+1
new_state.position = state.position + state.velocity * delta_time
return new_state
def main():
state = State(ndim=1)
state.position[0] = 1. # initial stiffness (0 is the "rest" point)
time = 0.
delta_time = 0.01
model = Model()
print("#time acceleration velocity position")
while time < 12:
time = time + delta_time
# Update state (physics)
acceleration = model.compute_acceleration(state)
state = compute_new_state(state, acceleration, delta_time)
print(time, acceleration, state.velocity[0], state.position[0])
if __name__ == "__main__":
main()
|
Add a snippet (Python physics).
|
Add a snippet (Python physics).
|
Python
|
mit
|
jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets
|
Add a snippet (Python physics).
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
import numpy as np
class State:
def __init__(self, ndim):
self.ndim = ndim
self.position = np.zeros(self.ndim)
self.velocity = np.zeros(self.ndim)
class Model:
def __init__(self, mass=1., stiffness=1.):
self.mass = mass
self.stiffness = stiffness
def compute_acceleration(self, state):
# F = -k.x
self.stretch = state.position[0] # 0 is the "rest" point
total_external_force = -1. * self.stiffness * self.stretch
# a = f/m
acceleration = total_external_force / self.mass
return acceleration
def compute_new_state(state, acceleration, delta_time):
"Compute the forward kinematics with finite difference method."
new_state = State(state.ndim)
# Velocity (m/s) at time_n+1
new_state.velocity = state.velocity + acceleration * delta_time
# Position (m) at time_n+1
new_state.position = state.position + state.velocity * delta_time
return new_state
def main():
state = State(ndim=1)
state.position[0] = 1. # initial stiffness (0 is the "rest" point)
time = 0.
delta_time = 0.01
model = Model()
print("#time acceleration velocity position")
while time < 12:
time = time + delta_time
# Update state (physics)
acceleration = model.compute_acceleration(state)
state = compute_new_state(state, acceleration, delta_time)
print(time, acceleration, state.velocity[0], state.position[0])
if __name__ == "__main__":
main()
|
<commit_before><commit_msg>Add a snippet (Python physics).<commit_after>
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
import numpy as np
class State:
def __init__(self, ndim):
self.ndim = ndim
self.position = np.zeros(self.ndim)
self.velocity = np.zeros(self.ndim)
class Model:
def __init__(self, mass=1., stiffness=1.):
self.mass = mass
self.stiffness = stiffness
def compute_acceleration(self, state):
# F = -k.x
self.stretch = state.position[0] # 0 is the "rest" point
total_external_force = -1. * self.stiffness * self.stretch
# a = f/m
acceleration = total_external_force / self.mass
return acceleration
def compute_new_state(state, acceleration, delta_time):
"Compute the forward kinematics with finite difference method."
new_state = State(state.ndim)
# Velocity (m/s) at time_n+1
new_state.velocity = state.velocity + acceleration * delta_time
# Position (m) at time_n+1
new_state.position = state.position + state.velocity * delta_time
return new_state
def main():
state = State(ndim=1)
state.position[0] = 1. # initial stiffness (0 is the "rest" point)
time = 0.
delta_time = 0.01
model = Model()
print("#time acceleration velocity position")
while time < 12:
time = time + delta_time
# Update state (physics)
acceleration = model.compute_acceleration(state)
state = compute_new_state(state, acceleration, delta_time)
print(time, acceleration, state.velocity[0], state.position[0])
if __name__ == "__main__":
main()
|
Add a snippet (Python physics).#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
import numpy as np
class State:
def __init__(self, ndim):
self.ndim = ndim
self.position = np.zeros(self.ndim)
self.velocity = np.zeros(self.ndim)
class Model:
def __init__(self, mass=1., stiffness=1.):
self.mass = mass
self.stiffness = stiffness
def compute_acceleration(self, state):
# F = -k.x
self.stretch = state.position[0] # 0 is the "rest" point
total_external_force = -1. * self.stiffness * self.stretch
# a = f/m
acceleration = total_external_force / self.mass
return acceleration
def compute_new_state(state, acceleration, delta_time):
"Compute the forward kinematics with finite difference method."
new_state = State(state.ndim)
# Velocity (m/s) at time_n+1
new_state.velocity = state.velocity + acceleration * delta_time
# Position (m) at time_n+1
new_state.position = state.position + state.velocity * delta_time
return new_state
def main():
state = State(ndim=1)
state.position[0] = 1. # initial stiffness (0 is the "rest" point)
time = 0.
delta_time = 0.01
model = Model()
print("#time acceleration velocity position")
while time < 12:
time = time + delta_time
# Update state (physics)
acceleration = model.compute_acceleration(state)
state = compute_new_state(state, acceleration, delta_time)
print(time, acceleration, state.velocity[0], state.position[0])
if __name__ == "__main__":
main()
|
<commit_before><commit_msg>Add a snippet (Python physics).<commit_after>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
import numpy as np
class State:
def __init__(self, ndim):
self.ndim = ndim
self.position = np.zeros(self.ndim)
self.velocity = np.zeros(self.ndim)
class Model:
def __init__(self, mass=1., stiffness=1.):
self.mass = mass
self.stiffness = stiffness
def compute_acceleration(self, state):
# F = -k.x
self.stretch = state.position[0] # 0 is the "rest" point
total_external_force = -1. * self.stiffness * self.stretch
# a = f/m
acceleration = total_external_force / self.mass
return acceleration
def compute_new_state(state, acceleration, delta_time):
"Compute the forward kinematics with finite difference method."
new_state = State(state.ndim)
# Velocity (m/s) at time_n+1
new_state.velocity = state.velocity + acceleration * delta_time
# Position (m) at time_n+1
new_state.position = state.position + state.velocity * delta_time
return new_state
def main():
state = State(ndim=1)
state.position[0] = 1. # initial stiffness (0 is the "rest" point)
time = 0.
delta_time = 0.01
model = Model()
print("#time acceleration velocity position")
while time < 12:
time = time + delta_time
# Update state (physics)
acceleration = model.compute_acceleration(state)
state = compute_new_state(state, acceleration, delta_time)
print(time, acceleration, state.velocity[0], state.position[0])
if __name__ == "__main__":
main()
|
|
abd97f71e54515c057e94f7d21aa953faba3f5fc
|
taskflow/examples/delayed_return.py
|
taskflow/examples/delayed_return.py
|
# -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import sys
from concurrent import futures
logging.basicConfig(level=logging.ERROR)
self_dir = os.path.abspath(os.path.dirname(__file__))
top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
os.pardir,
os.pardir))
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)
# INTRO: in this example linear_flow we will attach a listener to an engine
# and delay the return from a function until after the result of a task has
# occured in that engine. The engine will continue running (in the background)
# while the function will have returned.
import taskflow.engines
from taskflow.listeners import base
from taskflow.patterns import linear_flow as lf
from taskflow import states
from taskflow import task
from taskflow.utils import misc
class PokeFutureListener(base.ListenerBase):
def __init__(self, engine, future, task_name):
super(PokeFutureListener, self).__init__(
engine,
task_listen_for=(misc.Notifier.ANY,),
flow_listen_for=[])
self._future = future
self._task_name = task_name
def _task_receiver(self, state, details):
if state in (states.SUCCESS, states.FAILURE):
if details.get('task_name') == self._task_name:
if state == states.SUCCESS:
self._future.set_result(details['result'])
else:
failure = details['result']
self._future.set_exception(failure.exception)
class Hi(task.Task):
def execute(self):
# raise IOError("I broken")
return 'hi'
class Bye(task.Task):
def execute(self):
return 'bye'
def return_from_flow(pool):
wf = lf.Flow("root").add(Hi("hi"), Bye("bye"))
eng = taskflow.engines.load(wf, engine_conf='serial')
f = futures.Future()
watcher = PokeFutureListener(eng, f, 'hi')
watcher.register()
pool.submit(eng.run)
return (eng, f.result())
with futures.ThreadPoolExecutor(1) as pool:
engine, hi_result = return_from_flow(pool)
print(hi_result)
print(engine.storage.get_flow_state())
|
Add a example that activates a future when a result is ready
|
Add a example that activates a future when a result is ready
To allow for an engine to continue to run while at the same time
returning from a function when a component of that engine finishes
a pattern can be used that ties and engines listeners to the function
return, allowing for both to be used simulatenously.
Change-Id: Iab49e0c7b233138bc2d02247ab7aa3d99a82cd67
|
Python
|
apache-2.0
|
openstack/taskflow,junneyang/taskflow,openstack/taskflow,jimbobhickville/taskflow,pombredanne/taskflow-1,junneyang/taskflow,pombredanne/taskflow-1,jimbobhickville/taskflow
|
Add a example that activates a future when a result is ready
To allow for an engine to continue to run while at the same time
returning from a function when a component of that engine finishes
a pattern can be used that ties and engines listeners to the function
return, allowing for both to be used simulatenously.
Change-Id: Iab49e0c7b233138bc2d02247ab7aa3d99a82cd67
|
# -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import sys
from concurrent import futures
logging.basicConfig(level=logging.ERROR)
self_dir = os.path.abspath(os.path.dirname(__file__))
top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
os.pardir,
os.pardir))
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)
# INTRO: in this example linear_flow we will attach a listener to an engine
# and delay the return from a function until after the result of a task has
# occured in that engine. The engine will continue running (in the background)
# while the function will have returned.
import taskflow.engines
from taskflow.listeners import base
from taskflow.patterns import linear_flow as lf
from taskflow import states
from taskflow import task
from taskflow.utils import misc
class PokeFutureListener(base.ListenerBase):
def __init__(self, engine, future, task_name):
super(PokeFutureListener, self).__init__(
engine,
task_listen_for=(misc.Notifier.ANY,),
flow_listen_for=[])
self._future = future
self._task_name = task_name
def _task_receiver(self, state, details):
if state in (states.SUCCESS, states.FAILURE):
if details.get('task_name') == self._task_name:
if state == states.SUCCESS:
self._future.set_result(details['result'])
else:
failure = details['result']
self._future.set_exception(failure.exception)
class Hi(task.Task):
def execute(self):
# raise IOError("I broken")
return 'hi'
class Bye(task.Task):
def execute(self):
return 'bye'
def return_from_flow(pool):
wf = lf.Flow("root").add(Hi("hi"), Bye("bye"))
eng = taskflow.engines.load(wf, engine_conf='serial')
f = futures.Future()
watcher = PokeFutureListener(eng, f, 'hi')
watcher.register()
pool.submit(eng.run)
return (eng, f.result())
with futures.ThreadPoolExecutor(1) as pool:
engine, hi_result = return_from_flow(pool)
print(hi_result)
print(engine.storage.get_flow_state())
|
<commit_before><commit_msg>Add a example that activates a future when a result is ready
To allow for an engine to continue to run while at the same time
returning from a function when a component of that engine finishes
a pattern can be used that ties and engines listeners to the function
return, allowing for both to be used simulatenously.
Change-Id: Iab49e0c7b233138bc2d02247ab7aa3d99a82cd67<commit_after>
|
# -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import sys
from concurrent import futures
logging.basicConfig(level=logging.ERROR)
self_dir = os.path.abspath(os.path.dirname(__file__))
top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
os.pardir,
os.pardir))
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)
# INTRO: in this example linear_flow we will attach a listener to an engine
# and delay the return from a function until after the result of a task has
# occured in that engine. The engine will continue running (in the background)
# while the function will have returned.
import taskflow.engines
from taskflow.listeners import base
from taskflow.patterns import linear_flow as lf
from taskflow import states
from taskflow import task
from taskflow.utils import misc
class PokeFutureListener(base.ListenerBase):
def __init__(self, engine, future, task_name):
super(PokeFutureListener, self).__init__(
engine,
task_listen_for=(misc.Notifier.ANY,),
flow_listen_for=[])
self._future = future
self._task_name = task_name
def _task_receiver(self, state, details):
if state in (states.SUCCESS, states.FAILURE):
if details.get('task_name') == self._task_name:
if state == states.SUCCESS:
self._future.set_result(details['result'])
else:
failure = details['result']
self._future.set_exception(failure.exception)
class Hi(task.Task):
def execute(self):
# raise IOError("I broken")
return 'hi'
class Bye(task.Task):
def execute(self):
return 'bye'
def return_from_flow(pool):
wf = lf.Flow("root").add(Hi("hi"), Bye("bye"))
eng = taskflow.engines.load(wf, engine_conf='serial')
f = futures.Future()
watcher = PokeFutureListener(eng, f, 'hi')
watcher.register()
pool.submit(eng.run)
return (eng, f.result())
with futures.ThreadPoolExecutor(1) as pool:
engine, hi_result = return_from_flow(pool)
print(hi_result)
print(engine.storage.get_flow_state())
|
Add a example that activates a future when a result is ready
To allow for an engine to continue to run while at the same time
returning from a function when a component of that engine finishes
a pattern can be used that ties and engines listeners to the function
return, allowing for both to be used simulatenously.
Change-Id: Iab49e0c7b233138bc2d02247ab7aa3d99a82cd67# -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import sys
from concurrent import futures
logging.basicConfig(level=logging.ERROR)
self_dir = os.path.abspath(os.path.dirname(__file__))
top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
os.pardir,
os.pardir))
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)
# INTRO: in this example linear_flow we will attach a listener to an engine
# and delay the return from a function until after the result of a task has
# occured in that engine. The engine will continue running (in the background)
# while the function will have returned.
import taskflow.engines
from taskflow.listeners import base
from taskflow.patterns import linear_flow as lf
from taskflow import states
from taskflow import task
from taskflow.utils import misc
class PokeFutureListener(base.ListenerBase):
def __init__(self, engine, future, task_name):
super(PokeFutureListener, self).__init__(
engine,
task_listen_for=(misc.Notifier.ANY,),
flow_listen_for=[])
self._future = future
self._task_name = task_name
def _task_receiver(self, state, details):
if state in (states.SUCCESS, states.FAILURE):
if details.get('task_name') == self._task_name:
if state == states.SUCCESS:
self._future.set_result(details['result'])
else:
failure = details['result']
self._future.set_exception(failure.exception)
class Hi(task.Task):
def execute(self):
# raise IOError("I broken")
return 'hi'
class Bye(task.Task):
def execute(self):
return 'bye'
def return_from_flow(pool):
wf = lf.Flow("root").add(Hi("hi"), Bye("bye"))
eng = taskflow.engines.load(wf, engine_conf='serial')
f = futures.Future()
watcher = PokeFutureListener(eng, f, 'hi')
watcher.register()
pool.submit(eng.run)
return (eng, f.result())
with futures.ThreadPoolExecutor(1) as pool:
engine, hi_result = return_from_flow(pool)
print(hi_result)
print(engine.storage.get_flow_state())
|
<commit_before><commit_msg>Add a example that activates a future when a result is ready
To allow for an engine to continue to run while at the same time
returning from a function when a component of that engine finishes
a pattern can be used that ties and engines listeners to the function
return, allowing for both to be used simulatenously.
Change-Id: Iab49e0c7b233138bc2d02247ab7aa3d99a82cd67<commit_after># -*- coding: utf-8 -*-
# Copyright (C) 2014 Yahoo! Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import logging
import os
import sys
from concurrent import futures
logging.basicConfig(level=logging.ERROR)
self_dir = os.path.abspath(os.path.dirname(__file__))
top_dir = os.path.abspath(os.path.join(os.path.dirname(__file__),
os.pardir,
os.pardir))
sys.path.insert(0, top_dir)
sys.path.insert(0, self_dir)
# INTRO: in this example linear_flow we will attach a listener to an engine
# and delay the return from a function until after the result of a task has
# occured in that engine. The engine will continue running (in the background)
# while the function will have returned.
import taskflow.engines
from taskflow.listeners import base
from taskflow.patterns import linear_flow as lf
from taskflow import states
from taskflow import task
from taskflow.utils import misc
class PokeFutureListener(base.ListenerBase):
def __init__(self, engine, future, task_name):
super(PokeFutureListener, self).__init__(
engine,
task_listen_for=(misc.Notifier.ANY,),
flow_listen_for=[])
self._future = future
self._task_name = task_name
def _task_receiver(self, state, details):
if state in (states.SUCCESS, states.FAILURE):
if details.get('task_name') == self._task_name:
if state == states.SUCCESS:
self._future.set_result(details['result'])
else:
failure = details['result']
self._future.set_exception(failure.exception)
class Hi(task.Task):
def execute(self):
# raise IOError("I broken")
return 'hi'
class Bye(task.Task):
def execute(self):
return 'bye'
def return_from_flow(pool):
wf = lf.Flow("root").add(Hi("hi"), Bye("bye"))
eng = taskflow.engines.load(wf, engine_conf='serial')
f = futures.Future()
watcher = PokeFutureListener(eng, f, 'hi')
watcher.register()
pool.submit(eng.run)
return (eng, f.result())
with futures.ThreadPoolExecutor(1) as pool:
engine, hi_result = return_from_flow(pool)
print(hi_result)
print(engine.storage.get_flow_state())
|
|
8cf2a16ff98cc831734c82116c4a91ddb1094865
|
tests/integration/modules/cmdmod.py
|
tests/integration/modules/cmdmod.py
|
# Import python libs
import os
# Import salt libs
import integration
class CMDModuleTest(integration.ModuleCase):
'''
Validate the cmd module
'''
def test_run(self):
'''
cmd.run
'''
self.assertTrue(self.run_function('cmd.run', ['echo $SHELL']))
self.assertEqual(
self.run_function('cmd.run',
['echo $SHELL', 'shell=/bin/bash']),
'/bin/bash')
def test_stdout(self):
'''
cmd.run_stdout
'''
self.assertEqual(
self.run_function('cmd.run_stdout',
['echo "cheese"']),
'cheese')
def test_stderr(self):
'''
cmd.run_stderr
'''
self.assertEqual(
self.run_function('cmd.run_stderr',
['echo "cheese" 1>&2']),
'cheese')
def test_run_all(self):
'''
cmd.run_all
'''
ret = self.run_function('cmd.run_all', ['echo "cheese" 1>&2'])
self.assertTrue('pid' in ret)
self.assertTrue('retcode' in ret)
self.assertTrue('stdout' in ret)
self.assertTrue('stderr' in ret)
self.assertTrue(isinstance(ret.get('pid'), int))
self.assertTrue(isinstance(ret.get('retcode'), int))
self.assertTrue(isinstance(ret.get('stdout'), basestring))
self.assertTrue(isinstance(ret.get('stderr'), basestring))
self.assertEqual(ret.get('stderr'), 'cheese')
def test_retcode(self):
'''
cmd.retcode
'''
self.assertEqual(self.run_function('cmd.retcode', ['true']), 0)
self.assertEqual(self.run_function('cmd.retcode', ['false']), 1)
def test_which(self):
'''
cmd.which
'''
self.assertEqual(
self.run_function('cmd.which', ['echo']),
self.run_function('cmd.run', ['which echo']))
def test_has_exec(self):
'''
cmd.has_exec
'''
self.assertTrue(self.run_function('cmd.has_exec', ['python']))
self.assertFalse(self.run_function(
'cmd.has_exec',
['alllfsdfnwieulrrh9123857ygf']
))
def test_exec_code(self):
'''
cmd.exec_code
'''
code = '''
import sys
sys.stdout.write('cheese')
'''
self.assertEqual(
self.run_function('cmd.exec_code', ['python', code]),
'cheese'
)
|
Add tests for the cmd module
|
Add tests for the cmd module
|
Python
|
apache-2.0
|
saltstack/salt,saltstack/salt,saltstack/salt,saltstack/salt,saltstack/salt
|
Add tests for the cmd module
|
# Import python libs
import os
# Import salt libs
import integration
class CMDModuleTest(integration.ModuleCase):
'''
Validate the cmd module
'''
def test_run(self):
'''
cmd.run
'''
self.assertTrue(self.run_function('cmd.run', ['echo $SHELL']))
self.assertEqual(
self.run_function('cmd.run',
['echo $SHELL', 'shell=/bin/bash']),
'/bin/bash')
def test_stdout(self):
'''
cmd.run_stdout
'''
self.assertEqual(
self.run_function('cmd.run_stdout',
['echo "cheese"']),
'cheese')
def test_stderr(self):
'''
cmd.run_stderr
'''
self.assertEqual(
self.run_function('cmd.run_stderr',
['echo "cheese" 1>&2']),
'cheese')
def test_run_all(self):
'''
cmd.run_all
'''
ret = self.run_function('cmd.run_all', ['echo "cheese" 1>&2'])
self.assertTrue('pid' in ret)
self.assertTrue('retcode' in ret)
self.assertTrue('stdout' in ret)
self.assertTrue('stderr' in ret)
self.assertTrue(isinstance(ret.get('pid'), int))
self.assertTrue(isinstance(ret.get('retcode'), int))
self.assertTrue(isinstance(ret.get('stdout'), basestring))
self.assertTrue(isinstance(ret.get('stderr'), basestring))
self.assertEqual(ret.get('stderr'), 'cheese')
def test_retcode(self):
'''
cmd.retcode
'''
self.assertEqual(self.run_function('cmd.retcode', ['true']), 0)
self.assertEqual(self.run_function('cmd.retcode', ['false']), 1)
def test_which(self):
'''
cmd.which
'''
self.assertEqual(
self.run_function('cmd.which', ['echo']),
self.run_function('cmd.run', ['which echo']))
def test_has_exec(self):
'''
cmd.has_exec
'''
self.assertTrue(self.run_function('cmd.has_exec', ['python']))
self.assertFalse(self.run_function(
'cmd.has_exec',
['alllfsdfnwieulrrh9123857ygf']
))
def test_exec_code(self):
'''
cmd.exec_code
'''
code = '''
import sys
sys.stdout.write('cheese')
'''
self.assertEqual(
self.run_function('cmd.exec_code', ['python', code]),
'cheese'
)
|
<commit_before><commit_msg>Add tests for the cmd module<commit_after>
|
# Import python libs
import os
# Import salt libs
import integration
class CMDModuleTest(integration.ModuleCase):
'''
Validate the cmd module
'''
def test_run(self):
'''
cmd.run
'''
self.assertTrue(self.run_function('cmd.run', ['echo $SHELL']))
self.assertEqual(
self.run_function('cmd.run',
['echo $SHELL', 'shell=/bin/bash']),
'/bin/bash')
def test_stdout(self):
'''
cmd.run_stdout
'''
self.assertEqual(
self.run_function('cmd.run_stdout',
['echo "cheese"']),
'cheese')
def test_stderr(self):
'''
cmd.run_stderr
'''
self.assertEqual(
self.run_function('cmd.run_stderr',
['echo "cheese" 1>&2']),
'cheese')
def test_run_all(self):
'''
cmd.run_all
'''
ret = self.run_function('cmd.run_all', ['echo "cheese" 1>&2'])
self.assertTrue('pid' in ret)
self.assertTrue('retcode' in ret)
self.assertTrue('stdout' in ret)
self.assertTrue('stderr' in ret)
self.assertTrue(isinstance(ret.get('pid'), int))
self.assertTrue(isinstance(ret.get('retcode'), int))
self.assertTrue(isinstance(ret.get('stdout'), basestring))
self.assertTrue(isinstance(ret.get('stderr'), basestring))
self.assertEqual(ret.get('stderr'), 'cheese')
def test_retcode(self):
'''
cmd.retcode
'''
self.assertEqual(self.run_function('cmd.retcode', ['true']), 0)
self.assertEqual(self.run_function('cmd.retcode', ['false']), 1)
def test_which(self):
'''
cmd.which
'''
self.assertEqual(
self.run_function('cmd.which', ['echo']),
self.run_function('cmd.run', ['which echo']))
def test_has_exec(self):
'''
cmd.has_exec
'''
self.assertTrue(self.run_function('cmd.has_exec', ['python']))
self.assertFalse(self.run_function(
'cmd.has_exec',
['alllfsdfnwieulrrh9123857ygf']
))
def test_exec_code(self):
'''
cmd.exec_code
'''
code = '''
import sys
sys.stdout.write('cheese')
'''
self.assertEqual(
self.run_function('cmd.exec_code', ['python', code]),
'cheese'
)
|
Add tests for the cmd module# Import python libs
import os
# Import salt libs
import integration
class CMDModuleTest(integration.ModuleCase):
'''
Validate the cmd module
'''
def test_run(self):
'''
cmd.run
'''
self.assertTrue(self.run_function('cmd.run', ['echo $SHELL']))
self.assertEqual(
self.run_function('cmd.run',
['echo $SHELL', 'shell=/bin/bash']),
'/bin/bash')
def test_stdout(self):
'''
cmd.run_stdout
'''
self.assertEqual(
self.run_function('cmd.run_stdout',
['echo "cheese"']),
'cheese')
def test_stderr(self):
'''
cmd.run_stderr
'''
self.assertEqual(
self.run_function('cmd.run_stderr',
['echo "cheese" 1>&2']),
'cheese')
def test_run_all(self):
'''
cmd.run_all
'''
ret = self.run_function('cmd.run_all', ['echo "cheese" 1>&2'])
self.assertTrue('pid' in ret)
self.assertTrue('retcode' in ret)
self.assertTrue('stdout' in ret)
self.assertTrue('stderr' in ret)
self.assertTrue(isinstance(ret.get('pid'), int))
self.assertTrue(isinstance(ret.get('retcode'), int))
self.assertTrue(isinstance(ret.get('stdout'), basestring))
self.assertTrue(isinstance(ret.get('stderr'), basestring))
self.assertEqual(ret.get('stderr'), 'cheese')
def test_retcode(self):
'''
cmd.retcode
'''
self.assertEqual(self.run_function('cmd.retcode', ['true']), 0)
self.assertEqual(self.run_function('cmd.retcode', ['false']), 1)
def test_which(self):
'''
cmd.which
'''
self.assertEqual(
self.run_function('cmd.which', ['echo']),
self.run_function('cmd.run', ['which echo']))
def test_has_exec(self):
'''
cmd.has_exec
'''
self.assertTrue(self.run_function('cmd.has_exec', ['python']))
self.assertFalse(self.run_function(
'cmd.has_exec',
['alllfsdfnwieulrrh9123857ygf']
))
def test_exec_code(self):
'''
cmd.exec_code
'''
code = '''
import sys
sys.stdout.write('cheese')
'''
self.assertEqual(
self.run_function('cmd.exec_code', ['python', code]),
'cheese'
)
|
<commit_before><commit_msg>Add tests for the cmd module<commit_after># Import python libs
import os
# Import salt libs
import integration
class CMDModuleTest(integration.ModuleCase):
'''
Validate the cmd module
'''
def test_run(self):
'''
cmd.run
'''
self.assertTrue(self.run_function('cmd.run', ['echo $SHELL']))
self.assertEqual(
self.run_function('cmd.run',
['echo $SHELL', 'shell=/bin/bash']),
'/bin/bash')
def test_stdout(self):
'''
cmd.run_stdout
'''
self.assertEqual(
self.run_function('cmd.run_stdout',
['echo "cheese"']),
'cheese')
def test_stderr(self):
'''
cmd.run_stderr
'''
self.assertEqual(
self.run_function('cmd.run_stderr',
['echo "cheese" 1>&2']),
'cheese')
def test_run_all(self):
'''
cmd.run_all
'''
ret = self.run_function('cmd.run_all', ['echo "cheese" 1>&2'])
self.assertTrue('pid' in ret)
self.assertTrue('retcode' in ret)
self.assertTrue('stdout' in ret)
self.assertTrue('stderr' in ret)
self.assertTrue(isinstance(ret.get('pid'), int))
self.assertTrue(isinstance(ret.get('retcode'), int))
self.assertTrue(isinstance(ret.get('stdout'), basestring))
self.assertTrue(isinstance(ret.get('stderr'), basestring))
self.assertEqual(ret.get('stderr'), 'cheese')
def test_retcode(self):
'''
cmd.retcode
'''
self.assertEqual(self.run_function('cmd.retcode', ['true']), 0)
self.assertEqual(self.run_function('cmd.retcode', ['false']), 1)
def test_which(self):
'''
cmd.which
'''
self.assertEqual(
self.run_function('cmd.which', ['echo']),
self.run_function('cmd.run', ['which echo']))
def test_has_exec(self):
'''
cmd.has_exec
'''
self.assertTrue(self.run_function('cmd.has_exec', ['python']))
self.assertFalse(self.run_function(
'cmd.has_exec',
['alllfsdfnwieulrrh9123857ygf']
))
def test_exec_code(self):
'''
cmd.exec_code
'''
code = '''
import sys
sys.stdout.write('cheese')
'''
self.assertEqual(
self.run_function('cmd.exec_code', ['python', code]),
'cheese'
)
|
|
04e6770c489734b72d6f14dee7f3e7ac30517456
|
graph/graph_search.py
|
graph/graph_search.py
|
# -*- coding:utf-8 -*-
from collections import deque
from graph import Undigraph
def bfs(graph, s, t):
if s == t:
return
queue = deque()
pre = [ -1 ] * len(graph)
visited = [False] * len(graph)
visited[s] = True
queue.append(s)
while len(queue) > 0:
vertex = queue.popleft()
for adj_v in graph[vertex]:
if not visited[adj_v]:
pre[adj_v] = vertex
if adj_v == t:
return pre
visited[adj_v] = True
queue.append(adj_v)
return pre
def print_vertex_trace(prev, s, t, level=1):
if prev[t] != -1 and t != s:
print_vertex_trace(prev, s, prev[t], level+1)
if level == 1:
print("%d" % t)
else:
print("%d -> " % t, end="")
if __name__ == '__main__':
g = Undigraph(8)
g.add_edge(0, 1)
g.add_edge(0, 3)
g.add_edge(1, 2)
g.add_edge(1, 4)
g.add_edge(2, 5)
g.add_edge(3, 4)
g.add_edge(4, 5)
g.add_edge(4, 6)
g.add_edge(5, 7)
g.add_edge(6, 7)
print(g)
pre = bfs(g, 0, 7)
print_vertex_trace(pre, 0, 7)
|
Implement the breadth-first search of the undirected graph
|
Implement the breadth-first search of the undirected graph
|
Python
|
apache-2.0
|
free-free/algorithm,free-free/algorithm
|
Implement the breadth-first search of the undirected graph
|
# -*- coding:utf-8 -*-
from collections import deque
from graph import Undigraph
def bfs(graph, s, t):
if s == t:
return
queue = deque()
pre = [ -1 ] * len(graph)
visited = [False] * len(graph)
visited[s] = True
queue.append(s)
while len(queue) > 0:
vertex = queue.popleft()
for adj_v in graph[vertex]:
if not visited[adj_v]:
pre[adj_v] = vertex
if adj_v == t:
return pre
visited[adj_v] = True
queue.append(adj_v)
return pre
def print_vertex_trace(prev, s, t, level=1):
if prev[t] != -1 and t != s:
print_vertex_trace(prev, s, prev[t], level+1)
if level == 1:
print("%d" % t)
else:
print("%d -> " % t, end="")
if __name__ == '__main__':
g = Undigraph(8)
g.add_edge(0, 1)
g.add_edge(0, 3)
g.add_edge(1, 2)
g.add_edge(1, 4)
g.add_edge(2, 5)
g.add_edge(3, 4)
g.add_edge(4, 5)
g.add_edge(4, 6)
g.add_edge(5, 7)
g.add_edge(6, 7)
print(g)
pre = bfs(g, 0, 7)
print_vertex_trace(pre, 0, 7)
|
<commit_before><commit_msg>Implement the breadth-first search of the undirected graph<commit_after>
|
# -*- coding:utf-8 -*-
from collections import deque
from graph import Undigraph
def bfs(graph, s, t):
if s == t:
return
queue = deque()
pre = [ -1 ] * len(graph)
visited = [False] * len(graph)
visited[s] = True
queue.append(s)
while len(queue) > 0:
vertex = queue.popleft()
for adj_v in graph[vertex]:
if not visited[adj_v]:
pre[adj_v] = vertex
if adj_v == t:
return pre
visited[adj_v] = True
queue.append(adj_v)
return pre
def print_vertex_trace(prev, s, t, level=1):
if prev[t] != -1 and t != s:
print_vertex_trace(prev, s, prev[t], level+1)
if level == 1:
print("%d" % t)
else:
print("%d -> " % t, end="")
if __name__ == '__main__':
g = Undigraph(8)
g.add_edge(0, 1)
g.add_edge(0, 3)
g.add_edge(1, 2)
g.add_edge(1, 4)
g.add_edge(2, 5)
g.add_edge(3, 4)
g.add_edge(4, 5)
g.add_edge(4, 6)
g.add_edge(5, 7)
g.add_edge(6, 7)
print(g)
pre = bfs(g, 0, 7)
print_vertex_trace(pre, 0, 7)
|
Implement the breadth-first search of the undirected graph# -*- coding:utf-8 -*-
from collections import deque
from graph import Undigraph
def bfs(graph, s, t):
if s == t:
return
queue = deque()
pre = [ -1 ] * len(graph)
visited = [False] * len(graph)
visited[s] = True
queue.append(s)
while len(queue) > 0:
vertex = queue.popleft()
for adj_v in graph[vertex]:
if not visited[adj_v]:
pre[adj_v] = vertex
if adj_v == t:
return pre
visited[adj_v] = True
queue.append(adj_v)
return pre
def print_vertex_trace(prev, s, t, level=1):
if prev[t] != -1 and t != s:
print_vertex_trace(prev, s, prev[t], level+1)
if level == 1:
print("%d" % t)
else:
print("%d -> " % t, end="")
if __name__ == '__main__':
g = Undigraph(8)
g.add_edge(0, 1)
g.add_edge(0, 3)
g.add_edge(1, 2)
g.add_edge(1, 4)
g.add_edge(2, 5)
g.add_edge(3, 4)
g.add_edge(4, 5)
g.add_edge(4, 6)
g.add_edge(5, 7)
g.add_edge(6, 7)
print(g)
pre = bfs(g, 0, 7)
print_vertex_trace(pre, 0, 7)
|
<commit_before><commit_msg>Implement the breadth-first search of the undirected graph<commit_after># -*- coding:utf-8 -*-
from collections import deque
from graph import Undigraph
def bfs(graph, s, t):
if s == t:
return
queue = deque()
pre = [ -1 ] * len(graph)
visited = [False] * len(graph)
visited[s] = True
queue.append(s)
while len(queue) > 0:
vertex = queue.popleft()
for adj_v in graph[vertex]:
if not visited[adj_v]:
pre[adj_v] = vertex
if adj_v == t:
return pre
visited[adj_v] = True
queue.append(adj_v)
return pre
def print_vertex_trace(prev, s, t, level=1):
if prev[t] != -1 and t != s:
print_vertex_trace(prev, s, prev[t], level+1)
if level == 1:
print("%d" % t)
else:
print("%d -> " % t, end="")
if __name__ == '__main__':
g = Undigraph(8)
g.add_edge(0, 1)
g.add_edge(0, 3)
g.add_edge(1, 2)
g.add_edge(1, 4)
g.add_edge(2, 5)
g.add_edge(3, 4)
g.add_edge(4, 5)
g.add_edge(4, 6)
g.add_edge(5, 7)
g.add_edge(6, 7)
print(g)
pre = bfs(g, 0, 7)
print_vertex_trace(pre, 0, 7)
|
|
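Note: bfs() in the record above imports Undigraph from a sibling graph module that is not included in this record. The sketch below is an assumption, not the project's actual class; it implements only the interface the BFS code relies on (len(graph), iteration over graph[vertex], add_edge, and printing).

class Undigraph:
    """Minimal undirected graph backed by adjacency lists (illustrative sketch)."""
    def __init__(self, num_vertices):
        self.adj = [[] for _ in range(num_vertices)]
    def add_edge(self, s, t):
        # an undirected edge is stored in both adjacency lists
        self.adj[s].append(t)
        self.adj[t].append(s)
    def __getitem__(self, vertex):
        return self.adj[vertex]
    def __len__(self):
        return len(self.adj)
    def __str__(self):
        return "\n".join("%d: %s" % (v, adj) for v, adj in enumerate(self.adj))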
f86cdb5391aef61d9aa81e28a7777d394b37410a
|
Regression/PolynomialRegression/regularPolynomialRegression.py
|
Regression/PolynomialRegression/regularPolynomialRegression.py
|
# -*- coding: utf-8 -*-
"""Polynomial regression for machine learning.
Polynomial regression is a form of regression analysis in which the
relationship between the independent variable x and the dependent variable y is
modelled as an nth degree polynomial in x. Polynomial regression fits a
nonlinear relationship between the value of x and the corresponding conditional
mean of y, denoted E(y | x). Although polynomial regression fits a nonlinear model
to the data, as a statistical estimation problem it is linear, in the sense
that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data. For this reason, polynomial regression is
considered to be a special case of multiple linear regression.
Example:
$ python regularPolynomialRegression.py
Todo:
*
"""
|
Add Polynomial regression Python file
|
Add Polynomial regression Python file
|
Python
|
mit
|
a-holm/MachinelearningAlgorithms,a-holm/MachinelearningAlgorithms
|
Add Polynomial regression Python file
|
# -*- coding: utf-8 -*-
"""Polynomial regression for machine learning.
Polynomial regression is a form of regression analysis in which the
relationship between the independent variable x and the dependent variable y is
modelled as an nth degree polynomial in x. Polynomial regression fits a
nonlinear relationship between the value of x and the corresponding conditional
mean of y, denoted E(y | x). Although polynomial regression fits a nonlinear model
to the data, as a statistical estimation problem it is linear, in the sense
that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data. For this reason, polynomial regression is
considered to be a special case of multiple linear regression.
Example:
$ python regularPolynomialRegression.py
Todo:
*
"""
|
<commit_before><commit_msg>Add Polynomial regression Python file<commit_after>
|
# -*- coding: utf-8 -*-
"""Polynomial regression for machine learning.
Polynomial regression is a form of regression analysis in which the
relationship between the independent variable x and the dependent variable y is
modelled as an nth degree polynomial in x. Polynomial regression fits a
nonlinear relationship between the value of x and the corresponding conditional
mean of y, denoted E(y | x). Although polynomial regression fits a nonlinear model
to the data, as a statistical estimation problem it is linear, in the sense
that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data. For this reason, polynomial regression is
considered to be a special case of multiple linear regression.
Example:
$ python regularPolynomialRegression.py
Todo:
*
"""
|
Add Polynomial regression Python file# -*- coding: utf-8 -*-
"""Polynomial regression for machine learning.
Polynomial regression is a form of regression analysis in which the
relationship between the independent variable x and the dependent variable y is
modelled as an nth degree polynomial in x. Polynomial regression fits a
nonlinear relationship between the value of x and the corresponding conditional
mean of y, denoted E(y | x). Although polynomial regression fits a nonlinear model
to the data, as a statistical estimation problem it is linear, in the sense
that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data. For this reason, polynomial regression is
considered to be a special case of multiple linear regression.
Example:
$ python regularPolynomialRegression.py
Todo:
*
"""
|
<commit_before><commit_msg>Add Polynomial regression Python file<commit_after># -*- coding: utf-8 -*-
"""Polynomial regression for machine learning.
Polynomial regression is a form of regression analysis in which the
relationship between the independent variable x and the dependent variable y is
modelled as an nth degree polynomial in x. Polynomial regression fits a
nonlinear relationship between the value of x and the corresponding conditional
mean of y, denoted E(y | x). Although polynomial regression fits a nonlinear model
to the data, as a statistical estimation problem it is linear, in the sense
that the regression function E(y | x) is linear in the unknown parameters that
are estimated from the data. For this reason, polynomial regression is
considered to be a special case of multiple linear regression.
Example:
$ python regularPolynomialRegression.py
Todo:
*
"""
|
|
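The committed regularPolynomialRegression.py stops at the module docstring and leaves its Todo empty. Purely as an illustrative sketch of the technique the docstring describes (not the author's eventual implementation), a low-degree fit with numpy could look like this; the toy data and the chosen degree are assumptions:

import numpy as np
# toy data following y = 1 + 2x + 3x^2 with a little noise
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 1 + 2 * x + 3 * x ** 2 + rng.normal(scale=0.5, size=x.size)
# fit an nth degree polynomial; the estimation problem stays linear in the coefficients
coeffs = np.polyfit(x, y, deg=2)   # highest-order coefficient first
y_hat = np.polyval(coeffs, x)      # fitted conditional mean E(y | x)
print("estimated coefficients:", coeffs)
print("mean squared error:", np.mean((y - y_hat) ** 2))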
3638aa4949d7f0f2b79fed0e829e56777c0eb9c0
|
gooey/tests/test_language_parity.py
|
gooey/tests/test_language_parity.py
|
import os
import unittest
import json
from collections import OrderedDict
from gooey import languages
from gooey.gui.processor import ProcessController
class TestLanguageParity(unittest.TestCase):
"""
Checks that all language files have the same set of keys so that non-english
languages don't silently break as features are added to Gooey.
"""
def test_languageParity(self):
langDir = os.path.dirname(languages.__file__)
englishFile = os.path.join(langDir, 'english.json')
english = self.readFile(englishFile)
jsonFiles = [(path, self.readFile(os.path.join(langDir, path)))
for path in os.listdir(langDir)
if path.endswith('json') and 'poooo' not in path and '2' not in path]
allKeys = set(english.keys())
for name, contents in jsonFiles:
missing = allKeys.difference(set(contents.keys()))
self.assertEqual(
set(),
missing,
"{} language file is missing keys: [{}]".format(name, missing)
)
def readFile(self, path):
with open(path, 'r', encoding='utf-8') as f:
return json.loads(f.read())
if __name__ == '__main__':
unittest.main()
|
Add regression test for language files
|
Add regression test for language files
|
Python
|
mit
|
partrita/Gooey,chriskiehl/Gooey,codingsnippets/Gooey
|
Add regression test for language files
|
import os
import unittest
import json
from collections import OrderedDict
from gooey import languages
from gooey.gui.processor import ProcessController
class TestLanguageParity(unittest.TestCase):
"""
Checks that all language files have the same set of keys so that non-english
languages don't silently break as features are added to Gooey.
"""
def test_languageParity(self):
langDir = os.path.dirname(languages.__file__)
englishFile = os.path.join(langDir, 'english.json')
english = self.readFile(englishFile)
jsonFiles = [(path, self.readFile(os.path.join(langDir, path)))
for path in os.listdir(langDir)
if path.endswith('json') and 'poooo' not in path and '2' not in path]
allKeys = set(english.keys())
for name, contents in jsonFiles:
missing = allKeys.difference(set(contents.keys()))
self.assertEqual(
set(),
missing,
"{} language file is missing keys: [{}]".format(name, missing)
)
def readFile(self, path):
with open(path, 'r', encoding='utf-8') as f:
return json.loads(f.read())
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add regression test for language files<commit_after>
|
import os
import unittest
import json
from collections import OrderedDict
from gooey import languages
from gooey.gui.processor import ProcessController
class TestLanguageParity(unittest.TestCase):
"""
Checks that all language files have the same set of keys so that non-english
languages don't silently break as features are added to Gooey.
"""
def test_languageParity(self):
langDir = os.path.dirname(languages.__file__)
englishFile = os.path.join(langDir, 'english.json')
english = self.readFile(englishFile)
jsonFiles = [(path, self.readFile(os.path.join(langDir, path)))
for path in os.listdir(langDir)
if path.endswith('json') and 'poooo' not in path and '2' not in path]
allKeys = set(english.keys())
for name, contents in jsonFiles:
missing = allKeys.difference(set(contents.keys()))
self.assertEqual(
set(),
missing,
"{} language file is missing keys: [{}]".format(name, missing)
)
def readFile(self, path):
with open(path, 'r', encoding='utf-8') as f:
return json.loads(f.read())
if __name__ == '__main__':
unittest.main()
|
Add regression test for language filesimport os
import unittest
import json
from collections import OrderedDict
from gooey import languages
from gooey.gui.processor import ProcessController
class TestLanguageParity(unittest.TestCase):
"""
Checks that all language files have the same set of keys so that non-english
languages don't silently break as features are added to Gooey.
"""
def test_languageParity(self):
langDir = os.path.dirname(languages.__file__)
englishFile = os.path.join(langDir, 'english.json')
english = self.readFile(englishFile)
jsonFiles = [(path, self.readFile(os.path.join(langDir, path)))
for path in os.listdir(langDir)
if path.endswith('json') and 'poooo' not in path and '2' not in path]
allKeys = set(english.keys())
for name, contents in jsonFiles:
missing = allKeys.difference(set(contents.keys()))
self.assertEqual(
set(),
missing,
"{} language file is missing keys: [{}]".format(name, missing)
)
def readFile(self, path):
with open(path, 'r', encoding='utf-8') as f:
return json.loads(f.read())
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add regression test for language files<commit_after>import os
import unittest
import json
from collections import OrderedDict
from gooey import languages
from gooey.gui.processor import ProcessController
class TestLanguageParity(unittest.TestCase):
"""
Checks that all language files have the same set of keys so that non-english
languages don't silently break as features are added to Gooey.
"""
def test_languageParity(self):
langDir = os.path.dirname(languages.__file__)
englishFile = os.path.join(langDir, 'english.json')
english = self.readFile(englishFile)
jsonFiles = [(path, self.readFile(os.path.join(langDir, path)))
for path in os.listdir(langDir)
if path.endswith('json') and 'poooo' not in path and '2' not in path]
allKeys = set(english.keys())
for name, contents in jsonFiles:
missing = allKeys.difference(set(contents.keys()))
self.assertEqual(
set(),
missing,
"{} language file is missing keys: [{}]".format(name, missing)
)
def readFile(self, path):
with open(path, 'r', encoding='utf-8') as f:
return json.loads(f.read())
if __name__ == '__main__':
unittest.main()
|
|
f4715ec87f03cda417a33a5fba901229f6687ce2
|
gdb/vectorwrapper.py
|
gdb/vectorwrapper.py
|
#
# Copyright 2015-2017 Michele "King_DuckZ" Santullo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gdb.printing
#NOTE:
#This pretty-printer has been written for vectorwrapper + std::array.
#You will have to customize it if your wrapped type is different.
class VectorWrapperPrinter(object):
"Print values in a VectorWrapper"
class _iterator(object):
def __init__(self, start, size):
self.item = start
self.pos = 0
self.size = size
def __iter__(self):
return self
def __next__(self):
if self.pos == self.size:
raise StopIteration
elt = self.item.dereference()
self.item = self.item + 1
self.pos = self.pos + 1
return str(elt)
def __init__(self, val):
self.val = val
self.dimensions = int(val.type.template_argument(1))
def to_string(self):
#get the scalar[n] value
stdarray_elems = self.val['m_wrapped']['_M_elems']
#get the actual scalar type
elem_type = self.val['m_wrapped'].type.template_argument(0)
#cast scalar[n] to scalar*
ptr = stdarray_elems.address.cast(elem_type.pointer())
#iterate over the values in scalar* and str() them
retval = "vec" + str(self.dimensions) + "<" + \
", ".join(
str(itm) for itm in self._iterator(ptr, self.dimensions)
) + ">"
return retval
def display_hint(self):
return 'string'
def build_pretty_printer():
pp = gdb.printing.RegexpCollectionPrettyPrinter(__name__)
#add your namespace before 'vwr' if you're using a custom one
pp.add_printer('vwr', '^vwr::Vec<.+$', VectorWrapperPrinter)
return pp
gdb.printing.register_pretty_printer(
gdb.current_objfile(),
build_pretty_printer()
)
|
Add a sample gdb pretty printer for Vec<std::array> types
|
Add a sample gdb pretty printer for Vec<std::array> types
|
Python
|
apache-2.0
|
KingDuckZ/vectorwrapper,KingDuckZ/vectorwrapper,KingDuckZ/vectorwrapper,KingDuckZ/vectorwrapper
|
Add a sample gdb pretty printer for Vec<std::array> types
|
#
# Copyright 2015-2017 Michele "King_DuckZ" Santullo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gdb.printing
#NOTE:
#This pretty-printer has been written for vectorwrapper + std::array.
#You will have to customize it if your wrapped type is different.
class VectorWrapperPrinter(object):
"Print values in a VectorWrapper"
class _iterator(object):
def __init__(self, start, size):
self.item = start
self.pos = 0
self.size = size
def __iter__(self):
return self
def __next__(self):
if self.pos == self.size:
raise StopIteration
elt = self.item.dereference()
self.item = self.item + 1
self.pos = self.pos + 1
return str(elt)
def __init__(self, val):
self.val = val
self.dimensions = int(val.type.template_argument(1))
def to_string(self):
#get the scalar[n] value
stdarray_elems = self.val['m_wrapped']['_M_elems']
#get the actual scalar type
elem_type = self.val['m_wrapped'].type.template_argument(0)
#cast scalar[n] to scalar*
ptr = stdarray_elems.address.cast(elem_type.pointer())
#iterate over the values in scalar* and str() them
retval = "vec" + str(self.dimensions) + "<" + \
", ".join(
str(itm) for itm in self._iterator(ptr, self.dimensions)
) + ">"
return retval
def display_hint(self):
return 'string'
def build_pretty_printer():
pp = gdb.printing.RegexpCollectionPrettyPrinter(__name__)
#add your namespace before 'vwr' if you're using a custom one
pp.add_printer('vwr', '^vwr::Vec<.+$', VectorWrapperPrinter)
return pp
gdb.printing.register_pretty_printer(
gdb.current_objfile(),
build_pretty_printer()
)
|
<commit_before><commit_msg>Add a sample gdb pretty printer for Vec<std::array> types<commit_after>
|
#
# Copyright 2015-2017 Michele "King_DuckZ" Santullo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gdb.printing
#NOTE:
#This pretty-printer has been written for vectorwrapper + std::array.
#You will have to customize it if your wrapped type is different.
class VectorWrapperPrinter(object):
"Print values in a VectorWrapper"
class _iterator(object):
def __init__(self, start, size):
self.item = start
self.pos = 0
self.size = size
def __iter__(self):
return self
def __next__(self):
if self.pos == self.size:
raise StopIteration
elt = self.item.dereference()
self.item = self.item + 1
self.pos = self.pos + 1
return str(elt)
def __init__(self, val):
self.val = val
self.dimensions = int(val.type.template_argument(1))
def to_string(self):
#get the scalar[n] value
stdarray_elems = self.val['m_wrapped']['_M_elems']
#get the actual scalar type
elem_type = self.val['m_wrapped'].type.template_argument(0)
#cast scalar[n] to scalar*
ptr = stdarray_elems.address.cast(elem_type.pointer())
#iterate over the values in scalar* and str() them
retval = "vec" + str(self.dimensions) + "<" + \
", ".join(
str(itm) for itm in self._iterator(ptr, self.dimensions)
) + ">"
return retval
def display_hint(self):
return 'string'
def build_pretty_printer():
pp = gdb.printing.RegexpCollectionPrettyPrinter(__name__)
#add your namespace before 'vwr' if you're using a custom one
pp.add_printer('vwr', '^vwr::Vec<.+$', VectorWrapperPrinter)
return pp
gdb.printing.register_pretty_printer(
gdb.current_objfile(),
build_pretty_printer()
)
|
Add a sample gdb pretty printer for Vec<std::array> types #
# Copyright 2015-2017 Michele "King_DuckZ" Santullo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gdb.printing
#NOTE:
#This pretty-printer has been written for vectorwrapper + std::array.
#You will have to customize it if your wrapped type is different.
class VectorWrapperPrinter(object):
"Print values in a VectorWrapper"
class _iterator(object):
def __init__(self, start, size):
self.item = start
self.pos = 0
self.size = size
def __iter__(self):
return self
def __next__(self):
if self.pos == self.size:
raise StopIteration
elt = self.item.dereference()
self.item = self.item + 1
self.pos = self.pos + 1
return str(elt)
def __init__(self, val):
self.val = val
self.dimensions = int(val.type.template_argument(1))
def to_string(self):
#get the scalar[n] value
stdarray_elems = self.val['m_wrapped']['_M_elems']
#get the actual scalar type
elem_type = self.val['m_wrapped'].type.template_argument(0)
#cast scalar[n] to scalar*
ptr = stdarray_elems.address.cast(elem_type.pointer())
#iterate over the values in scalar* and str() them
retval = "vec" + str(self.dimensions) + "<" + \
", ".join(
str(itm) for itm in self._iterator(ptr, self.dimensions)
) + ">"
return retval
def display_hint(self):
return 'string'
def build_pretty_printer():
pp = gdb.printing.RegexpCollectionPrettyPrinter(__name__)
#add your namespace before 'vwr' if you're using a custom one
pp.add_printer('vwr', '^vwr::Vec<.+$', VectorWrapperPrinter)
return pp
gdb.printing.register_pretty_printer(
gdb.current_objfile(),
build_pretty_printer()
)
|
<commit_before><commit_msg>Add a sample gdb pretty printer for Vec<std::array> types<commit_after> #
# Copyright 2015-2017 Michele "King_DuckZ" Santullo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gdb.printing
#NOTE:
#This pretty-printer has been written for vectorwrapper + std::array.
#You will have to customize it if your wrapped type is different.
class VectorWrapperPrinter(object):
"Print values in a VectorWrapper"
class _iterator(object):
def __init__(self, start, size):
self.item = start
self.pos = 0
self.size = size
def __iter__(self):
return self
def __next__(self):
if self.pos == self.size:
raise StopIteration
elt = self.item.dereference()
self.item = self.item + 1
self.pos = self.pos + 1
return str(elt)
def __init__(self, val):
self.val = val
self.dimensions = int(val.type.template_argument(1))
def to_string(self):
#get the scalar[n] value
stdarray_elems = self.val['m_wrapped']['_M_elems']
#get the actual scalar type
elem_type = self.val['m_wrapped'].type.template_argument(0)
#cast scalar[n] to scalar*
ptr = stdarray_elems.address.cast(elem_type.pointer())
#iterate over the values in scalar* and str() them
retval = "vec" + str(self.dimensions) + "<" + \
", ".join(
str(itm) for itm in self._iterator(ptr, self.dimensions)
) + ">"
return retval
def display_hint(self):
return 'string'
def build_pretty_printer():
pp = gdb.printing.RegexpCollectionPrettyPrinter(__name__)
#add your namespace before 'vwr' if you're using a custom one
pp.add_printer('vwr', '^vwr::Vec<.+$', VectorWrapperPrinter)
return pp
gdb.printing.register_pretty_printer(
gdb.current_objfile(),
build_pretty_printer()
)
|
|
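The printer above only attaches to types whose demangled name matches the regex '^vwr::Vec<.+$' (the note in the code says to prepend a custom namespace if one is used). A small check outside gdb, using made-up type names, shows which strings that pattern accepts:

import re
pattern = re.compile(r'^vwr::Vec<.+$')
# hypothetical demangled type names as gdb might report them
candidates = [
    'vwr::Vec<std::array<float, 3ul>, 3ul>',      # matches
    'myns::vwr::Vec<std::array<int, 2ul>, 2ul>',  # only matches if the regex is extended with the namespace
    'std::vector<float>',                         # does not match
]
for name in candidates:
    print(name, '->', bool(pattern.match(name)))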
e407b99d0d1f89bab3d90fa519631b0088735f4e
|
InvenTree/part/test_migrations.py
|
InvenTree/part/test_migrations.py
|
"""
Unit tests for the part model database migrations
"""
from django_test_migrations.contrib.unittest_case import MigratorTestCase
from InvenTree import helpers
class TestForwardMigrations(MigratorTestCase):
"""
Test entire schema migration sequence for the part app
"""
migrate_from = ('part', helpers.getOldestMigrationFile('part'))
migrate_to = ('part', helpers.getNewestMigrationFile('part'))
def prepare(self):
"""
Create initial data
"""
Part = self.old_state.apps.get_model('part', 'part')
Part.objects.create(name='A', description='My part A')
Part.objects.create(name='B', description='My part B')
Part.objects.create(name='C', description='My part C')
Part.objects.create(name='D', description='My part D')
Part.objects.create(name='E', description='My part E')
# Extract one part object to investigate
p = Part.objects.all().last()
# Initially some fields are not present
with self.assertRaises(AttributeError):
print(p.has_variants)
with self.assertRaises(AttributeError):
print(p.is_template)
def test_models_exist(self):
Part = self.new_state.apps.get_model('part', 'part')
self.assertEqual(Part.objects.count(), 5)
for part in Part.objects.all():
part.is_template = True
part.save()
part.is_template = False
part.save()
|
Add initial migration unit test for the 'part' app
|
Add initial migration unit test for the 'part' app
|
Python
|
mit
|
inventree/InvenTree,SchrodingersGat/InvenTree,inventree/InvenTree,SchrodingersGat/InvenTree,inventree/InvenTree,inventree/InvenTree,SchrodingersGat/InvenTree,SchrodingersGat/InvenTree
|
Add initial migration unit test for the 'part' app
|
"""
Unit tests for the part model database migrations
"""
from django_test_migrations.contrib.unittest_case import MigratorTestCase
from InvenTree import helpers
class TestForwardMigrations(MigratorTestCase):
"""
Test entire schema migration sequence for the part app
"""
migrate_from = ('part', helpers.getOldestMigrationFile('part'))
migrate_to = ('part', helpers.getNewestMigrationFile('part'))
def prepare(self):
"""
Create initial data
"""
Part = self.old_state.apps.get_model('part', 'part')
Part.objects.create(name='A', description='My part A')
Part.objects.create(name='B', description='My part B')
Part.objects.create(name='C', description='My part C')
Part.objects.create(name='D', description='My part D')
Part.objects.create(name='E', description='My part E')
# Extract one part object to investigate
p = Part.objects.all().last()
# Initially some fields are not present
with self.assertRaises(AttributeError):
print(p.has_variants)
with self.assertRaises(AttributeError):
print(p.is_template)
def test_models_exist(self):
Part = self.new_state.apps.get_model('part', 'part')
self.assertEqual(Part.objects.count(), 5)
for part in Part.objects.all():
part.is_template = True
part.save()
part.is_template = False
part.save()
|
<commit_before><commit_msg>Add initial migration unit test for the 'part' app<commit_after>
|
"""
Unit tests for the part model database migrations
"""
from django_test_migrations.contrib.unittest_case import MigratorTestCase
from InvenTree import helpers
class TestForwardMigrations(MigratorTestCase):
"""
Test entire schema migration sequence for the part app
"""
migrate_from = ('part', helpers.getOldestMigrationFile('part'))
migrate_to = ('part', helpers.getNewestMigrationFile('part'))
def prepare(self):
"""
Create initial data
"""
Part = self.old_state.apps.get_model('part', 'part')
Part.objects.create(name='A', description='My part A')
Part.objects.create(name='B', description='My part B')
Part.objects.create(name='C', description='My part C')
Part.objects.create(name='D', description='My part D')
Part.objects.create(name='E', description='My part E')
# Extract one part object to investigate
p = Part.objects.all().last()
# Initially some fields are not present
with self.assertRaises(AttributeError):
print(p.has_variants)
with self.assertRaises(AttributeError):
print(p.is_template)
def test_models_exist(self):
Part = self.new_state.apps.get_model('part', 'part')
self.assertEqual(Part.objects.count(), 5)
for part in Part.objects.all():
part.is_template = True
part.save()
part.is_template = False
part.save()
|
Add initial migration unit test for the 'part' app"""
Unit tests for the part model database migrations
"""
from django_test_migrations.contrib.unittest_case import MigratorTestCase
from InvenTree import helpers
class TestForwardMigrations(MigratorTestCase):
"""
Test entire schema migration sequence for the part app
"""
migrate_from = ('part', helpers.getOldestMigrationFile('part'))
migrate_to = ('part', helpers.getNewestMigrationFile('part'))
def prepare(self):
"""
Create initial data
"""
Part = self.old_state.apps.get_model('part', 'part')
Part.objects.create(name='A', description='My part A')
Part.objects.create(name='B', description='My part B')
Part.objects.create(name='C', description='My part C')
Part.objects.create(name='D', description='My part D')
Part.objects.create(name='E', description='My part E')
# Extract one part object to investigate
p = Part.objects.all().last()
# Initially some fields are not present
with self.assertRaises(AttributeError):
print(p.has_variants)
with self.assertRaises(AttributeError):
print(p.is_template)
def test_models_exist(self):
Part = self.new_state.apps.get_model('part', 'part')
self.assertEqual(Part.objects.count(), 5)
for part in Part.objects.all():
part.is_template = True
part.save()
part.is_template = False
part.save()
|
<commit_before><commit_msg>Add initial migration unit test for the 'part' app<commit_after>"""
Unit tests for the part model database migrations
"""
from django_test_migrations.contrib.unittest_case import MigratorTestCase
from InvenTree import helpers
class TestForwardMigrations(MigratorTestCase):
"""
Test entire schema migration sequence for the part app
"""
migrate_from = ('part', helpers.getOldestMigrationFile('part'))
migrate_to = ('part', helpers.getNewestMigrationFile('part'))
def prepare(self):
"""
Create initial data
"""
Part = self.old_state.apps.get_model('part', 'part')
Part.objects.create(name='A', description='My part A')
Part.objects.create(name='B', description='My part B')
Part.objects.create(name='C', description='My part C')
Part.objects.create(name='D', description='My part D')
Part.objects.create(name='E', description='My part E')
# Extract one part object to investigate
p = Part.objects.all().last()
# Initially some fields are not present
with self.assertRaises(AttributeError):
print(p.has_variants)
with self.assertRaises(AttributeError):
print(p.is_template)
def test_models_exist(self):
Part = self.new_state.apps.get_model('part', 'part')
self.assertEqual(Part.objects.count(), 5)
for part in Part.objects.all():
part.is_template = True
part.save()
part.is_template = False
part.save()
|
|
9638f0e932f6efb65b8a1dd8dc965cdffd99cc39
|
kdvadmin/migrations/0001_initial.py
|
kdvadmin/migrations/0001_initial.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-10-18 21:24
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='KDVPricing',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.PositiveIntegerField(verbose_name='Preis in Cent')),
('quantity', models.PositiveIntegerField(verbose_name='Anzahl im Vorrat')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'db_table': 'pricings',
},
),
migrations.CreateModel(
name='KDVProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=255, null=True, verbose_name='Produktname')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'verbose_name': 'KDV-Produkt',
'verbose_name_plural': 'KDV-Produkte',
'db_table': 'products',
},
),
migrations.CreateModel(
name='KDVProductBarcode',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(blank=True, max_length=255, null=True, verbose_name='Barcode')),
('identifiable_type', models.CharField(default='Product', max_length=255)),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
('identifiable', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct')),
],
options={
'db_table': 'kdv_product_identifiers',
},
),
migrations.AddField(
model_name='kdvpricing',
name='product',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct'),
),
]
|
Add missing migration for kdvadmin
|
Add missing migration for kdvadmin
|
Python
|
agpl-3.0
|
d120/kifplan,d120/kifplan,d120/kifplan
|
Add missing migration for kdvadmin
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-10-18 21:24
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='KDVPricing',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.PositiveIntegerField(verbose_name='Preis in Cent')),
('quantity', models.PositiveIntegerField(verbose_name='Anzahl im Vorrat')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'db_table': 'pricings',
},
),
migrations.CreateModel(
name='KDVProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=255, null=True, verbose_name='Produktname')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'verbose_name': 'KDV-Produkt',
'verbose_name_plural': 'KDV-Produkte',
'db_table': 'products',
},
),
migrations.CreateModel(
name='KDVProductBarcode',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(blank=True, max_length=255, null=True, verbose_name='Barcode')),
('identifiable_type', models.CharField(default='Product', max_length=255)),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
('identifiable', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct')),
],
options={
'db_table': 'kdv_product_identifiers',
},
),
migrations.AddField(
model_name='kdvpricing',
name='product',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct'),
),
]
|
<commit_before><commit_msg>Add missing migration for kdvadmin<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-10-18 21:24
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='KDVPricing',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.PositiveIntegerField(verbose_name='Preis in Cent')),
('quantity', models.PositiveIntegerField(verbose_name='Anzahl im Vorrat')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'db_table': 'pricings',
},
),
migrations.CreateModel(
name='KDVProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=255, null=True, verbose_name='Produktname')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'verbose_name': 'KDV-Produkt',
'verbose_name_plural': 'KDV-Produkte',
'db_table': 'products',
},
),
migrations.CreateModel(
name='KDVProductBarcode',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(blank=True, max_length=255, null=True, verbose_name='Barcode')),
('identifiable_type', models.CharField(default='Product', max_length=255)),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
('identifiable', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct')),
],
options={
'db_table': 'kdv_product_identifiers',
},
),
migrations.AddField(
model_name='kdvpricing',
name='product',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct'),
),
]
|
Add missing migration for kdvadmin# -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-10-18 21:24
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='KDVPricing',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.PositiveIntegerField(verbose_name='Preis in Cent')),
('quantity', models.PositiveIntegerField(verbose_name='Anzahl im Vorrat')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'db_table': 'pricings',
},
),
migrations.CreateModel(
name='KDVProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=255, null=True, verbose_name='Produktname')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'verbose_name': 'KDV-Produkt',
'verbose_name_plural': 'KDV-Produkte',
'db_table': 'products',
},
),
migrations.CreateModel(
name='KDVProductBarcode',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(blank=True, max_length=255, null=True, verbose_name='Barcode')),
('identifiable_type', models.CharField(default='Product', max_length=255)),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
('identifiable', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct')),
],
options={
'db_table': 'kdv_product_identifiers',
},
),
migrations.AddField(
model_name='kdvpricing',
name='product',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct'),
),
]
|
<commit_before><commit_msg>Add missing migration for kdvadmin<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.11.16 on 2018-10-18 21:24
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='KDVPricing',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('price', models.PositiveIntegerField(verbose_name='Preis in Cent')),
('quantity', models.PositiveIntegerField(verbose_name='Anzahl im Vorrat')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'db_table': 'pricings',
},
),
migrations.CreateModel(
name='KDVProduct',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(blank=True, max_length=255, null=True, verbose_name='Produktname')),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
],
options={
'verbose_name': 'KDV-Produkt',
'verbose_name_plural': 'KDV-Produkte',
'db_table': 'products',
},
),
migrations.CreateModel(
name='KDVProductBarcode',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('code', models.CharField(blank=True, max_length=255, null=True, verbose_name='Barcode')),
('identifiable_type', models.CharField(default='Product', max_length=255)),
('created_at', models.DateTimeField(auto_now_add=True, null=True)),
('updated_at', models.DateTimeField(auto_now=True, null=True)),
('identifiable', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct')),
],
options={
'db_table': 'kdv_product_identifiers',
},
),
migrations.AddField(
model_name='kdvpricing',
name='product',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='kdvadmin.KDVProduct'),
),
]
|
|
70fb8e9055e106e66f330e3c70b85c977dce53d8
|
salt/utils/dicttrim.py
|
salt/utils/dicttrim.py
|
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
import sys
import salt.payload
def trim_dict(
data,
max_dict_bytes,
percent=50.0,
stepper_size=10,
replace_with='VALUE_TRIMMED',
is_msgpacked=False):
'''
Takes a dictionary and iterates over its keys, looking for
large values and replacing them with a trimmed string.
If after the first pass over dictionary keys, the dictionary
is not sufficiently small, the stepper_size will be increased
and the dictionary will be rescanned. This allows for progressive
scanning, removing large items first and only making additional
passes for smaller items if necessary.
This function uses msgpack to calculate the size of the dictionary
in question. While this might seem like unnecessary overhead, a
data structure in python must be serialized in order for sys.getsizeof()
to accurately return the items referenced in the structure.
Ex:
>>> salt.utils.trim_dict({'a': 'b', 'c': 'x' * 10000}, 100)
{'a': 'b', 'c': 'VALUE_TRIMMED'}
To improve performance, it is advisable to pass in msgpacked
data structures instead of raw dictionaries. If a msgpack
structure is passed in, it will not be unserialized unless
necessary.
If a msgpack is passed in, it will be repacked if necessary
before being returned.
'''
serializer = salt.payload.Serial({'serial': 'msgpack'})
if is_msgpacked:
dict_size = sys.getsizeof(data)
else:
dict_size = sys.getsizeof(serializer.dumps(data))
if dict_size > max_dict_bytes:
if is_msgpacked:
data = serializer.loads(data)
while True:
percent = float(percent)
max_val_size = float(max_dict_bytes * (percent / 100))
try:
for key in data:
if sys.getsizeof(data[key]) > max_val_size:
data[key] = replace_with
percent = percent - stepper_size
max_val_size = float(max_dict_bytes * (percent / 100))
cur_dict_size = sys.getsizeof(serializer.dumps(data))
if cur_dict_size < max_dict_bytes:
if is_msgpacked: # Repack it
return serializer.dumps(data)
else:
return data
elif max_val_size == 0:
if is_msgpacked:
return serializer.dumps(data)
else:
return data
except ValueError:
pass
if is_msgpacked:
return serializer.dumps(data)
else:
return data
else:
return data
|
Add the needed file :]
|
Add the needed file :]
|
Python
|
apache-2.0
|
saltstack/salt,saltstack/salt,saltstack/salt,saltstack/salt,saltstack/salt
|
Add the needed file :]
|
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
import sys
import salt.payload
def trim_dict(
data,
max_dict_bytes,
percent=50.0,
stepper_size=10,
replace_with='VALUE_TRIMMED',
is_msgpacked=False):
'''
Takes a dictionary and iterates over its keys, looking for
large values and replacing them with a trimmed string.
If after the first pass over dictionary keys, the dictionary
is not sufficiently small, the stepper_size will be increased
and the dictionary will be rescanned. This allows for progressive
scanning, removing large items first and only making additional
passes for smaller items if necessary.
This function uses msgpack to calculate the size of the dictionary
in question. While this might seem like unnecessary overhead, a
data structure in python must be serialized in order for sys.getsizeof()
to accurately return the items referenced in the structure.
Ex:
>>> salt.utils.trim_dict({'a': 'b', 'c': 'x' * 10000}, 100)
{'a': 'b', 'c': 'VALUE_TRIMMED'}
To improve performance, it is advisable to pass in msgpacked
data structures instead of raw dictionaries. If a msgpack
structure is passed in, it will not be unserialized unless
necessary.
If a msgpack is passed in, it will be repacked if necessary
before being returned.
'''
serializer = salt.payload.Serial({'serial': 'msgpack'})
if is_msgpacked:
dict_size = sys.getsizeof(data)
else:
dict_size = sys.getsizeof(serializer.dumps(data))
if dict_size > max_dict_bytes:
if is_msgpacked:
data = serializer.loads(data)
while True:
percent = float(percent)
max_val_size = float(max_dict_bytes * (percent / 100))
try:
for key in data:
if sys.getsizeof(data[key]) > max_val_size:
data[key] = replace_with
percent = percent - stepper_size
max_val_size = float(max_dict_bytes * (percent / 100))
cur_dict_size = sys.getsizeof(serializer.dumps(data))
if cur_dict_size < max_dict_bytes:
if is_msgpacked: # Repack it
return serializer.dumps(data)
else:
return data
elif max_val_size == 0:
if is_msgpacked:
return serializer.dumps(data)
else:
return data
except ValueError:
pass
if is_msgpacked:
return serializer.dumps(data)
else:
return data
else:
return data
|
<commit_before><commit_msg>Add the needed file :]<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
import sys
import salt.payload
def trim_dict(
data,
max_dict_bytes,
percent=50.0,
stepper_size=10,
replace_with='VALUE_TRIMMED',
is_msgpacked=False):
'''
Takes a dictionary and iterates over its keys, looking for
large values and replacing them with a trimmed string.
If after the first pass over dictionary keys, the dictionary
is not sufficiently small, the stepper_size will be increased
and the dictionary will be rescanned. This allows for progressive
scanning, removing large items first and only making additional
passes for smaller items if necessary.
This function uses msgpack to calculate the size of the dictionary
in question. While this might seem like unnecessary overhead, a
data structure in python must be serialized in order for sys.getsizeof()
to accurately return the items referenced in the structure.
Ex:
>>> salt.utils.trim_dict({'a': 'b', 'c': 'x' * 10000}, 100)
{'a': 'b', 'c': 'VALUE_TRIMMED'}
To improve performance, it is advisable to pass in msgpacked
data structures instead of raw dictionaries. If a msgpack
structure is passed in, it will not be unserialized unless
necessary.
If a msgpack is passed in, it will be repacked if necessary
before being returned.
'''
serializer = salt.payload.Serial({'serial': 'msgpack'})
if is_msgpacked:
dict_size = sys.getsizeof(data)
else:
dict_size = sys.getsizeof(serializer.dumps(data))
if dict_size > max_dict_bytes:
if is_msgpacked:
data = serializer.loads(data)
while True:
percent = float(percent)
max_val_size = float(max_dict_bytes * (percent / 100))
try:
for key in data:
if sys.getsizeof(data[key]) > max_val_size:
data[key] = replace_with
percent = percent - stepper_size
max_val_size = float(max_dict_bytes * (percent / 100))
cur_dict_size = sys.getsizeof(serializer.dumps(data))
if cur_dict_size < max_dict_bytes:
if is_msgpacked: # Repack it
return serializer.dumps(data)
else:
return data
elif max_val_size == 0:
if is_msgpacked:
return serializer.dumps(data)
else:
return data
except ValueError:
pass
if is_msgpacked:
return serializer.dumps(data)
else:
return data
else:
return data
|
Add the needed file :]# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
import sys
import salt.payload
def trim_dict(
data,
max_dict_bytes,
percent=50.0,
stepper_size=10,
replace_with='VALUE_TRIMMED',
is_msgpacked=False):
'''
Takes a dictionary and iterates over its keys, looking for
large values and replacing them with a trimmed string.
If after the first pass over dictionary keys, the dictionary
is not sufficiently small, the stepper_size will be increased
and the dictionary will be rescanned. This allows for progressive
scanning, removing large items first and only making additional
passes for smaller items if necessary.
This function uses msgpack to calculate the size of the dictionary
in question. While this might seem like unnecessary overhead, a
data structure in python must be serialized in order for sys.getsizeof()
to accurately return the items referenced in the structure.
Ex:
>>> salt.utils.trim_dict({'a': 'b', 'c': 'x' * 10000}, 100)
{'a': 'b', 'c': 'VALUE_TRIMMED'}
To improve performance, it is advisable to pass in msgpacked
data structures instead of raw dictionaries. If a msgpack
structure is passed in, it will not be unserialized unless
necessary.
If a msgpack is passed in, it will be repacked if necessary
before being returned.
'''
serializer = salt.payload.Serial({'serial': 'msgpack'})
if is_msgpacked:
dict_size = sys.getsizeof(data)
else:
dict_size = sys.getsizeof(serializer.dumps(data))
if dict_size > max_dict_bytes:
if is_msgpacked:
data = serializer.loads(data)
while True:
percent = float(percent)
max_val_size = float(max_dict_bytes * (percent / 100))
try:
for key in data:
if sys.getsizeof(data[key]) > max_val_size:
data[key] = replace_with
percent = percent - stepper_size
max_val_size = float(max_dict_bytes * (percent / 100))
cur_dict_size = sys.getsizeof(serializer.dumps(data))
if cur_dict_size < max_dict_bytes:
if is_msgpacked: # Repack it
return serializer.dumps(data)
else:
return data
elif max_val_size == 0:
if is_msgpacked:
return serializer.dumps(data)
else:
return data
except ValueError:
pass
if is_msgpacked:
return serializer.dumps(data)
else:
return data
else:
return data
|
<commit_before><commit_msg>Add the needed file :]<commit_after># -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
import sys
import salt.payload
def trim_dict(
data,
max_dict_bytes,
percent=50.0,
stepper_size=10,
replace_with='VALUE_TRIMMED',
is_msgpacked=False):
'''
Takes a dictionary and iterates over its keys, looking for
large values and replacing them with a trimmed string.
If after the first pass over dictionary keys, the dictionary
is not sufficiently small, the stepper_size will be increased
and the dictionary will be rescanned. This allows for progressive
scanning, removing large items first and only making additional
passes for smaller items if necessary.
This function uses msgpack to calculate the size of the dictionary
in question. While this might seem like unnecessary overhead, a
data structure in python must be serialized in order for sys.getsizeof()
to accurately return the items referenced in the structure.
Ex:
>>> salt.utils.trim_dict({'a': 'b', 'c': 'x' * 10000}, 100)
{'a': 'b', 'c': 'VALUE_TRIMMED'}
To improve performance, it is advisable to pass in msgpacked
data structures instead of raw dictionaries. If a msgpack
structure is passed in, it will not be unserialized unless
necessary.
If a msgpack is passed in, it will be repacked if necessary
before being returned.
'''
serializer = salt.payload.Serial({'serial': 'msgpack'})
if is_msgpacked:
dict_size = sys.getsizeof(data)
else:
dict_size = sys.getsizeof(serializer.dumps(data))
if dict_size > max_dict_bytes:
if is_msgpacked:
data = serializer.loads(data)
while True:
percent = float(percent)
max_val_size = float(max_dict_bytes * (percent / 100))
try:
for key in data:
if sys.getsizeof(data[key]) > max_val_size:
data[key] = replace_with
percent = percent - stepper_size
max_val_size = float(max_dict_bytes * (percent / 100))
cur_dict_size = sys.getsizeof(serializer.dumps(data))
if cur_dict_size < max_dict_bytes:
if is_msgpacked: # Repack it
return serializer.dumps(data)
else:
return data
elif max_val_size == 0:
if is_msgpacked:
return serializer.dumps(data)
else:
return data
except ValueError:
pass
if is_msgpacked:
return serializer.dumps(data)
else:
return data
else:
return data
|
|
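Independent of salt's msgpack serializer, the progressive trimming loop in trim_dict can be illustrated with a self-contained sketch; using len(json.dumps(...)) as the size check is an assumption made here purely for illustration:

import json
def trim_large_values(data, max_bytes, percent=50.0, stepper_size=10,
                      replace_with='VALUE_TRIMMED'):
    # lower the per-value threshold step by step until the whole dict fits
    while percent > 0:
        max_val_size = max_bytes * (percent / 100.0)
        for key in data:
            if len(json.dumps(data[key])) > max_val_size:
                data[key] = replace_with
        if len(json.dumps(data)) < max_bytes:
            break
        percent -= stepper_size
    return data
print(trim_large_values({'a': 'b', 'c': 'x' * 10000}, 100))
# prints: {'a': 'b', 'c': 'VALUE_TRIMMED'}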
b371f277de0f4df99663b8b9e49f187f54012c79
|
development/build_log_simplifier.py
|
development/build_log_simplifier.py
|
#!/usr/bin/python3
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
def usage():
print("""USAGE:
Simplifies a build.log from hundreds of megabytes to <100 lines. Prints output to terminal.
Pass this script a filepath to parse. You should be able to type "python3 build_log_simplifier.py"
And then drag-and-drop a log file onto the terminal window to get its path.
Sample usage: python3 development/build_log_simplifier.py Users/owengray/Desktop/build.log
""")
exit(1)
try:
build_log_loc = sys.argv[1]
infile = open(build_log_loc)
tasks_of_interest = []
# first, find tasks of interest
for line in infile:
if line.startswith("Execution failed for task"):
tasks_of_interest.append(line.split("task '")[1][:-3])
infile.close()
infile = open(build_log_loc)
print("Tasks of interest: " + str(tasks_of_interest))
# next, save all excerpts between start(interesting task) and end(interesting task)
current_interesting_tasks = []
retained_lines = []
for line in infile:
if line.startswith("Task ") and line.split(" ")[1] in tasks_of_interest:
if line.split(" ")[-1].strip() == "Starting":
current_interesting_tasks.append(line.split(" ")[1])
elif line.split(" ")[-1].strip() == "Finished":
current_interesting_tasks.remove(line.split(" ")[1])
retained_lines.append(line)
if current_interesting_tasks: retained_lines.append(line)
print(len(retained_lines))
print(''.join(retained_lines))
except Exception as e:
print("An error occurred! "+str(e))
usage()
|
Add a script for simplifying build.log files
|
Add a script for simplifying build.log files
Change-Id: Icfc409816b7f7cd028c3022e7073741f60ff30b6
Test: run the script
Bug: b/155305020
|
Python
|
apache-2.0
|
androidx/androidx,androidx/androidx,androidx/androidx,AndroidX/androidx,androidx/androidx,AndroidX/androidx,androidx/androidx,AndroidX/androidx,androidx/androidx,androidx/androidx,AndroidX/androidx,androidx/androidx,AndroidX/androidx,AndroidX/androidx,androidx/androidx,AndroidX/androidx,AndroidX/androidx,AndroidX/androidx,androidx/androidx,AndroidX/androidx
|
Add a script for simplifying build.log files
Change-Id: Icfc409816b7f7cd028c3022e7073741f60ff30b6
Test: run the script
Bug: b/155305020
|
#!/usr/bin/python3
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
def usage():
print("""USAGE:
Simplifies a build.log from hundreds of megabytes to <100 lines. Prints output to terminal.
Pass this script a filepath to parse. You should be able to type "python3 build_log_simplifier.py"
And then drag-and-drop a log file onto the terminal window to get its path.
Sample usage: python3 development/build_log_simplifier.py Users/owengray/Desktop/build.log
""")
exit(1)
try:
build_log_loc = sys.argv[1]
infile = open(build_log_loc)
tasks_of_interest = []
# first, find tasks of interest
for line in infile:
if line.startswith("Execution failed for task"):
tasks_of_interest.append(line.split("task '")[1][:-3])
infile.close()
infile = open(build_log_loc)
print("Tasks of interest: " + str(tasks_of_interest))
# next, save all excerpts between start(interesting task) and end(interesting task)
current_interesting_tasks = []
retained_lines = []
for line in infile:
if line.startswith("Task ") and line.split(" ")[1] in tasks_of_interest:
if line.split(" ")[-1].strip() == "Starting":
current_interesting_tasks.append(line.split(" ")[1])
elif line.split(" ")[-1].strip() == "Finished":
current_interesting_tasks.remove(line.split(" ")[1])
retained_lines.append(line)
if current_interesting_tasks: retained_lines.append(line)
print(len(retained_lines))
print(''.join(retained_lines))
except Exception as e:
print("An error occurred! "+str(e))
usage()
|
<commit_before><commit_msg>Add a script for simplifying build.log files
Change-Id: Icfc409816b7f7cd028c3022e7073741f60ff30b6
Test: run the script
Bug: b/155305020<commit_after>
|
#!/usr/bin/python3
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
def usage():
print("""USAGE:
Simplifies a build.log from hundreds of megabytes to <100 lines. Prints output to terminal.
Pass this script a filepath to parse. You should be able to type "python3 build_log_simplifier.py"
And then drag-and-drop a log file onto the terminal window to get its path.
Sample usage: python3 development/build_log_simplifier.py Users/owengray/Desktop/build.log
""")
exit(1)
try:
build_log_loc = sys.argv[1]
infile = open(build_log_loc)
tasks_of_interest = []
# first, find tasks of interest
for line in infile:
if line.startswith("Execution failed for task"):
tasks_of_interest.append(line.split("task '")[1][:-3])
infile.close()
infile = open(build_log_loc)
print("Tasks of interest: " + str(tasks_of_interest))
# next, save all excerpts between start(interesting task) and end(interesting task)
current_interesting_tasks = []
retained_lines = []
for line in infile:
if line.startswith("Task ") and line.split(" ")[1] in tasks_of_interest:
if line.split(" ")[-1].strip() == "Starting":
current_interesting_tasks.append(line.split(" ")[1])
elif line.split(" ")[-1].strip() == "Finished":
current_interesting_tasks.remove(line.split(" ")[1])
retained_lines.append(line)
if current_interesting_tasks: retained_lines.append(line)
print(len(retained_lines))
print(''.join(retained_lines))
except Exception as e:
print("An error occurred! "+str(e))
usage()
|
Add a script for simplifying build.log files
Change-Id: Icfc409816b7f7cd028c3022e7073741f60ff30b6
Test: run the script
Bug: b/155305020#!/usr/bin/python3
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
def usage():
print("""USAGE:
Simplifies a build.log from hundreds of megabytes to <100 lines. Prints output to terminal.
Pass this script a filepath to parse. You should be able to type "python3 build_log_simplifier.py"
And then drag-and-drop a log file onto the terminal window to get its path.
Sample usage: python3 development/build_log_simplifier.py Users/owengray/Desktop/build.log
""")
exit(1)
try:
build_log_loc = sys.argv[1]
infile = open(build_log_loc)
tasks_of_interest = []
# first, find tasks of interest
for line in infile:
if line.startswith("Execution failed for task"):
tasks_of_interest.append(line.split("task '")[1][:-3])
infile.close()
infile = open(build_log_loc)
print("Tasks of interest: " + str(tasks_of_interest))
# next, save all excerpts between start(interesting task) and end(interesting task)
current_interesting_tasks = []
retained_lines = []
for line in infile:
if line.startswith("Task ") and line.split(" ")[1] in tasks_of_interest:
if line.split(" ")[-1].strip() == "Starting":
current_interesting_tasks.append(line.split(" ")[1])
elif line.split(" ")[-1].strip() == "Finished":
current_interesting_tasks.remove(line.split(" ")[1])
retained_lines.append(line)
if current_interesting_tasks: retained_lines.append(line)
print(len(retained_lines))
print(''.join(retained_lines))
except Exception as e:
print("An error occurred! "+str(e))
usage()
|
<commit_before><commit_msg>Add a script for simplifying build.log files
Change-Id: Icfc409816b7f7cd028c3022e7073741f60ff30b6
Test: run the script
Bug: b/155305020<commit_after>#!/usr/bin/python3
#
# Copyright (C) 2016 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
def usage():
print("""USAGE:
Simplifies a build.log from hundreds of megabytes to <100 lines. Prints output to terminal.
Pass this script a filepath to parse. You should be able to type "python3 build_log_simplifier.py"
And then drag-and-drop a log file onto the terminal window to get its path.
Sample usage: python3 development/build_log_simplifier.py Users/owengray/Desktop/build.log
""")
exit(1)
try:
build_log_loc = sys.argv[1]
infile = open(build_log_loc)
tasks_of_interest = []
# first, find tasks of interest
for line in infile:
if line.startswith("Execution failed for task"):
tasks_of_interest.append(line.split("task '")[1][:-3])
infile.close()
infile = open(build_log_loc)
print("Tasks of interest: " + str(tasks_of_interest))
# next, save all excerpts between start(interesting task) and end(interesting task)
current_interesting_tasks = []
retained_lines = []
for line in infile:
if line.startswith("Task ") and line.split(" ")[1] in tasks_of_interest:
if line.split(" ")[-1].strip() == "Starting":
current_interesting_tasks.append(line.split(" ")[1])
elif line.split(" ")[-1].strip() == "Finished":
current_interesting_tasks.remove(line.split(" ")[1])
retained_lines.append(line)
if current_interesting_tasks: retained_lines.append(line)
print(len(retained_lines))
print(''.join(retained_lines))
except Exception as e:
print("An error occurred! "+str(e))
usage()
|
|
0e3cecfb94417b7a29765880732684c325ae2406
|
test/test_synthesis.py
|
test/test_synthesis.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2015-2016:
# Matthieu Estrada, ttamalfor@gmail.com
#
# This file is part of (AlignakApp).
#
# (AlignakApp) is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# (AlignakApp) is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with (AlignakApp). If not, see <http://www.gnu.org/licenses/>.
import sys
import unittest2
from alignak_app.core.utils import init_config
from alignak_app.core.backend import AppBackend
from alignak_app.synthesis.synthesis import Synthesis
try:
__import__('PyQt5')
from PyQt5.QtWidgets import QApplication, QPushButton
from PyQt5.QtWidgets import QWidget, QLabel
except ImportError:
from PyQt4.Qt import QApplication, QPushButton
from PyQt4.Qt import QWidget, QLabel
class TestServicesView(unittest2.TestCase):
"""
This file test the Synthesis class.
"""
init_config()
app_backend = AppBackend()
app_backend.login()
@classmethod
def setUpClass(cls):
"""Create QApplication"""
try:
cls.app = QApplication(sys.argv)
cls.widget = QWidget()
except:
pass
def test_initialize(self):
"""Initialize Synthesis"""
under_test = Synthesis()
self.assertFalse(under_test.app_backend)
self.assertFalse(under_test.action_manager)
self.assertFalse(under_test.host_view)
self.assertFalse(under_test.services_view)
self.assertTrue(under_test.line_search)
under_test.initialize(self.app_backend)
self.assertTrue(under_test.app_backend)
self.assertTrue(under_test.action_manager)
self.assertTrue(under_test.host_view)
self.assertTrue(under_test.services_view)
self.assertTrue(under_test.line_search)
|
Add Unit Tests for Synthesis
|
Add Unit Tests for Synthesis
Ref #127 synthesis.py
|
Python
|
agpl-3.0
|
Alignak-monitoring-contrib/alignak-app,Alignak-monitoring-contrib/alignak-app
|
Add Unit Tests for Synthesis
Ref #127 synthesis.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2015-2016:
# Matthieu Estrada, ttamalfor@gmail.com
#
# This file is part of (AlignakApp).
#
# (AlignakApp) is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# (AlignakApp) is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with (AlignakApp). If not, see <http://www.gnu.org/licenses/>.
import sys
import unittest2
from alignak_app.core.utils import init_config
from alignak_app.core.backend import AppBackend
from alignak_app.synthesis.synthesis import Synthesis
try:
__import__('PyQt5')
from PyQt5.QtWidgets import QApplication, QPushButton
from PyQt5.QtWidgets import QWidget, QLabel
except ImportError:
from PyQt4.Qt import QApplication, QPushButton
from PyQt4.Qt import QWidget, QLabel
class TestServicesView(unittest2.TestCase):
"""
This file test the Synthesis class.
"""
init_config()
app_backend = AppBackend()
app_backend.login()
@classmethod
def setUpClass(cls):
"""Create QApplication"""
try:
cls.app = QApplication(sys.argv)
cls.widget = QWidget()
except:
pass
def test_initialize(self):
"""Initialize Synthesis"""
under_test = Synthesis()
self.assertFalse(under_test.app_backend)
self.assertFalse(under_test.action_manager)
self.assertFalse(under_test.host_view)
self.assertFalse(under_test.services_view)
self.assertTrue(under_test.line_search)
under_test.initialize(self.app_backend)
self.assertTrue(under_test.app_backend)
self.assertTrue(under_test.action_manager)
self.assertTrue(under_test.host_view)
self.assertTrue(under_test.services_view)
self.assertTrue(under_test.line_search)
|
<commit_before><commit_msg>Add Unit Tests for Synthesis
Ref #127 synthesis.py<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2015-2016:
# Matthieu Estrada, ttamalfor@gmail.com
#
# This file is part of (AlignakApp).
#
# (AlignakApp) is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# (AlignakApp) is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with (AlignakApp). If not, see <http://www.gnu.org/licenses/>.
import sys
import unittest2
from alignak_app.core.utils import init_config
from alignak_app.core.backend import AppBackend
from alignak_app.synthesis.synthesis import Synthesis
try:
__import__('PyQt5')
from PyQt5.QtWidgets import QApplication, QPushButton
from PyQt5.QtWidgets import QWidget, QLabel
except ImportError:
from PyQt4.Qt import QApplication, QPushButton
from PyQt4.Qt import QWidget, QLabel
class TestServicesView(unittest2.TestCase):
"""
This file test the Synthesis class.
"""
init_config()
app_backend = AppBackend()
app_backend.login()
@classmethod
def setUpClass(cls):
"""Create QApplication"""
try:
cls.app = QApplication(sys.argv)
cls.widget = QWidget()
except:
pass
def test_initialize(self):
"""Initialize Synthesis"""
under_test = Synthesis()
self.assertFalse(under_test.app_backend)
self.assertFalse(under_test.action_manager)
self.assertFalse(under_test.host_view)
self.assertFalse(under_test.services_view)
self.assertTrue(under_test.line_search)
under_test.initialize(self.app_backend)
self.assertTrue(under_test.app_backend)
self.assertTrue(under_test.action_manager)
self.assertTrue(under_test.host_view)
self.assertTrue(under_test.services_view)
self.assertTrue(under_test.line_search)
|
Add Unit Tests for Synthesis
Ref #127 synthesis.py#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2015-2016:
# Matthieu Estrada, ttamalfor@gmail.com
#
# This file is part of (AlignakApp).
#
# (AlignakApp) is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# (AlignakApp) is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with (AlignakApp). If not, see <http://www.gnu.org/licenses/>.
import sys
import unittest2
from alignak_app.core.utils import init_config
from alignak_app.core.backend import AppBackend
from alignak_app.synthesis.synthesis import Synthesis
try:
__import__('PyQt5')
from PyQt5.QtWidgets import QApplication, QPushButton
from PyQt5.QtWidgets import QWidget, QLabel
except ImportError:
from PyQt4.Qt import QApplication, QPushButton
from PyQt4.Qt import QWidget, QLabel
class TestServicesView(unittest2.TestCase):
"""
This file test the Synthesis class.
"""
init_config()
app_backend = AppBackend()
app_backend.login()
@classmethod
def setUpClass(cls):
"""Create QApplication"""
try:
cls.app = QApplication(sys.argv)
cls.widget = QWidget()
except:
pass
def test_initialize(self):
"""Initialize Synthesis"""
under_test = Synthesis()
self.assertFalse(under_test.app_backend)
self.assertFalse(under_test.action_manager)
self.assertFalse(under_test.host_view)
self.assertFalse(under_test.services_view)
self.assertTrue(under_test.line_search)
under_test.initialize(self.app_backend)
self.assertTrue(under_test.app_backend)
self.assertTrue(under_test.action_manager)
self.assertTrue(under_test.host_view)
self.assertTrue(under_test.services_view)
self.assertTrue(under_test.line_search)
|
<commit_before><commit_msg>Add Unit Tests for Synthesis
Ref #127 synthesis.py<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c) 2015-2016:
# Matthieu Estrada, ttamalfor@gmail.com
#
# This file is part of (AlignakApp).
#
# (AlignakApp) is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# (AlignakApp) is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with (AlignakApp). If not, see <http://www.gnu.org/licenses/>.
import sys
import unittest2
from alignak_app.core.utils import init_config
from alignak_app.core.backend import AppBackend
from alignak_app.synthesis.synthesis import Synthesis
try:
__import__('PyQt5')
from PyQt5.QtWidgets import QApplication, QPushButton
from PyQt5.QtWidgets import QWidget, QLabel
except ImportError:
from PyQt4.Qt import QApplication, QPushButton
from PyQt4.Qt import QWidget, QLabel
class TestServicesView(unittest2.TestCase):
"""
This file test the Synthesis class.
"""
init_config()
app_backend = AppBackend()
app_backend.login()
@classmethod
def setUpClass(cls):
"""Create QApplication"""
try:
cls.app = QApplication(sys.argv)
cls.widget = QWidget()
except:
pass
def test_initialize(self):
"""Initialize Synthesis"""
under_test = Synthesis()
self.assertFalse(under_test.app_backend)
self.assertFalse(under_test.action_manager)
self.assertFalse(under_test.host_view)
self.assertFalse(under_test.services_view)
self.assertTrue(under_test.line_search)
under_test.initialize(self.app_backend)
self.assertTrue(under_test.app_backend)
self.assertTrue(under_test.action_manager)
self.assertTrue(under_test.host_view)
self.assertTrue(under_test.services_view)
self.assertTrue(under_test.line_search)
|
|
ff61b83fa07a944eb5f82d560643d6e8aed4c172
|
saulify/sitespec.py
|
saulify/sitespec.py
|
""" Reading and representation of Instapaper spec files. """
import sys
from saulify.clean import clean_content
class TestCase(object):
"""
Test case for the article scraper.
Attributes:
url (str): URL of the page being tested
fragments (list of str): Fragments of text that should be present in
the output of the scraper.
"""
def __init__(self, url):
self.url = url
self.fragments = []
def add_contains(self, fragment):
self.fragments.append(fragment)
def run(self):
try:
output = clean_content(self.url)["plaintext"]
except Exception as e:
sys.stderr.write("Exception on " + self.url + " :\n")
sys.stderr.write(str(e))
return {
"url": self.url,
"status": "EXCEPTION",
"message": "e.message"
}
else:
return {
"url": self.url,
"status": "OK",
"missing_fragments": self.missing_fragments(output),
}
def missing_fragments(self, text):
missing = []
for s in self.fragments:
if s not in text:
missing.append(s)
return missing
def load_testcases(fpath):
"""
Reads test cases from an Instapaper spec file.
Scans file until it reaches a line labelled "test_url", then creates a
new ``TestCase`` object. Subsequent lines populate the test case.
Multiple test cases in a single file are supported.
Args:
fpath: Path to the spec file.
Returns:
A list of ``TestCase`` objects.
"""
def parse_specline(line):
parts = line.partition(':')
label = parts[0]
content = parts[2].strip()
return (label, content)
cases = []
with open(fpath) as f:
for line in f:
(label, content) = parse_specline(line)
if label == "test_url":
url = content
case = TestCase(url)
cases.append(case)
elif label == "test_contains":
if not cases:
raise Exception("Invalid spec file: " + fpath)
fragment = content
cases[-1].add_contains(fragment)
return cases
|
Add module handling Instapaper config files
|
Add module handling Instapaper config files
|
Python
|
agpl-3.0
|
asm-products/saulify-web,asm-products/saulify-web,asm-products/saulify-web
|
Add module handling Instapaper config files
|
""" Reading and representation of Instapaper spec files. """
import sys
from saulify.clean import clean_content
class TestCase(object):
"""
Test case for the article scraper.
Attributes:
url (str): URL of the page being tested
fragments (list of str): Fragments of text that should be present in
the output of the scraper.
"""
def __init__(self, url):
self.url = url
self.fragments = []
def add_contains(self, fragment):
self.fragments.append(fragment)
def run(self):
try:
output = clean_content(self.url)["plaintext"]
except Exception as e:
sys.stderr.write("Exception on " + self.url + " :\n")
sys.stderr.write(str(e))
return {
"url": self.url,
"status": "EXCEPTION",
"message": "e.message"
}
else:
return {
"url": self.url,
"status": "OK",
"missing_fragments": self.missing_fragments(output),
}
def missing_fragments(self, text):
missing = []
for s in self.fragments:
if s not in text:
missing.append(s)
return missing
def load_testcases(fpath):
"""
Reads test cases from an Instapaper spec file.
Scans file until it reaches a line labelled "test_url", then creates a
new ``TestCase`` object. Subsequent lines populate the test case.
Multiple test cases in a single file are supported.
Args:
fpath: Path to the spec file.
Returns:
A list of ``TestCase`` objects.
"""
def parse_specline(line):
parts = line.partition(':')
label = parts[0]
content = parts[2].strip()
return (label, content)
cases = []
with open(fpath) as f:
for line in f:
(label, content) = parse_specline(line)
if label == "test_url":
url = content
case = TestCase(url)
cases.append(case)
elif label == "test_contains":
if not cases:
raise Exception("Invalid spec file: " + fpath)
fragment = content
cases[-1].add_contains(fragment)
return cases
|
<commit_before><commit_msg>Add module handling Instapaper config files<commit_after>
|
""" Reading and representation of Instapaper spec files. """
import sys
from saulify.clean import clean_content
class TestCase(object):
"""
Test case for the article scraper.
Attributes:
url (str): URL of the page being tested
fragments (list of str): Fragments of text that should be present in
the output of the scraper.
"""
def __init__(self, url):
self.url = url
self.fragments = []
def add_contains(self, fragment):
self.fragments.append(fragment)
def run(self):
try:
output = clean_content(self.url)["plaintext"]
except Exception as e:
sys.stderr.write("Exception on " + self.url + " :\n")
sys.stderr.write(str(e))
return {
"url": self.url,
"status": "EXCEPTION",
"message": "e.message"
}
else:
return {
"url": self.url,
"status": "OK",
"missing_fragments": self.missing_fragments(output),
}
def missing_fragments(self, text):
missing = []
for s in self.fragments:
if s not in text:
missing.append(s)
return missing
def load_testcases(fpath):
"""
Reads test cases from an Instapaper spec file.
Scans file until it reaches a line labelled "test_url", then creates a
new ``TestCase`` object. Subsequent lines populate the test case.
Multiple test cases in a single file are supported.
Args:
fpath: Path to the spec file.
Returns:
A list of ``TestCase`` objects.
"""
def parse_specline(line):
parts = line.partition(':')
label = parts[0]
content = parts[2].strip()
return (label, content)
cases = []
with open(fpath) as f:
for line in f:
(label, content) = parse_specline(line)
if label == "test_url":
url = content
case = TestCase(url)
cases.append(case)
elif label == "test_contains":
if not cases:
raise Exception("Invalid spec file: " + fpath)
fragment = content
cases[-1].add_contains(fragment)
return cases
|
Add module handling Instapaper config files""" Reading and representation of Instapaper spec files. """
import sys
from saulify.clean import clean_content
class TestCase(object):
"""
Test case for the article scraper.
Attributes:
url (str): URL of the page being tested
fragments (list of str): Fragments of text that should be present in
the output of the scraper.
"""
def __init__(self, url):
self.url = url
self.fragments = []
def add_contains(self, fragment):
self.fragments.append(fragment)
def run(self):
try:
output = clean_content(self.url)["plaintext"]
except Exception as e:
sys.stderr.write("Exception on " + self.url + " :\n")
sys.stderr.write(str(e))
return {
"url": self.url,
"status": "EXCEPTION",
"message": "e.message"
}
else:
return {
"url": self.url,
"status": "OK",
"missing_fragments": self.missing_fragments(output),
}
def missing_fragments(self, text):
missing = []
for s in self.fragments:
if s not in text:
missing.append(s)
return missing
def load_testcases(fpath):
"""
Reads test cases from an Instapaper spec file.
Scans file until it reaches a line labelled "test_url", then creates a
new ``TestCase`` object. Subsequent lines populate the test case.
Multiple test cases in a single file are supported.
Args:
fpath: Path to the spec file.
Returns:
A list of ``TestCase`` objects.
"""
def parse_specline(line):
parts = line.partition(':')
label = parts[0]
content = parts[2].strip()
return (label, content)
cases = []
with open(fpath) as f:
for line in f:
(label, content) = parse_specline(line)
if label == "test_url":
url = content
case = TestCase(url)
cases.append(case)
elif label == "test_contains":
if not cases:
raise Exception("Invalid spec file: " + fpath)
fragment = content
cases[-1].add_contains(fragment)
return cases
|
<commit_before><commit_msg>Add module handling Instapaper config files<commit_after>""" Reading and representation of Instapaper spec files. """
import sys
from saulify.clean import clean_content
class TestCase(object):
"""
Test case for the article scraper.
Attributes:
url (str): URL of the page being tested
fragments (list of str): Fragments of text that should be present in
the output of the scraper.
"""
def __init__(self, url):
self.url = url
self.fragments = []
def add_contains(self, fragment):
self.fragments.append(fragment)
def run(self):
try:
output = clean_content(self.url)["plaintext"]
except Exception as e:
sys.stderr.write("Exception on " + self.url + " :\n")
sys.stderr.write(str(e))
return {
"url": self.url,
"status": "EXCEPTION",
"message": "e.message"
}
else:
return {
"url": self.url,
"status": "OK",
"missing_fragments": self.missing_fragments(output),
}
def missing_fragments(self, text):
missing = []
for s in self.fragments:
if s not in text:
missing.append(s)
return missing
def load_testcases(fpath):
"""
Reads test cases from an Instapaper spec file.
Scans file until it reaches a line labelled "test_url", then creates a
new ``TestCase`` object. Subsequent lines populate the test case.
Multiple test cases in a single file are supported.
Args:
fpath: Path to the spec file.
Returns:
A list of ``TestCase`` objects.
"""
def parse_specline(line):
parts = line.partition(':')
label = parts[0]
content = parts[2].strip()
return (label, content)
cases = []
with open(fpath) as f:
for line in f:
(label, content) = parse_specline(line)
if label == "test_url":
url = content
case = TestCase(url)
cases.append(case)
elif label == "test_contains":
if not cases:
raise Exception("Invalid spec file: " + fpath)
fragment = content
cases[-1].add_contains(fragment)
return cases
|
|
8046d67046a0be334b87e0ad54d78b6cfce83aad
|
main/test_race.py
|
main/test_race.py
|
import multiprocessing
import pprint
from ppm import Trie
trie = None
def test(a):
x, trie = a
trie.add('b')
trie.add('b')
print(x, trie.bit_encoding)
return trie.bit_encoding
if __name__ == '__main__':
trie = Trie(5)
trie.add('a')
alist = [ (x, trie) for x in range(0, multiprocessing.cpu_count()) ]
pool = multiprocessing.Pool(multiprocessing.cpu_count())
result = pool.map(test, alist)
pprint.pprint(result)
|
Add a race condition tester
|
Add a race condition tester
|
Python
|
mit
|
worldwise001/stylometry
|
Add a race condition tester
|
import multiprocessing
import pprint
from ppm import Trie
trie = None
def test(a):
x, trie = a
trie.add('b')
trie.add('b')
print(x, trie.bit_encoding)
return trie.bit_encoding
if __name__ == '__main__':
trie = Trie(5)
trie.add('a')
alist = [ (x, trie) for x in range(0, multiprocessing.cpu_count()) ]
pool = multiprocessing.Pool(multiprocessing.cpu_count())
result = pool.map(test, alist)
pprint.pprint(result)
|
<commit_before><commit_msg>Add a race condition tester<commit_after>
|
import multiprocessing
import pprint
from ppm import Trie
trie = None
def test(a):
x, trie = a
trie.add('b')
trie.add('b')
print(x, trie.bit_encoding)
return trie.bit_encoding
if __name__ == '__main__':
trie = Trie(5)
trie.add('a')
alist = [ (x, trie) for x in range(0, multiprocessing.cpu_count()) ]
pool = multiprocessing.Pool(multiprocessing.cpu_count())
result = pool.map(test, alist)
pprint.pprint(result)
|
Add a race condition testerimport multiprocessing
import pprint
from ppm import Trie
trie = None
def test(a):
x, trie = a
trie.add('b')
trie.add('b')
print(x, trie.bit_encoding)
return trie.bit_encoding
if __name__ == '__main__':
trie = Trie(5)
trie.add('a')
alist = [ (x, trie) for x in range(0, multiprocessing.cpu_count()) ]
pool = multiprocessing.Pool(multiprocessing.cpu_count())
result = pool.map(test, alist)
pprint.pprint(result)
|
<commit_before><commit_msg>Add a race condition tester<commit_after>import multiprocessing
import pprint
from ppm import Trie
trie = None
def test(a):
x, trie = a
trie.add('b')
trie.add('b')
print(x, trie.bit_encoding)
return trie.bit_encoding
if __name__ == '__main__':
trie = Trie(5)
trie.add('a')
alist = [ (x, trie) for x in range(0, multiprocessing.cpu_count()) ]
pool = multiprocessing.Pool(multiprocessing.cpu_count())
result = pool.map(test, alist)
pprint.pprint(result)
|
|
ad14c2da6b7fdaf8744b48c6e714acc18c8c073c
|
render_random_particle.py
|
render_random_particle.py
|
import pygame
import random
pygame.init()
#-- SCREEN CHARACTERISTICS ------------------------->>>
background_color = (255,255,255)
(width, height) = (300, 200)
class Particle:
def __init__(self, (x, y), radius):
self.x = x
self.y = y
self.radius = radius
self.color = (255, 0, 0)
self.thickness = 1
def display(self):
pygame.draw.circle(screen, self.color, (self.x, self.y), self.radius, self.thickness)
#-- RENDER SCREEN ---------------------------------->>>
screen = pygame.display.set_mode((width, height))
screen.fill(background_color)
number_of_particles = 10
particles = []
for n in range(number_of_particles):
radius = random.randint(5, 20)
x = random.randint(radius, width - radius)
y = random.randint(radius, height - radius)
particles.append(Particle((x, y), radius))
for particle in particles:
particle.display()
#-- RUN LOOP --------------------------------------->>>
pygame.display.flip()
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
|
Add module that renders particles on screen with random position
|
Add module that renders particles on screen with random position
|
Python
|
mit
|
withtwoemms/pygame-explorations
|
Add module that renders particles on screen with random position
|
import pygame
import random
pygame.init()
#-- SCREEN CHARACTERISTICS ------------------------->>>
background_color = (255,255,255)
(width, height) = (300, 200)
class Particle:
def __init__(self, (x, y), radius):
self.x = x
self.y = y
self.radius = radius
self.color = (255, 0, 0)
self.thickness = 1
def display(self):
pygame.draw.circle(screen, self.color, (self.x, self.y), self.radius, self.thickness)
#-- RENDER SCREEN ---------------------------------->>>
screen = pygame.display.set_mode((width, height))
screen.fill(background_color)
number_of_particles = 10
particles = []
for n in range(number_of_particles):
radius = random.randint(5, 20)
x = random.randint(radius, width - radius)
y = random.randint(radius, height - radius)
particles.append(Particle((x, y), radius))
for particle in particles:
particle.display()
#-- RUN LOOP --------------------------------------->>>
pygame.display.flip()
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
|
<commit_before><commit_msg>Add module that renders particles on screen with random position<commit_after>
|
import pygame
import random
pygame.init()
#-- SCREEN CHARACTERISTICS ------------------------->>>
background_color = (255,255,255)
(width, height) = (300, 200)
class Particle:
def __init__(self, (x, y), radius):
self.x = x
self.y = y
self.radius = radius
self.color = (255, 0, 0)
self.thickness = 1
def display(self):
pygame.draw.circle(screen, self.color, (self.x, self.y), self.radius, self.thickness)
#-- RENDER SCREEN ---------------------------------->>>
screen = pygame.display.set_mode((width, height))
screen.fill(background_color)
number_of_particles = 10
particles = []
for n in range(number_of_particles):
radius = random.randint(5, 20)
x = random.randint(radius, width - radius)
y = random.randint(radius, height - radius)
particles.append(Particle((x, y), radius))
for particle in particles:
particle.display()
#-- RUN LOOP --------------------------------------->>>
pygame.display.flip()
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
|
Add module that renders particles on screen with random positionimport pygame
import random
pygame.init()
#-- SCREEN CHARACTERISTICS ------------------------->>>
background_color = (255,255,255)
(width, height) = (300, 200)
class Particle:
def __init__(self, (x, y), radius):
self.x = x
self.y = y
self.radius = radius
self.color = (255, 0, 0)
self.thickness = 1
def display(self):
pygame.draw.circle(screen, self.color, (self.x, self.y), self.radius, self.thickness)
#-- RENDER SCREEN ---------------------------------->>>
screen = pygame.display.set_mode((width, height))
screen.fill(background_color)
number_of_particles = 10
particles = []
for n in range(number_of_particles):
radius = random.randint(5, 20)
x = random.randint(radius, width - radius)
y = random.randint(radius, height - radius)
particles.append(Particle((x, y), radius))
for particle in particles:
particle.display()
#-- RUN LOOP --------------------------------------->>>
pygame.display.flip()
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
|
<commit_before><commit_msg>Add module that renders particles on screen with random position<commit_after>import pygame
import random
pygame.init()
#-- SCREEN CHARACTERISTICS ------------------------->>>
background_color = (255,255,255)
(width, height) = (300, 200)
class Particle:
def __init__(self, (x, y), radius):
self.x = x
self.y = y
self.radius = radius
self.color = (255, 0, 0)
self.thickness = 1
def display(self):
pygame.draw.circle(screen, self.color, (self.x, self.y), self.radius, self.thickness)
#-- RENDER SCREEN ---------------------------------->>>
screen = pygame.display.set_mode((width, height))
screen.fill(background_color)
number_of_particles = 10
particles = []
for n in range(number_of_particles):
radius = random.randint(5, 20)
x = random.randint(radius, width - radius)
y = random.randint(radius, height - radius)
particles.append(Particle((x, y), radius))
for particle in particles:
particle.display()
#-- RUN LOOP --------------------------------------->>>
pygame.display.flip()
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
|
|
ba49f8608bfe0130ea3b971e2844d6ad43b6e2ba
|
trypython/stdlib/sys_/sys_getsizeof_vs_dunder_sizeof_diff.py
|
trypython/stdlib/sys_/sys_getsizeof_vs_dunder_sizeof_diff.py
|
"""
Sample confirming that sys.getsizeof() and __sizeof__() can return different values
REFERENCES:: http://bit.ly/2GTVkbs
"""
import sys
from trypython.common.commoncls import SampleBase
class Sample(SampleBase):
def exec(self):
        # sys.getsizeof() adds the size of the GC header, so it can
        # differ from the value returned by __sizeof__()
list_data = [10, 20]
sys_sizeof = sys.getsizeof(list_data)
dunder_sizeof = list_data.__sizeof__()
self._print(type(list_data), sys_sizeof, dunder_sizeof)
        # some objects report the same value
str_data = "hello world"
sys_sizeof = sys.getsizeof(str_data)
dunder_sizeof = str_data.__sizeof__()
self._print(type(str_data), sys_sizeof, dunder_sizeof)
int_data = 1_000_000_000
sys_sizeof = sys.getsizeof(int_data)
dunder_sizeof = int_data.__sizeof__()
self._print(type(int_data), sys_sizeof, dunder_sizeof)
# noinspection PyMethodMayBeStatic
def _print(self, t: type, sys_sizeof: int, dunder_sizeof: int):
print(f'{t}:\tsys.getsizeof: {sys_sizeof}\tdunder_sizeof: {dunder_sizeof}')
def go():
obj = Sample()
obj.exec()
|
Add sys.getsizeof() vs ```__sizeof__()``` difference example
|
Add sys.getsizeof() vs ```__sizeof__()``` difference example
|
Python
|
mit
|
devlights/try-python
|
Add sys.getsizeof() vs ```__sizeof__()``` difference example
|
"""
Sample confirming that sys.getsizeof() and __sizeof__() can return different values
REFERENCES:: http://bit.ly/2GTVkbs
"""
import sys
from trypython.common.commoncls import SampleBase
class Sample(SampleBase):
def exec(self):
        # sys.getsizeof() adds the size of the GC header, so it can
        # differ from the value returned by __sizeof__()
list_data = [10, 20]
sys_sizeof = sys.getsizeof(list_data)
dunder_sizeof = list_data.__sizeof__()
self._print(type(list_data), sys_sizeof, dunder_sizeof)
        # some objects report the same value
str_data = "hello world"
sys_sizeof = sys.getsizeof(str_data)
dunder_sizeof = str_data.__sizeof__()
self._print(type(str_data), sys_sizeof, dunder_sizeof)
int_data = 1_000_000_000
sys_sizeof = sys.getsizeof(int_data)
dunder_sizeof = int_data.__sizeof__()
self._print(type(int_data), sys_sizeof, dunder_sizeof)
# noinspection PyMethodMayBeStatic
def _print(self, t: type, sys_sizeof: int, dunder_sizeof: int):
print(f'{t}:\tsys.getsizeof: {sys_sizeof}\tdunder_sizeof: {dunder_sizeof}')
def go():
obj = Sample()
obj.exec()
|
<commit_before><commit_msg>Add sys.getsizeof() vs ```__sizeof__()``` difference example<commit_after>
|
"""
Sample confirming that sys.getsizeof() and __sizeof__() can return different values
REFERENCES:: http://bit.ly/2GTVkbs
"""
import sys
from trypython.common.commoncls import SampleBase
class Sample(SampleBase):
def exec(self):
        # sys.getsizeof() adds the size of the GC header, so it can
        # differ from the value returned by __sizeof__()
list_data = [10, 20]
sys_sizeof = sys.getsizeof(list_data)
dunder_sizeof = list_data.__sizeof__()
self._print(type(list_data), sys_sizeof, dunder_sizeof)
        # some objects report the same value
str_data = "hello world"
sys_sizeof = sys.getsizeof(str_data)
dunder_sizeof = str_data.__sizeof__()
self._print(type(str_data), sys_sizeof, dunder_sizeof)
int_data = 1_000_000_000
sys_sizeof = sys.getsizeof(int_data)
dunder_sizeof = int_data.__sizeof__()
self._print(type(int_data), sys_sizeof, dunder_sizeof)
# noinspection PyMethodMayBeStatic
def _print(self, t: type, sys_sizeof: int, dunder_sizeof: int):
print(f'{t}:\tsys.getsizeof: {sys_sizeof}\tdunder_sizeof: {dunder_sizeof}')
def go():
obj = Sample()
obj.exec()
|
Add sys.getsizeof() vs ```__sizeof__()``` difference example"""
Sample confirming that sys.getsizeof() and __sizeof__() can return different values
REFERENCES:: http://bit.ly/2GTVkbs
"""
import sys
from trypython.common.commoncls import SampleBase
class Sample(SampleBase):
def exec(self):
        # sys.getsizeof() adds the size of the GC header, so it can
        # differ from the value returned by __sizeof__()
list_data = [10, 20]
sys_sizeof = sys.getsizeof(list_data)
dunder_sizeof = list_data.__sizeof__()
self._print(type(list_data), sys_sizeof, dunder_sizeof)
        # some objects report the same value
str_data = "hello world"
sys_sizeof = sys.getsizeof(str_data)
dunder_sizeof = str_data.__sizeof__()
self._print(type(str_data), sys_sizeof, dunder_sizeof)
int_data = 1_000_000_000
sys_sizeof = sys.getsizeof(int_data)
dunder_sizeof = int_data.__sizeof__()
self._print(type(int_data), sys_sizeof, dunder_sizeof)
# noinspection PyMethodMayBeStatic
def _print(self, t: type, sys_sizeof: int, dunder_sizeof: int):
print(f'{t}:\tsys.getsizeof: {sys_sizeof}\tdunder_sizeof: {dunder_sizeof}')
def go():
obj = Sample()
obj.exec()
|
<commit_before><commit_msg>Add sys.getsizeof() vs ```__sizeof__()``` difference example<commit_after>"""
Sample confirming that sys.getsizeof() and __sizeof__() can return different values
REFERENCES:: http://bit.ly/2GTVkbs
"""
import sys
from trypython.common.commoncls import SampleBase
class Sample(SampleBase):
def exec(self):
        # sys.getsizeof() adds the size of the GC header, so it can
        # differ from the value returned by __sizeof__()
list_data = [10, 20]
sys_sizeof = sys.getsizeof(list_data)
dunder_sizeof = list_data.__sizeof__()
self._print(type(list_data), sys_sizeof, dunder_sizeof)
        # some objects report the same value
str_data = "hello world"
sys_sizeof = sys.getsizeof(str_data)
dunder_sizeof = str_data.__sizeof__()
self._print(type(str_data), sys_sizeof, dunder_sizeof)
int_data = 1_000_000_000
sys_sizeof = sys.getsizeof(int_data)
dunder_sizeof = int_data.__sizeof__()
self._print(type(int_data), sys_sizeof, dunder_sizeof)
# noinspection PyMethodMayBeStatic
def _print(self, t: type, sys_sizeof: int, dunder_sizeof: int):
print(f'{t}:\tsys.getsizeof: {sys_sizeof}\tdunder_sizeof: {dunder_sizeof}')
def go():
obj = Sample()
obj.exec()
|
|
43f4c62d45d41b9dae5582e2d7f10b187731f2b7
|
utils/test/__init__.py
|
utils/test/__init__.py
|
import os
import unittest
from google.appengine.ext import testbed
from google.appengine.datastore import datastore_stub_util
import settings
HRD_CONSISTENCY = 1
class DatastoreTestCase(unittest.TestCase):
"""Test case with stubbed high-replication datastore API. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
def tearDown(self):
super(DatastoreTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class MemcacheTestCase(unittest.TestCase):
"""Test case with stubbed memcache API."""
def setUp(self):
super(MemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_memcache_stub()
def tearDown(self):
super(MemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class DatastoreMemcacheTestCase(unittest.TestCase):
"""Test case with stubbed datastore and memcache APIs. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreMemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
self.testbed.init_memcache_stub()
def tearDown(self):
super(DatastoreMemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
|
Add datastore and memcache stub test cases
|
Add datastore and memcache stub test cases
|
Python
|
apache-2.0
|
tylertreat/gaeutils
|
Add datastore and memcache stub test cases
|
import os
import unittest
from google.appengine.ext import testbed
from google.appengine.datastore import datastore_stub_util
import settings
HRD_CONSISTENCY = 1
class DatastoreTestCase(unittest.TestCase):
"""Test case with stubbed high-replication datastore API. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
def tearDown(self):
super(DatastoreTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class MemcacheTestCase(unittest.TestCase):
"""Test case with stubbed memcache API."""
def setUp(self):
super(MemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_memcache_stub()
def tearDown(self):
super(MemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class DatastoreMemcacheTestCase(unittest.TestCase):
"""Test case with stubbed datastore and memcache APIs. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreMemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
self.testbed.init_memcache_stub()
def tearDown(self):
super(DatastoreMemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
|
<commit_before><commit_msg>Add datastore and memcache stub test cases<commit_after>
|
import os
import unittest
from google.appengine.ext import testbed
from google.appengine.datastore import datastore_stub_util
import settings
HRD_CONSISTENCY = 1
class DatastoreTestCase(unittest.TestCase):
"""Test case with stubbed high-replication datastore API. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
def tearDown(self):
super(DatastoreTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class MemcacheTestCase(unittest.TestCase):
"""Test case with stubbed memcache API."""
def setUp(self):
super(MemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_memcache_stub()
def tearDown(self):
super(MemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class DatastoreMemcacheTestCase(unittest.TestCase):
"""Test case with stubbed datastore and memcache APIs. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreMemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
self.testbed.init_memcache_stub()
def tearDown(self):
super(DatastoreMemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
|
Add datastore and memcache stub test casesimport os
import unittest
from google.appengine.ext import testbed
from google.appengine.datastore import datastore_stub_util
import settings
HRD_CONSISTENCY = 1
class DatastoreTestCase(unittest.TestCase):
"""Test case with stubbed high-replication datastore API. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
def tearDown(self):
super(DatastoreTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class MemcacheTestCase(unittest.TestCase):
"""Test case with stubbed memcache API."""
def setUp(self):
super(MemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_memcache_stub()
def tearDown(self):
super(MemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class DatastoreMemcacheTestCase(unittest.TestCase):
"""Test case with stubbed datastore and memcache APIs. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreMemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
self.testbed.init_memcache_stub()
def tearDown(self):
super(DatastoreMemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
|
<commit_before><commit_msg>Add datastore and memcache stub test cases<commit_after>import os
import unittest
from google.appengine.ext import testbed
from google.appengine.datastore import datastore_stub_util
import settings
HRD_CONSISTENCY = 1
class DatastoreTestCase(unittest.TestCase):
"""Test case with stubbed high-replication datastore API. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
def tearDown(self):
super(DatastoreTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class MemcacheTestCase(unittest.TestCase):
"""Test case with stubbed memcache API."""
def setUp(self):
super(MemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.init_memcache_stub()
def tearDown(self):
super(MemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
class DatastoreMemcacheTestCase(unittest.TestCase):
"""Test case with stubbed datastore and memcache APIs. The datastore
stub uses an optimistic, always-consistent policy, meaning writes will
always apply.
"""
def setUp(self):
super(DatastoreMemcacheTestCase, self).setUp()
self.original_environ = dict(os.environ)
os.environ['TZ'] = 'UTC'
self.testbed = testbed.Testbed()
self.testbed.activate()
self.testbed.setup_env(app_id=settings.APP_ID)
self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(
probability=HRD_CONSISTENCY)
self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
self.testbed.init_memcache_stub()
def tearDown(self):
super(DatastoreMemcacheTestCase, self).tearDown()
self.testbed.deactivate()
os.environ = self.original_environ
|
|
3fb964bd49ab5855b60ecbf8981fe0bceffb108b
|
slr1/wsgi_websocket.py
|
slr1/wsgi_websocket.py
|
import os
import gevent.socket
import redis.connection
redis.connection.socket = gevent.socket
os.environ.update(DJANGO_SETTINGS_MODULE='settings.base')
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
|
Introduce wsgi app for websockets
|
Introduce wsgi app for websockets
|
Python
|
bsd-3-clause
|
stefantsov/blackbox3,stefantsov/blackbox3,stefantsov/blackbox3
|
Introduce wsgi app for websockets
|
import os
import gevent.socket
import redis.connection
redis.connection.socket = gevent.socket
os.environ.update(DJANGO_SETTINGS_MODULE='settings.base')
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
|
<commit_before><commit_msg>Introduce wsgi app for websockets<commit_after>
|
import os
import gevent.socket
import redis.connection
redis.connection.socket = gevent.socket
os.environ.update(DJANGO_SETTINGS_MODULE='settings.base')
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
|
Introduce wsgi app for websocketsimport os
import gevent.socket
import redis.connection
redis.connection.socket = gevent.socket
os.environ.update(DJANGO_SETTINGS_MODULE='settings.base')
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
|
<commit_before><commit_msg>Introduce wsgi app for websockets<commit_after>import os
import gevent.socket
import redis.connection
redis.connection.socket = gevent.socket
os.environ.update(DJANGO_SETTINGS_MODULE='settings.base')
from ws4redis.uwsgi_runserver import uWSGIWebsocketServer
application = uWSGIWebsocketServer()
|
|
4cbe57c803de833d912cf2f325e9e710eb17aa10
|
contrib/os_export/archive_sources.py
|
contrib/os_export/archive_sources.py
|
import requests
import json
import sys
import os
def grab_source(url, output):
"""
Grab a source from a url and store it in an output file
This uses requests as a dependency because I'm lazy.
It probably would have taken me less time to just write it with
urllib than writing this docstring
"""
# We use stream because these files might be huge
response = requests.get(url, stream=True)
# We don't do anything if there's something wrong with the url
# This is basically what made urllib.urlretrieve a hassle
if not response.ok:
return
with open(output, 'w') as output_file:
for block in response.iter_content(1024):
output_file.write(block)
def archive(directory):
"""
Archive an OpenSpending dataset export directory
"""
# If we accidentally pass in something that's not a directory
# we don't do anything
if not os.path.isdir(directory):
return
# Check if the directory contains a dataset.json file
dataset = os.path.join(directory, 'dataset.json')
if not os.path.isfile(dataset):
return
# Open the dataset.json file and grab the sources listed in it
with open(dataset) as descriptor:
data = json.load(descriptor)
if len(data['sources']):
# Create an archive directory because there are some
# sources we want to grab
archive_directory = os.path.join(directory, 'archive')
if not os.path.exists(archive_directory):
os.makedirs(archive_directory)
# Loop through sources, grab them and store in an output file
# called <source_id>.csv
for source in data['sources']:
filename = '{0}.csv'.format(source['id'])
archive_file = os.path.join(archive_directory, filename)
grab_source(source['url'], output=archive_file)
# If the archive directory is empty which will happen if
# grabbing the sources failed for some reason
if not os.listdir(archive_directory):
os.rmdir(archive_directory)
if __name__ == "__main__":
# Loop through each of the arguments and archive them
for directory in sys.argv[1:]:
archive(directory)
|
Add source archiving to OpenSpending export (separate script)
|
Add source archiving to OpenSpending export (separate script)
This is a separate script because the export script is run on a
production server (or a server with access to the database) and we
don't want the production server to also do the archiving, we can
do that on some poor local machine.
|
Python
|
agpl-3.0
|
pudo/spendb,johnjohndoe/spendb,openspending/spendb,pudo/spendb,spendb/spendb,CivicVision/datahub,spendb/spendb,CivicVision/datahub,pudo/spendb,openspending/spendb,johnjohndoe/spendb,spendb/spendb,openspending/spendb,johnjohndoe/spendb,CivicVision/datahub
|
Add source archiving to OpenSpending export (separate script)
This is a separate script because the export script is run on a
production server (or a server with access to the database) and we
don't want the production server to also do the archiving, we can
do that on some poor local machine.
|
import requests
import json
import sys
import os
def grab_source(url, output):
"""
Grab a source from a url and store it in an output file
This uses requests as a dependency because I'm lazy.
It probably would have taken me less time to just write it with
urllib than writing this docstring
"""
# We use stream because these files might be huge
response = requests.get(url, stream=True)
# We don't do anything if there's something wrong with the url
# This is basically what made urllib.urlretrieve a hassle
if not response.ok:
return
with open(output, 'w') as output_file:
for block in response.iter_content(1024):
output_file.write(block)
def archive(directory):
"""
Archive an OpenSpending dataset export directory
"""
# If we accidentally pass in something that's not a directory
# we don't do anything
if not os.path.isdir(directory):
return
# Check if the directory contains a dataset.json file
dataset = os.path.join(directory, 'dataset.json')
if not os.path.isfile(dataset):
return
# Open the dataset.json file and grab the sources listed in it
with open(dataset) as descriptor:
data = json.load(descriptor)
if len(data['sources']):
# Create an archive directory because there are some
# sources we want to grab
archive_directory = os.path.join(directory, 'archive')
if not os.path.exists(archive_directory):
os.makedirs(archive_directory)
# Loop through sources, grab them and store in an output file
# called <source_id>.csv
for source in data['sources']:
filename = '{0}.csv'.format(source['id'])
archive_file = os.path.join(archive_directory, filename)
grab_source(source['url'], output=archive_file)
# If the archive directory is empty which will happen if
# grabbing the sources failed for some reason
if not os.listdir(archive_directory):
os.rmdir(archive_directory)
if __name__ == "__main__":
# Loop through each of the arguments and archive them
for directory in sys.argv[1:]:
archive(directory)
|
<commit_before><commit_msg>Add source archiving to OpenSpending export (separate script)
This is a separate script because the export script is run on a
production server (or a server with access to the database) and we
don't want the production server to also do the archiving, we can
do that on some poor local machine.<commit_after>
|
import requests
import json
import sys
import os
def grab_source(url, output):
"""
Grab a source from a url and store it in an output file
This uses requests as a dependency because I'm lazy.
It probably would have taken me less time to just write it with
urllib than writing this docstring
"""
# We use stream because these files might be huge
response = requests.get(url, stream=True)
# We don't do anything if there's something wrong with the url
# This is basically what made urllib.urlretrieve a hassle
if not response.ok:
return
with open(output, 'w') as output_file:
for block in response.iter_content(1024):
output_file.write(block)
def archive(directory):
"""
Archive an OpenSpending dataset export directory
"""
# If we accidentally pass in something that's not a directory
# we don't do anything
if not os.path.isdir(directory):
return
# Check if the directory contains a dataset.json file
dataset = os.path.join(directory, 'dataset.json')
if not os.path.isfile(dataset):
return
# Open the dataset.json file and grab the sources listed in it
with open(dataset) as descriptor:
data = json.load(descriptor)
if len(data['sources']):
# Create an archive directory because there are some
# sources we want to grab
archive_directory = os.path.join(directory, 'archive')
if not os.path.exists(archive_directory):
os.makedirs(archive_directory)
# Loop through sources, grab them and store in an output file
# called <source_id>.csv
for source in data['sources']:
filename = '{0}.csv'.format(source['id'])
archive_file = os.path.join(archive_directory, filename)
grab_source(source['url'], output=archive_file)
# If the archive directory is empty which will happen if
# grabbing the sources failed for some reason
if not os.listdir(archive_directory):
os.rmdir(archive_directory)
if __name__ == "__main__":
# Loop through each of the arguments and archive them
for directory in sys.argv[1:]:
archive(directory)
|
Add source archiving to OpenSpending export (separate script)
This is a separate script because the export script is run on a
production server (or a server with access to the database) and we
don't want the production server to also do the archiving, we can
do that on some poor local machine.import requests
import json
import sys
import os
def grab_source(url, output):
"""
Grab a source from a url and store it in an output file
This uses requests as a dependency because I'm lazy.
It probably would have taken me less time to just write it with
urllib than writing this docstring
"""
# We use stream because these files might be huge
response = requests.get(url, stream=True)
# We don't do anything if there's something wrong with the url
# This is basically what made urllib.urlretrieve a hassle
if not response.ok:
return
with open(output, 'w') as output_file:
for block in response.iter_content(1024):
output_file.write(block)
def archive(directory):
"""
Archive an OpenSpending dataset export directory
"""
# If we accidentally pass in something that's not a directory
# we don't do anything
if not os.path.isdir(directory):
return
# Check if the directory contains a dataset.json file
dataset = os.path.join(directory, 'dataset.json')
if not os.path.isfile(dataset):
return
# Open the dataset.json file and grab the sources listed in it
with open(dataset) as descriptor:
data = json.load(descriptor)
if len(data['sources']):
# Create an archive directory because there are some
# sources we want to grab
archive_directory = os.path.join(directory, 'archive')
if not os.path.exists(archive_directory):
os.makedirs(archive_directory)
# Loop through sources, grab them and store in an output file
# called <source_id>.csv
for source in data['sources']:
filename = '{0}.csv'.format(source['id'])
archive_file = os.path.join(archive_directory, filename)
grab_source(source['url'], output=archive_file)
# If the archive directory is empty which will happen if
# grabbing the sources failed for some reason
if not os.listdir(archive_directory):
os.rmdir(archive_directory)
if __name__ == "__main__":
# Loop through each of the arguments and archive them
for directory in sys.argv[1:]:
archive(directory)
|
<commit_before><commit_msg>Add source archiving to OpenSpending export (separate script)
This is a separate script because the export script is run on a
production server (or a server with access to the database) and we
don't want the production server to also do the archiving, we can
do that on some poor local machine.<commit_after>import requests
import json
import sys
import os
def grab_source(url, output):
"""
Grab a source from a url and store it in an output file
This uses requests as a dependency because I'm lazy.
It probably would have taken me less time to just write it with
urllib than writing this docstring
"""
# We use stream because these files might be huge
response = requests.get(url, stream=True)
# We don't do anything if there's something wrong with the url
# This is basically what made urllib.urlretrieve a hassle
if not response.ok:
return
with open(output, 'w') as output_file:
for block in response.iter_content(1024):
output_file.write(block)
def archive(directory):
"""
Archive an OpenSpending dataset export directory
"""
# If we accidentally pass in something that's not a directory
# we don't do anything
if not os.path.isdir(directory):
return
# Check if the directory contains a dataset.json file
dataset = os.path.join(directory, 'dataset.json')
if not os.path.isfile(dataset):
return
# Open the dataset.json file and grab the sources listed in it
with open(dataset) as descriptor:
data = json.load(descriptor)
if len(data['sources']):
# Create an archive directory because there are some
# sources we want to grab
archive_directory = os.path.join(directory, 'archive')
if not os.path.exists(archive_directory):
os.makedirs(archive_directory)
# Loop through sources, grab them and store in an output file
# called <source_id>.csv
for source in data['sources']:
filename = '{0}.csv'.format(source['id'])
archive_file = os.path.join(archive_directory, filename)
grab_source(source['url'], output=archive_file)
# If the archive directory is empty which will happen if
# grabbing the sources failed for some reason
if not os.listdir(archive_directory):
os.rmdir(archive_directory)
if __name__ == "__main__":
# Loop through each of the arguments and archive them
for directory in sys.argv[1:]:
archive(directory)
|
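A small usage sketch for the archive() helper above when driving it from Python instead of the command line; the exports directory name is a placeholder, not part of the commit.
import os
for entry in os.listdir('exports'):
    archive(os.path.join('exports', entry))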
|
5594a131574e410d750fbc20d40c0b13c67671e9
|
pinax/app_name/apps.py
|
pinax/app_name/apps.py
|
import importlib
from django.apps import AppConfig as BaseAppConfig
from django.utils.translation import ugettext_lazy as _
class AppConfig(BaseAppConfig):
name = "pinax.{{ app_name }}"
label = "pinax_{{ app_name }}"
verbose_name = _("Pinax {{ app_name|title }}")
|
Add AppConfig to namespace tables with label
|
Add AppConfig to namespace tables with label
|
Python
|
mit
|
pinax/pinax-starter-app
|
Add AppConfig to namespace tables with label
|
import importlib
from django.apps import AppConfig as BaseAppConfig
from django.utils.translation import ugettext_lazy as _
class AppConfig(BaseAppConfig):
name = "pinax.{{ app_name }}"
label = "pinax_{{ app_name }}"
verbose_name = _("Pinax {{ app_name|title }}")
|
<commit_before><commit_msg>Add AppConfig to namespace tables with label<commit_after>
|
import importlib
from django.apps import AppConfig as BaseAppConfig
from django.utils.translation import ugettext_lazy as _
class AppConfig(BaseAppConfig):
name = "pinax.{{ app_name }}"
label = "pinax_{{ app_name }}"
verbose_name = _("Pinax {{ app_name|title }}")
|
Add AppConfig to namespace tables with labelimport importlib
from django.apps import AppConfig as BaseAppConfig
from django.utils.translation import ugettext_lazy as _
class AppConfig(BaseAppConfig):
name = "pinax.{{ app_name }}"
label = "pinax_{{ app_name }}"
verbose_name = _("Pinax {{ app_name|title }}")
|
<commit_before><commit_msg>Add AppConfig to namespace tables with label<commit_after>import importlib
from django.apps import AppConfig as BaseAppConfig
from django.utils.translation import ugettext_lazy as _
class AppConfig(BaseAppConfig):
name = "pinax.{{ app_name }}"
label = "pinax_{{ app_name }}"
verbose_name = _("Pinax {{ app_name|title }}")
|
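For illustration, the conventional Django (pre-3.2) hook that points an app at this config would be the line below in the app's __init__.py; it is an assumption about how the template is wired, not part of the commit.
default_app_config = "pinax.{{ app_name }}.apps.AppConfig"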
|
57d207ff7facac0d3970f96f5ac91bbb6e7ec7f8
|
indra/tools/hypothesis_annotator.py
|
indra/tools/hypothesis_annotator.py
|
import logging
from indra.sources import indra_db_rest
from indra.pipeline import AssemblyPipeline
from indra.sources.hypothesis import upload_statement_annotation
ref_priority = ['TRID', 'PMCID', 'PMID']
logger = logging.getLogger(__name__)
def annotate_paper(text_refs, pipeline=None):
"""Upload INDRA Statements as annotations for a given paper.
Parameters
----------
text_refs : dict
A dict of text references, following the same format as
the INDRA Evidence text_refs attribute.
pipeline : Optional[json]
A list of pipeline steps (typically filters) that are applied
before uploading statements to hypothes.is as annotations.
"""
for ref_ns in ref_priority:
ref_id = text_refs.get(ref_ns)
if ref_id:
break
else:
logger.info('Could not find appropriate text refs')
return
ip = indra_db_rest.get_statements_for_paper([(ref_ns.lower(), ref_id)])
stmts = ip.statements
# Cut down evidences to ones just from this paper
for stmt in stmts:
stmt.evidence = [ev for ev in stmt.evidence if
ev.text_refs.get(ref_ns) == ref_id]
if pipeline:
ap = AssemblyPipeline(pipeline)
stmts = ap.run(stmts)
logger.info('Uploading %d statements to hypothes.is' % len(stmts))
for stmt in stmts:
upload_statement_annotation(stmt, annotate_agents=True)
|
Add tool for annotating a given paper
|
Add tool for annotating a given paper
|
Python
|
bsd-2-clause
|
bgyori/indra,bgyori/indra,bgyori/indra,johnbachman/indra,johnbachman/indra,sorgerlab/indra,sorgerlab/belpy,sorgerlab/indra,sorgerlab/belpy,sorgerlab/belpy,sorgerlab/indra,johnbachman/indra
|
Add tool for annotating a given paper
|
import logging
from indra.sources import indra_db_rest
from indra.pipeline import AssemblyPipeline
from indra.sources.hypothesis import upload_statement_annotation
ref_priority = ['TRID', 'PMCID', 'PMID']
logger = logging.getLogger(__name__)
def annotate_paper(text_refs, pipeline=None):
"""Upload INDRA Statements as annotations for a given paper.
Parameters
----------
text_refs : dict
A dict of text references, following the same format as
the INDRA Evidence text_refs attribute.
pipeline : Optional[json]
A list of pipeline steps (typically filters) that are applied
before uploading statements to hypothes.is as annotations.
"""
for ref_ns in ref_priority:
ref_id = text_refs.get(ref_ns)
if ref_id:
break
else:
logger.info('Could not find appropriate text refs')
return
ip = indra_db_rest.get_statements_for_paper([(ref_ns.lower(), ref_id)])
stmts = ip.statements
# Cut down evidences to ones just from this paper
for stmt in stmts:
stmt.evidence = [ev for ev in stmt.evidence if
ev.text_refs.get(ref_ns) == ref_id]
if pipeline:
ap = AssemblyPipeline(pipeline)
stmts = ap.run(stmts)
logger.info('Uploading %d statements to hypothes.is' % len(stmts))
for stmt in stmts:
upload_statement_annotation(stmt, annotate_agents=True)
|
<commit_before><commit_msg>Add tool for annotating a given paper<commit_after>
|
import logging
from indra.sources import indra_db_rest
from indra.pipeline import AssemblyPipeline
from indra.sources.hypothesis import upload_statement_annotation
ref_priority = ['TRID', 'PMCID', 'PMID']
logger = logging.getLogger(__name__)
def annotate_paper(text_refs, pipeline=None):
"""Upload INDRA Statements as annotations for a given paper.
Parameters
----------
text_refs : dict
A dict of text references, following the same format as
the INDRA Evidence text_refs attribute.
pipeline : Optional[json]
A list of pipeline steps (typically filters) that are applied
before uploading statements to hypothes.is as annotations.
"""
for ref_ns in ref_priority:
ref_id = text_refs.get(ref_ns)
if ref_id:
break
else:
logger.info('Could not find appropriate text refs')
return
ip = indra_db_rest.get_statements_for_paper([(ref_ns.lower(), ref_id)])
stmts = ip.statements
# Cut down evidences to ones just from this paper
for stmt in stmts:
stmt.evidence = [ev for ev in stmt.evidence if
ev.text_refs.get(ref_ns) == ref_id]
if pipeline:
ap = AssemblyPipeline(pipeline)
stmts = ap.run(stmts)
logger.info('Uploading %d statements to hypothes.is' % len(stmts))
for stmt in stmts:
upload_statement_annotation(stmt, annotate_agents=True)
|
Add tool for annotating a given paperimport logging
from indra.sources import indra_db_rest
from indra.pipeline import AssemblyPipeline
from indra.sources.hypothesis import upload_statement_annotation
ref_priority = ['TRID', 'PMCID', 'PMID']
logger = logging.getLogger(__name__)
def annotate_paper(text_refs, pipeline=None):
"""Upload INDRA Statements as annotations for a given paper.
Parameters
----------
text_refs : dict
A dict of text references, following the same format as
the INDRA Evidence text_refs attribute.
pipeline : Optional[json]
A list of pipeline steps (typically filters) that are applied
before uploading statements to hypothes.is as annotations.
"""
for ref_ns in ref_priority:
ref_id = text_refs.get(ref_ns)
if ref_id:
break
else:
logger.info('Could not find appropriate text refs')
return
ip = indra_db_rest.get_statements_for_paper([(ref_ns.lower(), ref_id)])
stmts = ip.statements
# Cut down evidences to ones just from this paper
for stmt in stmts:
stmt.evidence = [ev for ev in stmt.evidence if
ev.text_refs.get(ref_ns) == ref_id]
if pipeline:
ap = AssemblyPipeline(pipeline)
stmts = ap.run(stmts)
logger.info('Uploading %d statements to hypothes.is' % len(stmts))
for stmt in stmts:
upload_statement_annotation(stmt, annotate_agents=True)
|
<commit_before><commit_msg>Add tool for annotating a given paper<commit_after>import logging
from indra.sources import indra_db_rest
from indra.pipeline import AssemblyPipeline
from indra.sources.hypothesis import upload_statement_annotation
ref_priority = ['TRID', 'PMCID', 'PMID']
logger = logging.getLogger(__name__)
def annotate_paper(text_refs, pipeline=None):
"""Upload INDRA Statements as annotations for a given paper.
Parameters
----------
text_refs : dict
A dict of text references, following the same format as
the INDRA Evidence text_refs attribute.
pipeline : Optional[json]
A list of pipeline steps (typically filters) that are applied
before uploading statements to hypothes.is as annotations.
"""
for ref_ns in ref_priority:
ref_id = text_refs.get(ref_ns)
if ref_id:
break
else:
logger.info('Could not find appropriate text refs')
return
ip = indra_db_rest.get_statements_for_paper([(ref_ns.lower(), ref_id)])
stmts = ip.statements
# Cut down evidences to ones just from this paper
for stmt in stmts:
stmt.evidence = [ev for ev in stmt.evidence if
ev.text_refs.get(ref_ns) == ref_id]
if pipeline:
ap = AssemblyPipeline(pipeline)
stmts = ap.run(stmts)
logger.info('Uploading %d statements to hypothes.is' % len(stmts))
for stmt in stmts:
upload_statement_annotation(stmt, annotate_agents=True)
|
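A minimal call sketch for annotate_paper(); the PMID value is a placeholder and pipeline is left unset, so no post-assembly filtering is applied.
annotate_paper({'PMID': '12345678'})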
|
aec2ff67b3dbf2fa7770fc62b08659b51e6ed41e
|
tests/test_membrane.py
|
tests/test_membrane.py
|
from membrane import Membrane, Proxy
from rt import Wait
def test_membrane():
m1 = Membrane.create()
m1.transport = {'protocol': 'null',
'membrane': m1}
m2 = Membrane.create()
m2.transport = {'protocol': 'null',
'membrane': m1}
m1 << 'start'
m2 << 'start'
w = Wait.create()
actor1 = Wait.create()
actor2 = Wait.create()
m2 << {'get_uid': actor2,
'reply_to': w}
u2 = w.act()['uid']
m1 << {'create_proxy': u2,
'transport': {'protocol': 'null',
'membrane': m2},
'reply_to': w}
proxy = w.act()['proxy']
proxy << {'foo': 5,
'reply_to': actor1}
msg = actor2.act()
assert msg['foo'] == 5
proxy2 = msg['reply_to']
assert isinstance(proxy2, Proxy)
proxy2 << {'bar': 3,
'reply_to': actor2}
msg = actor1.act()
assert msg['bar'] == 3
assert msg['reply_to'] is proxy
|
Add test for membrane behavior
|
Add test for membrane behavior
|
Python
|
mit
|
waltermoreira/tartpy
|
Add test for membrane behavior
|
from membrane import Membrane, Proxy
from rt import Wait
def test_membrane():
m1 = Membrane.create()
m1.transport = {'protocol': 'null',
'membrane': m1}
m2 = Membrane.create()
m2.transport = {'protocol': 'null',
'membrane': m1}
m1 << 'start'
m2 << 'start'
w = Wait.create()
actor1 = Wait.create()
actor2 = Wait.create()
m2 << {'get_uid': actor2,
'reply_to': w}
u2 = w.act()['uid']
m1 << {'create_proxy': u2,
'transport': {'protocol': 'null',
'membrane': m2},
'reply_to': w}
proxy = w.act()['proxy']
proxy << {'foo': 5,
'reply_to': actor1}
msg = actor2.act()
assert msg['foo'] == 5
proxy2 = msg['reply_to']
assert isinstance(proxy2, Proxy)
proxy2 << {'bar': 3,
'reply_to': actor2}
msg = actor1.act()
assert msg['bar'] == 3
assert msg['reply_to'] is proxy
|
<commit_before><commit_msg>Add test for membrane behavior<commit_after>
|
from membrane import Membrane, Proxy
from rt import Wait
def test_membrane():
m1 = Membrane.create()
m1.transport = {'protocol': 'null',
'membrane': m1}
m2 = Membrane.create()
m2.transport = {'protocol': 'null',
'membrane': m1}
m1 << 'start'
m2 << 'start'
w = Wait.create()
actor1 = Wait.create()
actor2 = Wait.create()
m2 << {'get_uid': actor2,
'reply_to': w}
u2 = w.act()['uid']
m1 << {'create_proxy': u2,
'transport': {'protocol': 'null',
'membrane': m2},
'reply_to': w}
proxy = w.act()['proxy']
proxy << {'foo': 5,
'reply_to': actor1}
msg = actor2.act()
assert msg['foo'] == 5
proxy2 = msg['reply_to']
assert isinstance(proxy2, Proxy)
proxy2 << {'bar': 3,
'reply_to': actor2}
msg = actor1.act()
assert msg['bar'] == 3
assert msg['reply_to'] is proxy
|
Add test for membrane behaviorfrom membrane import Membrane, Proxy
from rt import Wait
def test_membrane():
m1 = Membrane.create()
m1.transport = {'protocol': 'null',
'membrane': m1}
m2 = Membrane.create()
m2.transport = {'protocol': 'null',
'membrane': m1}
m1 << 'start'
m2 << 'start'
w = Wait.create()
actor1 = Wait.create()
actor2 = Wait.create()
m2 << {'get_uid': actor2,
'reply_to': w}
u2 = w.act()['uid']
m1 << {'create_proxy': u2,
'transport': {'protocol': 'null',
'membrane': m2},
'reply_to': w}
proxy = w.act()['proxy']
proxy << {'foo': 5,
'reply_to': actor1}
msg = actor2.act()
assert msg['foo'] == 5
proxy2 = msg['reply_to']
assert isinstance(proxy2, Proxy)
proxy2 << {'bar': 3,
'reply_to': actor2}
msg = actor1.act()
assert msg['bar'] == 3
assert msg['reply_to'] is proxy
|
<commit_before><commit_msg>Add test for membrane behavior<commit_after>from membrane import Membrane, Proxy
from rt import Wait
def test_membrane():
m1 = Membrane.create()
m1.transport = {'protocol': 'null',
'membrane': m1}
m2 = Membrane.create()
m2.transport = {'protocol': 'null',
'membrane': m1}
m1 << 'start'
m2 << 'start'
w = Wait.create()
actor1 = Wait.create()
actor2 = Wait.create()
m2 << {'get_uid': actor2,
'reply_to': w}
u2 = w.act()['uid']
m1 << {'create_proxy': u2,
'transport': {'protocol': 'null',
'membrane': m2},
'reply_to': w}
proxy = w.act()['proxy']
proxy << {'foo': 5,
'reply_to': actor1}
msg = actor2.act()
assert msg['foo'] == 5
proxy2 = msg['reply_to']
assert isinstance(proxy2, Proxy)
proxy2 << {'bar': 3,
'reply_to': actor2}
msg = actor1.act()
assert msg['bar'] == 3
assert msg['reply_to'] is proxy
|
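The test is written for a pytest-style runner; a direct invocation could look like this (purely illustrative).
if __name__ == '__main__':
    test_membrane()
    print('membrane round-trip ok')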
|
af27946ce53405afa044de872d931c0b92a2af9b
|
providers/popularity/thepiratebay.py
|
providers/popularity/thepiratebay.py
|
from providers.popularity.provider import PopularityProvider
from utils.torrent_util import torrent_to_movie, remove_bad_torrent_matches
from urllib import quote
IDENTIFIER = "thepiratebay"
class Provider(PopularityProvider):
PAGES_TO_FETCH = 3
def get_popular(self):
names = []
query = quote("720p | 1080p | DVDRip")
for page in range(Provider.PAGES_TO_FETCH):
url = "https://thepiratebay.org/search/%s/%s/99/207" % (query, page)
# assert False, url
names += self.parse_html(url, ".detLink", cache=False)
movies = [torrent_to_movie(name) for name in names]
movies = remove_bad_torrent_matches(movies)
return movies
|
Add support for The Pirate Bay as popularity provider.
|
Add support for The Pirate Bay as popularity provider.
|
Python
|
mit
|
EmilStenstrom/nephele
|
Add support for The Pirate Bay as popularity provider.
|
from providers.popularity.provider import PopularityProvider
from utils.torrent_util import torrent_to_movie, remove_bad_torrent_matches
from urllib import quote
IDENTIFIER = "thepiratebay"
class Provider(PopularityProvider):
PAGES_TO_FETCH = 3
def get_popular(self):
names = []
query = quote("720p | 1080p | DVDRip")
for page in range(Provider.PAGES_TO_FETCH):
url = "https://thepiratebay.org/search/%s/%s/99/207" % (query, page)
# assert False, url
names += self.parse_html(url, ".detLink", cache=False)
movies = [torrent_to_movie(name) for name in names]
movies = remove_bad_torrent_matches(movies)
return movies
|
<commit_before><commit_msg>Add support for The Pirate Bay as popularity provider.<commit_after>
|
from providers.popularity.provider import PopularityProvider
from utils.torrent_util import torrent_to_movie, remove_bad_torrent_matches
from urllib import quote
IDENTIFIER = "thepiratebay"
class Provider(PopularityProvider):
PAGES_TO_FETCH = 3
def get_popular(self):
names = []
query = quote("720p | 1080p | DVDRip")
for page in range(Provider.PAGES_TO_FETCH):
url = "https://thepiratebay.org/search/%s/%s/99/207" % (query, page)
# assert False, url
names += self.parse_html(url, ".detLink", cache=False)
movies = [torrent_to_movie(name) for name in names]
movies = remove_bad_torrent_matches(movies)
return movies
|
Add support for The Pirate Bay as popularity provider.from providers.popularity.provider import PopularityProvider
from utils.torrent_util import torrent_to_movie, remove_bad_torrent_matches
from urllib import quote
IDENTIFIER = "thepiratebay"
class Provider(PopularityProvider):
PAGES_TO_FETCH = 3
def get_popular(self):
names = []
query = quote("720p | 1080p | DVDRip")
for page in range(Provider.PAGES_TO_FETCH):
url = "https://thepiratebay.org/search/%s/%s/99/207" % (query, page)
# assert False, url
names += self.parse_html(url, ".detLink", cache=False)
movies = [torrent_to_movie(name) for name in names]
movies = remove_bad_torrent_matches(movies)
return movies
|
<commit_before><commit_msg>Add support for The Pirate Bay as popularity provider.<commit_after>from providers.popularity.provider import PopularityProvider
from utils.torrent_util import torrent_to_movie, remove_bad_torrent_matches
from urllib import quote
IDENTIFIER = "thepiratebay"
class Provider(PopularityProvider):
PAGES_TO_FETCH = 3
def get_popular(self):
names = []
query = quote("720p | 1080p | DVDRip")
for page in range(Provider.PAGES_TO_FETCH):
url = "https://thepiratebay.org/search/%s/%s/99/207" % (query, page)
# assert False, url
names += self.parse_html(url, ".detLink", cache=False)
movies = [torrent_to_movie(name) for name in names]
movies = remove_bad_torrent_matches(movies)
return movies
|
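A usage sketch, assuming the PopularityProvider base class (not shown in this commit) needs no constructor arguments.
provider = Provider()
for movie in provider.get_popular():
    print(movie)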
|
88fc811658b002d0ef6f0cce070df9e8ef85e739
|
tweetyr.py
|
tweetyr.py
|
#!/usr/bin/env python
# -*- coding: UTF-8
'''
A simple twitter client that posts current weather to twitter
'''
import tweepy
import json
from urllib2 import urlopen
import os
root =os.path.dirname(os.path.abspath(__file__))
conf = json.loads(file(root+'/twitterconfig.json').read())
auth = tweepy.OAuthHandler(conf['consumerkey'], conf['consumersecret'])
auth.set_access_token(conf['accesstoken'], conf['accesssecret'])
api = tweepy.API(auth)
w = json.loads(urlopen(conf['apiurl']).read())[0]
api.update_status('%(outtemp).1f °C, %(windspeed).1f m/s vind, %(rain).1f mm nedbør' %w);
|
Add a simple tweeting weather bot
|
Add a simple tweeting weather bot
|
Python
|
bsd-3-clause
|
torhve/Amatyr,torhve/Amatyr,torhve/Amatyr
|
Add a simple tweeting weather bot
|
#!/usr/bin/env python
# -*- coding: UTF-8
'''
A simple twitter client that posts current weather to twitter
'''
import tweepy
import json
from urllib2 import urlopen
import os
root =os.path.dirname(os.path.abspath(__file__))
conf = json.loads(file(root+'/twitterconfig.json').read())
auth = tweepy.OAuthHandler(conf['consumerkey'], conf['consumersecret'])
auth.set_access_token(conf['accesstoken'], conf['accesssecret'])
api = tweepy.API(auth)
w = json.loads(urlopen(conf['apiurl']).read())[0]
api.update_status('%(outtemp).1f °C, %(windspeed).1f m/s vind, %(rain).1f mm nedbør' %w);
|
<commit_before><commit_msg>Add a simple tweeting weather bot<commit_after>
|
#!/usr/bin/env python
# -*- coding: UTF-8
'''
A simple twitter client that posts current weather to twitter
'''
import tweepy
import json
from urllib2 import urlopen
import os
root =os.path.dirname(os.path.abspath(__file__))
conf = json.loads(file(root+'/twitterconfig.json').read())
auth = tweepy.OAuthHandler(conf['consumerkey'], conf['consumersecret'])
auth.set_access_token(conf['accesstoken'], conf['accesssecret'])
api = tweepy.API(auth)
w = json.loads(urlopen(conf['apiurl']).read())[0]
api.update_status('%(outtemp).1f °C, %(windspeed).1f m/s vind, %(rain).1f mm nedbør' %w);
|
Add a simple tweeting weather bot#!/usr/bin/env python
# -*- coding: UTF-8
'''
A simple twitter client that posts current weather to twitter
'''
import tweepy
import json
from urllib2 import urlopen
import os
root =os.path.dirname(os.path.abspath(__file__))
conf = json.loads(file(root+'/twitterconfig.json').read())
auth = tweepy.OAuthHandler(conf['consumerkey'], conf['consumersecret'])
auth.set_access_token(conf['accesstoken'], conf['accesssecret'])
api = tweepy.API(auth)
w = json.loads(urlopen(conf['apiurl']).read())[0]
api.update_status('%(outtemp).1f °C, %(windspeed).1f m/s vind, %(rain).1f mm nedbør' %w);
|
<commit_before><commit_msg>Add a simple tweeting weather bot<commit_after>#!/usr/bin/env python
# -*- coding: UTF-8
'''
A simple twitter client that posts current weather to twitter
'''
import tweepy
import json
from urllib2 import urlopen
import os
root =os.path.dirname(os.path.abspath(__file__))
conf = json.loads(file(root+'/twitterconfig.json').read())
auth = tweepy.OAuthHandler(conf['consumerkey'], conf['consumersecret'])
auth.set_access_token(conf['accesstoken'], conf['accesssecret'])
api = tweepy.API(auth)
w = json.loads(urlopen(conf['apiurl']).read())[0]
api.update_status('%(outtemp).1f °C, %(windspeed).1f m/s vind, %(rain).1f mm nedbør' %w);
|
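The script expects a twitterconfig.json next to it; a placeholder file with the keys the code reads could be generated like this (all values are dummies).
import json
conf = {"consumerkey": "xxx", "consumersecret": "xxx",
        "accesstoken": "xxx", "accesssecret": "xxx",
        "apiurl": "http://example.org/api/current"}
with open("twitterconfig.json", "w") as f:
    json.dump(conf, f, indent=2)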
|
b08e60e57484810fc2fb695e5a8fc6aef7b8ea77
|
sst/create_filelist.py
|
sst/create_filelist.py
|
#!/usr/bin/env python
"""Create filelists to use for training and testing."""
import os
import json
from sklearn.cross_validation import train_test_split
path_data = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"image_2/")
files_data = [os.path.join(path_data, f)
for f in sorted(os.listdir(path_data))
if f.endswith('.png')]
path_gt = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"gt_image_2/")
files_gt = [os.path.join(path_gt, f)
for f in sorted(os.listdir(path_gt))
if f.endswith('.png')]
zipped = list(zip(files_data, files_gt))
train, test = train_test_split(zipped, random_state=0)
train_data = []
for el in train:
train_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
test_data = []
for el in test:
test_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
with open('trainfiles.json', 'w') as outfile:
json.dump(train_data, outfile, indent=4, separators=(',', ': '))
with open('testfiles.json', 'w') as outfile:
json.dump(test_data, outfile, indent=4, separators=(',', ': '))
|
Add utility function to create file list
|
Add utility function to create file list
|
Python
|
mit
|
MartinThoma/sst,MartinThoma/sst
|
Add utility function to create file list
|
#!/usr/bin/env python
"""Create filelists to use for training and testing."""
import os
import json
from sklearn.cross_validation import train_test_split
path_data = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"image_2/")
files_data = [os.path.join(path_data, f)
for f in sorted(os.listdir(path_data))
if f.endswith('.png')]
path_gt = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"gt_image_2/")
files_gt = [os.path.join(path_gt, f)
for f in sorted(os.listdir(path_gt))
if f.endswith('.png')]
zipped = list(zip(files_data, files_gt))
train, test = train_test_split(zipped, random_state=0)
train_data = []
for el in train:
train_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
test_data = []
for el in test:
test_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
with open('trainfiles.json', 'w') as outfile:
json.dump(train_data, outfile, indent=4, separators=(',', ': '))
with open('testfiles.json', 'w') as outfile:
json.dump(test_data, outfile, indent=4, separators=(',', ': '))
|
<commit_before><commit_msg>Add utility function to create file list<commit_after>
|
#!/usr/bin/env python
"""Create filelists to use for training and testing."""
import os
import json
from sklearn.cross_validation import train_test_split
path_data = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"image_2/")
files_data = [os.path.join(path_data, f)
for f in sorted(os.listdir(path_data))
if f.endswith('.png')]
path_gt = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"gt_image_2/")
files_gt = [os.path.join(path_gt, f)
for f in sorted(os.listdir(path_gt))
if f.endswith('.png')]
zipped = list(zip(files_data, files_gt))
train, test = train_test_split(zipped, random_state=0)
train_data = []
for el in train:
train_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
test_data = []
for el in test:
test_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
with open('trainfiles.json', 'w') as outfile:
json.dump(train_data, outfile, indent=4, separators=(',', ': '))
with open('testfiles.json', 'w') as outfile:
json.dump(test_data, outfile, indent=4, separators=(',', ': '))
|
Add utility function to create file list#!/usr/bin/env python
"""Create filelists to use for training and testing."""
import os
import json
from sklearn.cross_validation import train_test_split
path_data = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"image_2/")
files_data = [os.path.join(path_data, f)
for f in sorted(os.listdir(path_data))
if f.endswith('.png')]
path_gt = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"gt_image_2/")
files_gt = [os.path.join(path_gt, f)
for f in sorted(os.listdir(path_gt))
if f.endswith('.png')]
zipped = list(zip(files_data, files_gt))
train, test = train_test_split(zipped, random_state=0)
train_data = []
for el in train:
train_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
test_data = []
for el in test:
test_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
with open('trainfiles.json', 'w') as outfile:
json.dump(train_data, outfile, indent=4, separators=(',', ': '))
with open('testfiles.json', 'w') as outfile:
json.dump(test_data, outfile, indent=4, separators=(',', ': '))
|
<commit_before><commit_msg>Add utility function to create file list<commit_after>#!/usr/bin/env python
"""Create filelists to use for training and testing."""
import os
import json
from sklearn.cross_validation import train_test_split
path_data = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"image_2/")
files_data = [os.path.join(path_data, f)
for f in sorted(os.listdir(path_data))
if f.endswith('.png')]
path_gt = os.path.join(os.environ['DATA_PATH'],
'data_road/roadC621/',
"gt_image_2/")
files_gt = [os.path.join(path_gt, f)
for f in sorted(os.listdir(path_gt))
if f.endswith('.png')]
zipped = list(zip(files_data, files_gt))
train, test = train_test_split(zipped, random_state=0)
train_data = []
for el in train:
train_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
test_data = []
for el in test:
test_data.append({'raw': os.path.abspath(el[0]),
'mask': os.path.abspath(el[1])})
with open('trainfiles.json', 'w') as outfile:
json.dump(train_data, outfile, indent=4, separators=(',', ': '))
with open('testfiles.json', 'w') as outfile:
json.dump(test_data, outfile, indent=4, separators=(',', ': '))
|
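Reading the generated file lists back is straightforward; the paths assume the script was run in the current directory.
import json
with open('trainfiles.json') as f:
    train = json.load(f)
print(train[0]['raw'])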
|
f2ce38fd51706848814dc9a2f420776bcf8ebd3f
|
pwnedcheck/__init__.py
|
pwnedcheck/__init__.py
|
__author__ = 'Casey Dunham'
__version__ = "0.1.0"
import urllib
import urllib2
import json
PWNED_API_URL = "https://haveibeenpwned.com/api/breachedaccount/%s"
class InvalidEmail(Exception):
pass
def check(email):
req = urllib2.Request(PWNED_API_URL % urllib.quote(email))
try:
resp = urllib2.urlopen(req)
return json.loads(resp.read())
except urllib2.HTTPError, e:
if e.code == 400:
raise InvalidEmail("Email address does not appear to be a valid email")
return []
|
Check single email against api
|
Check single email against api
|
Python
|
mit
|
caseydunham/PwnedCheck
|
Check single email against api
|
__author__ = 'Casey Dunham'
__version__ = "0.1.0"
import urllib
import urllib2
import json
PWNED_API_URL = "https://haveibeenpwned.com/api/breachedaccount/%s"
class InvalidEmail(Exception):
pass
def check(email):
req = urllib2.Request(PWNED_API_URL % urllib.quote(email))
try:
resp = urllib2.urlopen(req)
return json.loads(resp.read())
except urllib2.HTTPError, e:
if e.code == 400:
raise InvalidEmail("Email address does not appear to be a valid email")
return []
|
<commit_before><commit_msg>Check single email against api<commit_after>
|
__author__ = 'Casey Dunham'
__version__ = "0.1.0"
import urllib
import urllib2
import json
PWNED_API_URL = "https://haveibeenpwned.com/api/breachedaccount/%s"
class InvalidEmail(Exception):
pass
def check(email):
req = urllib2.Request(PWNED_API_URL % urllib.quote(email))
try:
resp = urllib2.urlopen(req)
return json.loads(resp.read())
except urllib2.HTTPError, e:
if e.code == 400:
raise InvalidEmail("Email address does not appear to be a valid email")
return []
|
Check single email against api__author__ = 'Casey Dunham'
__version__ = "0.1.0"
import urllib
import urllib2
import json
PWNED_API_URL = "https://haveibeenpwned.com/api/breachedaccount/%s"
class InvalidEmail(Exception):
pass
def check(email):
req = urllib2.Request(PWNED_API_URL % urllib.quote(email))
try:
resp = urllib2.urlopen(req)
return json.loads(resp.read())
except urllib2.HTTPError, e:
if e.code == 400:
raise InvalidEmail("Email address does not appear to be a valid email")
return []
|
<commit_before><commit_msg>Check single email against api<commit_after>__author__ = 'Casey Dunham'
__version__ = "0.1.0"
import urllib
import urllib2
import json
PWNED_API_URL = "https://haveibeenpwned.com/api/breachedaccount/%s"
class InvalidEmail(Exception):
pass
def check(email):
req = urllib2.Request(PWNED_API_URL % urllib.quote(email))
try:
resp = urllib2.urlopen(req)
return json.loads(resp.read())
except urllib2.HTTPError, e:
if e.code == 400:
raise InvalidEmail("Email address does not appear to be a valid email")
return []
|
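A quick usage sketch for check(); the address is a placeholder, and InvalidEmail is raised for malformed input as defined above.
breaches = check('someone@example.com')
print(len(breaches))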
|
33a0e85ef52bd13a407f17aaacb37d4081343c0e
|
data/3_2_make_json_for_each_pheno.py
|
data/3_2_make_json_for_each_pheno.py
|
#!/usr/bin/env python2
from __future__ import print_function, division, absolute_import
import gzip
import glob
import heapq
import re
import os.path
import json
def parse_marker_id(marker_id):
chr1, pos1, ref, alt, chr2, pos2 = re.match(r'([^:]+):([0-9]+)_([-ATCG]+)/([-ATCG]+)_([^:]+):([0-9]+)', marker_id).groups()
assert chr1 == chr2
assert pos1 == pos2
return chr1, int(pos1), ref, alt
files_to_convert = glob.glob('/var/pheweb_data/gwas-single-pheno/*.vcf.gz')
for filename in files_to_convert:
basename = os.path.basename(filename)
dest_filename = '/var/pheweb_data/gwas-json/{}.json'.format(basename[:-len('.vcf.gz')])  # rstrip('.vcf.gz') would strip a char set, not the suffix
if os.path.exists(dest_filename):
continue
print('{} -> {}'.format(filename, dest_filename))
with gzip.open(filename) as f:
variants = (line.rstrip('\n').split('\t') for line in f)
variants = ((v[0], int(v[1]), v[2], float(v[3]), float(v[4]), float(v[5])) for v in variants) # chrom, pos, marker_id, maf, pval, beta
top_variants = heapq.nsmallest(2000, variants, key=lambda v:v[4])
rv = []
for variant in top_variants:
chrom, pos, ref, alt = parse_marker_id(variant[2])
assert chrom == variant[0]
assert pos == variant[1]
rv.append({
'chrom': chrom,
'pos': pos,
'ref': ref,
'alt': alt,
'maf': variant[3],
'pval': variant[4],
# TODO: include beta
})
with open(dest_filename, 'w') as f:
json.dump(rv, f, sort_keys=True, indent=0)
|
Make GWAS json for each pheno
|
Make GWAS json for each pheno
|
Python
|
agpl-3.0
|
statgen/pheweb,statgen/pheweb,statgen/pheweb,statgen/pheweb,statgen/pheweb
|
Make GWAS json for each pheno
|
#!/usr/bin/env python2
from __future__ import print_function, division, absolute_import
import gzip
import glob
import heapq
import re
import os.path
import json
def parse_marker_id(marker_id):
chr1, pos1, ref, alt, chr2, pos2 = re.match(r'([^:]+):([0-9]+)_([-ATCG]+)/([-ATCG]+)_([^:]+):([0-9]+)', marker_id).groups()
assert chr1 == chr2
assert pos1 == pos2
return chr1, int(pos1), ref, alt
files_to_convert = glob.glob('/var/pheweb_data/gwas-single-pheno/*.vcf.gz')
for filename in files_to_convert:
basename = os.path.basename(filename)
dest_filename = '/var/pheweb_data/gwas-json/{}.json'.format(basename[:-len('.vcf.gz')])  # rstrip('.vcf.gz') would strip a char set, not the suffix
if os.path.exists(dest_filename):
continue
print('{} -> {}'.format(filename, dest_filename))
with gzip.open(filename) as f:
variants = (line.rstrip('\n').split('\t') for line in f)
variants = ((v[0], int(v[1]), v[2], float(v[3]), float(v[4]), float(v[5])) for v in variants) # chrom, pos, marker_id, maf, pval, beta
top_variants = heapq.nsmallest(2000, variants, key=lambda v:v[4])
rv = []
for variant in top_variants:
chrom, pos, ref, alt = parse_marker_id(variant[2])
assert chrom == variant[0]
assert pos == variant[1]
rv.append({
'chrom': chrom,
'pos': pos,
'ref': ref,
'alt': alt,
'maf': variant[3],
'pval': variant[4],
# TODO: include beta
})
with open(dest_filename, 'w') as f:
json.dump(rv, f, sort_keys=True, indent=0)
|
<commit_before><commit_msg>Make GWAS json for each pheno<commit_after>
|
#!/usr/bin/env python2
from __future__ import print_function, division, absolute_import
import gzip
import glob
import heapq
import re
import os.path
import json
def parse_marker_id(marker_id):
chr1, pos1, ref, alt, chr2, pos2 = re.match(r'([^:]+):([0-9]+)_([-ATCG]+)/([-ATCG]+)_([^:]+):([0-9]+)', marker_id).groups()
assert chr1 == chr2
assert pos1 == pos2
return chr1, int(pos1), ref, alt
files_to_convert = glob.glob('/var/pheweb_data/gwas-single-pheno/*.vcf.gz')
for filename in files_to_convert:
basename = os.path.basename(filename)
dest_filename = '/var/pheweb_data/gwas-json/{}.json'.format(basename[:-len('.vcf.gz')])  # rstrip('.vcf.gz') would strip a char set, not the suffix
if os.path.exists(dest_filename):
continue
print('{} -> {}'.format(filename, dest_filename))
with gzip.open(filename) as f:
variants = (line.rstrip('\n').split('\t') for line in f)
variants = ((v[0], int(v[1]), v[2], float(v[3]), float(v[4]), float(v[5])) for v in variants) # chrom, pos, marker_id, maf, pval, beta
top_variants = heapq.nsmallest(2000, variants, key=lambda v:v[4])
rv = []
for variant in top_variants:
chrom, pos, ref, alt = parse_marker_id(variant[2])
assert chrom == variant[0]
assert pos == variant[1]
rv.append({
'chrom': chrom,
'pos': pos,
'ref': ref,
'alt': alt,
'maf': variant[3],
'pval': variant[4],
# TODO: include beta
})
with open(dest_filename, 'w') as f:
json.dump(rv, f, sort_keys=True, indent=0)
|
Make GWAS json for each pheno#!/usr/bin/env python2
from __future__ import print_function, division, absolute_import
import gzip
import glob
import heapq
import re
import os.path
import json
def parse_marker_id(marker_id):
chr1, pos1, ref, alt, chr2, pos2 = re.match(r'([^:]+):([0-9]+)_([-ATCG]+)/([-ATCG]+)_([^:]+):([0-9]+)', marker_id).groups()
assert chr1 == chr2
assert pos1 == pos2
return chr1, int(pos1), ref, alt
files_to_convert = glob.glob('/var/pheweb_data/gwas-single-pheno/*.vcf.gz')
for filename in files_to_convert:
basename = os.path.basename(filename)
dest_filename = '/var/pheweb_data/gwas-json/{}.json'.format(basename[:-len('.vcf.gz')])  # rstrip('.vcf.gz') would strip a char set, not the suffix
if os.path.exists(dest_filename):
continue
print('{} -> {}'.format(filename, dest_filename))
with gzip.open(filename) as f:
variants = (line.rstrip('\n').split('\t') for line in f)
variants = ((v[0], int(v[1]), v[2], float(v[3]), float(v[4]), float(v[5])) for v in variants) # chrom, pos, marker_id, maf, pval, beta
top_variants = heapq.nsmallest(2000, variants, key=lambda v:v[4])
rv = []
for variant in top_variants:
chrom, pos, ref, alt = parse_marker_id(variant[2])
assert chrom == variant[0]
assert pos == variant[1]
rv.append({
'chrom': chrom,
'pos': pos,
'ref': ref,
'alt': alt,
'maf': variant[3],
'pval': variant[4],
# TODO: include beta
})
with open(dest_filename, 'w') as f:
json.dump(rv, f, sort_keys=True, indent=0)
|
<commit_before><commit_msg>Make GWAS json for each pheno<commit_after>#!/usr/bin/env python2
from __future__ import print_function, division, absolute_import
import gzip
import glob
import heapq
import re
import os.path
import json
def parse_marker_id(marker_id):
chr1, pos1, ref, alt, chr2, pos2 = re.match(r'([^:]+):([0-9]+)_([-ATCG]+)/([-ATCG]+)_([^:]+):([0-9]+)', marker_id).groups()
assert chr1 == chr2
assert pos1 == pos2
return chr1, int(pos1), ref, alt
files_to_convert = glob.glob('/var/pheweb_data/gwas-single-pheno/*.vcf.gz')
for filename in files_to_convert:
basename = os.path.basename(filename)
dest_filename = '/var/pheweb_data/gwas-json/{}.json'.format(basename[:-len('.vcf.gz')])  # rstrip('.vcf.gz') would strip a char set, not the suffix
if os.path.exists(dest_filename):
continue
print('{} -> {}'.format(filename, dest_filename))
with gzip.open(filename) as f:
variants = (line.rstrip('\n').split('\t') for line in f)
variants = ((v[0], int(v[1]), v[2], float(v[3]), float(v[4]), float(v[5])) for v in variants) # chrom, pos, marker_id, maf, pval, beta
top_variants = heapq.nsmallest(2000, variants, key=lambda v:v[4])
rv = []
for variant in top_variants:
chrom, pos, ref, alt = parse_marker_id(variant[2])
assert chrom == variant[0]
assert pos == variant[1]
rv.append({
'chrom': chrom,
'pos': pos,
'ref': ref,
'alt': alt,
'maf': variant[3],
'pval': variant[4],
# TODO: include beta
})
with open(dest_filename, 'w') as f:
json.dump(rv, f, sort_keys=True, indent=0)
|
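A tiny check of parse_marker_id() against the marker format the regex expects (values are illustrative).
chrom, pos, ref, alt = parse_marker_id('12:12345_A/G_12:12345')
assert (chrom, pos, ref, alt) == ('12', 12345, 'A', 'G')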
|
5f2e986a9dbc8cf82e55fd711dbe9931b4b3edc4
|
link_in_global_module.py
|
link_in_global_module.py
|
# Link in a module in the global Python site-packages into the virtualenv that we are currently in
# Author: Luke Macken <lmacken@redhat.com>
import os
import sys
from glob import glob
from distutils.sysconfig import get_python_lib
def symlink_global_module_into_virtualenv(modulename, env):
for path in (get_python_lib(), get_python_lib(1)):
for modpath in glob(os.path.join(path, modulename) + '*'):
dest = os.path.join(env, path.replace('/usr/', ''), os.path.basename(modpath))
if os.path.exists(dest):
assert os.path.islink(dest)
else:
print "%s => %s" % (modpath, dest)
os.symlink(modpath, dest)
if __name__ == '__main__':
if len(sys.argv) != 3:
print "Usage: %s <virtualenv> <modulename>" % sys.argv[0]
sys.exit(2)
env = os.environ.get('VIRTUAL_ENV')
if env:
print "You must deactivate your virtualenv first"
sys.exit(1)
prog, env, module = sys.argv
symlink_global_module_into_virtualenv(module, env)
|
Add a tool for linking in global python modules into our virtualenv
|
Add a tool for linking in global python modules into our virtualenv
|
Python
|
agpl-3.0
|
Fale/fedora-packages,fedora-infra/fedora-packages,Fale/fedora-packages,fedora-infra/fedora-packages,fedora-infra/fedora-packages,Fale/fedora-packages,fedora-infra/fedora-packages
|
Add a tool for linking in global python modules into our virtualenv
|
# Link in a module in the global Python site-packages into the virtualenv that we are currently in
# Author: Luke Macken <lmacken@redhat.com>
import os
import sys
from glob import glob
from distutils.sysconfig import get_python_lib
def symlink_global_module_into_virtualenv(modulename, env):
for path in (get_python_lib(), get_python_lib(1)):
for modpath in glob(os.path.join(path, modulename) + '*'):
dest = os.path.join(env, path.replace('/usr/', ''), os.path.basename(modpath))
if os.path.exists(dest):
assert os.path.islink(dest)
else:
print "%s => %s" % (modpath, dest)
os.symlink(modpath, dest)
if __name__ == '__main__':
if len(sys.argv) != 3:
print "Usage: %s <virtualenv> <modulename>" % sys.argv[0]
sys.exit(2)
env = os.environ.get('VIRTUAL_ENV')
if env:
print "You must deactivate your virtualenv first"
sys.exit(1)
prog, env, module = sys.argv
symlink_global_module_into_virtualenv(module, env)
|
<commit_before><commit_msg>Add a tool for linking in global python modules into our virtualenv<commit_after>
|
# Link in a module in the global Python site-packages into the virtualenv that we are currently in
# Author: Luke Macken <lmacken@redhat.com>
import os
import sys
from glob import glob
from distutils.sysconfig import get_python_lib
def symlink_global_module_into_virtualenv(modulename, env):
for path in (get_python_lib(), get_python_lib(1)):
for modpath in glob(os.path.join(path, modulename) + '*'):
dest = os.path.join(env, path.replace('/usr/', ''), os.path.basename(modpath))
if os.path.exists(dest):
assert os.path.islink(dest)
else:
print "%s => %s" % (modpath, dest)
os.symlink(modpath, dest)
if __name__ == '__main__':
if len(sys.argv) != 3:
print "Usage: %s <virtualenv> <modulename>" % sys.argv[0]
sys.exit(2)
env = os.environ.get('VIRTUAL_ENV')
if env:
print "You must deactivate your virtualenv first"
sys.exit(1)
prog, env, module = sys.argv
symlink_global_module_into_virtualenv(module, env)
|
Add a tool for linking in global python modules into our virtualenv# Link in a module in the global Python site-packages into the virtualenv that we are currently in
# Author: Luke Macken <lmacken@redhat.com>
import os
import sys
from glob import glob
from distutils.sysconfig import get_python_lib
def symlink_global_module_into_virtualenv(modulename, env):
for path in (get_python_lib(), get_python_lib(1)):
for modpath in glob(os.path.join(path, modulename) + '*'):
dest = os.path.join(env, path.replace('/usr/', ''), os.path.basename(modpath))
if os.path.exists(dest):
assert os.path.islink(dest)
else:
print "%s => %s" % (modpath, dest)
os.symlink(modpath, dest)
if __name__ == '__main__':
if len(sys.argv) != 3:
print "Usage: %s <virtualenv> <modulename>" % sys.argv[0]
sys.exit(2)
env = os.environ.get('VIRTUAL_ENV')
if env:
print "You must deactivate your virtualenv first"
sys.exit(1)
prog, env, module = sys.argv
symlink_global_module_into_virtualenv(module, env)
|
<commit_before><commit_msg>Add a tool for linking in global python modules into our virtualenv<commit_after># Link in a module in the global Python site-packages into the virtualenv that we are currently in
# Author: Luke Macken <lmacken@redhat.com>
import os
import sys
from glob import glob
from distutils.sysconfig import get_python_lib
def symlink_global_module_into_virtualenv(modulename, env):
for path in (get_python_lib(), get_python_lib(1)):
for modpath in glob(os.path.join(path, modulename) + '*'):
dest = os.path.join(env, path.replace('/usr/', ''), os.path.basename(modpath))
if os.path.exists(dest):
assert os.path.islink(dest)
else:
print "%s => %s" % (modpath, dest)
os.symlink(modpath, dest)
if __name__ == '__main__':
if len(sys.argv) != 3:
print "Usage: %s <virtualenv> <modulename>" % sys.argv[0]
sys.exit(2)
env = os.environ.get('VIRTUAL_ENV')
if env:
print "You must deactivate your virtualenv first"
sys.exit(1)
prog, env, module = sys.argv
symlink_global_module_into_virtualenv(module, env)
|
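Besides the CLI, the helper can be called directly; both arguments below are placeholders.
symlink_global_module_into_virtualenv('yum', '/srv/virtualenvs/myapp')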
|
9daceaf8cad088a1507af7ce1503f15bc619695d
|
panoptes/panoptes.py
|
panoptes/panoptes.py
|
#!/usr/bin/env python
import ephem
import panoptes.utils.logger as logger
import panoptes.observatory as observatory
class Panoptes:
"""
Sets up logger, reads config file and starts up application.
"""
def __init__(self):
# Setup utils
self.logger = logger.Logger()
self.logger.info('Initializing panoptes')
# Create our observatory, which does the bulk of the work
# NOTE: Here we would pass in config options
self.observatory = observatory.Observatory( logger=self.logger )
def start_session(self):
"""
Main starting point for panoptes application
"""
self.observatory.start_observing()
if __name__ == '__main__':
panoptes = Panoptes()
panoptes.logger.info("Panoptes created. Starting session")
# panoptes.start_session()
|
Simplify Panoptes class so that site belongs to observatory. We need to read in a config here.
|
Simplify Panoptes class so that site belongs to observatory. We need to read in a config here.
|
Python
|
mit
|
Guokr1991/POCS,panoptes/POCS,joshwalawender/POCS,panoptes/POCS,joshwalawender/POCS,fmin2958/POCS,Guokr1991/POCS,fmin2958/POCS,joshwalawender/POCS,AstroHuntsman/POCS,Guokr1991/POCS,AstroHuntsman/POCS,panoptes/POCS,panoptes/POCS,fmin2958/POCS,Guokr1991/POCS,AstroHuntsman/POCS,AstroHuntsman/POCS
|
Simplify Panoptes class so that site belongs to observatory. We need to read in a config here.
|
#!/usr/bin/env python
import ephem
import panoptes.utils.logger as logger
import panoptes.observatory as observatory
class Panoptes:
"""
Sets up logger, reads config file and starts up application.
"""
def __init__(self):
# Setup utils
self.logger = logger.Logger()
self.logger.info('Initializing panoptes')
# Create our observatory, which does the bulk of the work
# NOTE: Here we would pass in config options
self.observatory = observatory.Observatory( logger=self.logger )
def start_session(self):
"""
Main starting point for panoptes application
"""
self.observatory.start_observing()
if __name__ == '__main__':
panoptes = Panoptes()
panoptes.logger.info("Panoptes created. Starting session")
# panoptes.start_session()
|
<commit_before><commit_msg>Simplify Panoptes class so that site belongs to observatory. We need to read in a config here.<commit_after>
|
#!/usr/bin/env python
import ephem
import panoptes.utils.logger as logger
import panoptes.observatory as observatory
class Panoptes:
"""
Sets up logger, reads config file and starts up application.
"""
def __init__(self):
# Setup utils
self.logger = logger.Logger()
self.logger.info('Initializing panoptes')
# Create our observatory, which does the bulk of the work
# NOTE: Here we would pass in config options
self.observatory = observatory.Observatory( logger=self.logger )
def start_session(self):
"""
Main starting point for panoptes application
"""
self.observatory.start_observing()
if __name__ == '__main__':
panoptes = Panoptes()
panoptes.logger.info("Panoptes created. Starting session")
# panoptes.start_session()
|
Simplify Panoptes class so that site belongs to observatory. We need to read in a config here.#!/usr/bin/env python
import ephem
import panoptes.utils.logger as logger
import panoptes.observatory as observatory
class Panoptes:
"""
Sets up logger, reads config file and starts up application.
"""
def __init__(self):
# Setup utils
self.logger = logger.Logger()
self.logger.info('Initializing panoptes')
# Create our observatory, which does the bulk of the work
# NOTE: Here we would pass in config options
self.observatory = observatory.Observatory( logger=self.logger )
def start_session(self):
"""
Main starting point for panoptes application
"""
self.observatory.start_observing()
if __name__ == '__main__':
panoptes = Panoptes()
panoptes.logger.info("Panoptes created. Starting session")
# panoptes.start_session()
|
<commit_before><commit_msg>Simplify Panoptes class so that site belongs to observatory. We need to read in a config here.<commit_after>#!/usr/bin/env python
import ephem
import panoptes.utils.logger as logger
import panoptes.observatory as observatory
class Panoptes:
"""
Sets up logger, reads config file and starts up application.
"""
def __init__(self):
# Setup utils
self.logger = logger.Logger()
self.logger.info('Initializing panoptes')
# Create our observatory, which does the bulk of the work
# NOTE: Here we would pass in config options
self.observatory = observatory.Observatory( logger=self.logger )
def start_session(self):
"""
Main starting point for panoptes application
"""
self.observatory.start_observing()
if __name__ == '__main__':
panoptes = Panoptes()
panoptes.logger.info("Panoptes created. Starting session")
# panoptes.start_session()
|
|
1f7bb3ae05ef00a78a579553ac2f1954ae32b991
|
payment_ogone_compassion/migrations/11.0.1.0.0/post-migration.py
|
payment_ogone_compassion/migrations/11.0.1.0.0/post-migration.py
|
from openupgradelib import openupgrade
@openupgrade.migrate()
def migrate(env, version):
if not version:
return
# Update payment_acquirers
env.cr.execute("""
select id from ir_ui_view where name = 'ogone_acquirer_button'
""")
old_view_id = env.cr.fetchone()[0]
new_view_id = env.ref('payment_ogone.ogone_form').id
env['payment.acquirer'].search([
('view_template_id', '=', old_view_id)
]).write({'view_template_id': new_view_id})
|
Add migrations for payment acquirers
|
Add migrations for payment acquirers
|
Python
|
agpl-3.0
|
CompassionCH/compassion-switzerland,eicher31/compassion-switzerland,CompassionCH/compassion-switzerland,eicher31/compassion-switzerland,eicher31/compassion-switzerland,CompassionCH/compassion-switzerland
|
Add migrations for payment acquirers
|
from openupgradelib import openupgrade
@openupgrade.migrate()
def migrate(env, version):
if not version:
return
# Update payment_acquirers
env.cr.execute("""
select id from ir_ui_view where name = 'ogone_acquirer_button'
""")
old_view_id = env.cr.fetchone()[0]
new_view_id = env.ref('payment_ogone.ogone_form').id
env['payment.acquirer'].search([
('view_template_id', '=', old_view_id)
]).write({'view_template_id': new_view_id})
|
<commit_before><commit_msg>Add migrations for payment acquirers<commit_after>
|
from openupgradelib import openupgrade
@openupgrade.migrate()
def migrate(env, version):
if not version:
return
# Update payment_acquirers
env.cr.execute("""
select id from ir_ui_view where name = 'ogone_acquirer_button'
""")
old_view_id = env.cr.fetchone()[0]
new_view_id = env.ref('payment_ogone.ogone_form').id
env['payment.acquirer'].search([
('view_template_id', '=', old_view_id)
]).write({'view_template_id': new_view_id})
|
Add migrations for payment acquirersfrom openupgradelib import openupgrade
@openupgrade.migrate()
def migrate(env, version):
if not version:
return
# Update payment_acquirers
env.cr.execute("""
select id from ir_ui_view where name = 'ogone_acquirer_button'
""")
old_view_id = env.cr.fetchone()[0]
new_view_id = env.ref('payment_ogone.ogone_form').id
env['payment.acquirer'].search([
('view_template_id', '=', old_view_id)
]).write({'view_template_id': new_view_id})
|
<commit_before><commit_msg>Add migrations for payment acquirers<commit_after>from openupgradelib import openupgrade
@openupgrade.migrate()
def migrate(env, version):
if not version:
return
# Update payment_acquirers
env.cr.execute("""
select id from ir_ui_view where name = 'ogone_acquirer_button'
""")
old_view_id = env.cr.fetchone()[0]
new_view_id = env.ref('payment_ogone.ogone_form').id
env['payment.acquirer'].search([
('view_template_id', '=', old_view_id)
]).write({'view_template_id': new_view_id})
|
|
7e4dd4bc4cfdea5f1e2872b3348b760473a3b2ab
|
Problems/uniqueChars.py
|
Problems/uniqueChars.py
|
#!/Applications/anaconda/envs/Python3/bin
def main():
# Test suite
strings = [None, '', 'Young Frankenstein', 'Yodeling cat']
results = [False, True, False, True]
for i, s in enumerate(strings):
if has_unique_chars(s) == results[i]:
print('PASSED test case: {} returned {}'.format(s, results[i]))
else:
print('FAILED test case: {} should return {}'.format(s, results[i]))
return 0
def has_unique_chars(string):
'''
Determines if given string has unique characters (is case sensitive)
Input: string
Output: Boolean
'''
if string is None:
return False
char_set = set()
for c in string:
if c in char_set:
return False
else:
char_set.add(c)
return True
if __name__ == '__main__':
main()
|
Add unique characters algorithm and tests
|
Add unique characters algorithm and tests
|
Python
|
mit
|
HKuz/Test_Code
|
Add unique characters algorithm and tests
|
#!/Applications/anaconda/envs/Python3/bin
def main():
# Test suite
strings = [None, '', 'Young Frankenstein', 'Yodeling cat']
results = [False, True, False, True]
for i, s in enumerate(strings):
if has_unique_chars(s) == results[i]:
print('PASSED test case: {} returned {}'.format(s, results[i]))
else:
print('FAILED test case: {} should return {}'.format(s, results[i]))
return 0
def has_unique_chars(string):
'''
Determines if given string has unique characters (is case sensitive)
Input: string
Output: Boolean
'''
if string is None:
return False
char_set = set()
for c in string:
if c in char_set:
return False
else:
char_set.add(c)
return True
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add unique characters algorithm and tests<commit_after>
|
#!/Applications/anaconda/envs/Python3/bin
def main():
# Test suite
strings = [None, '', 'Young Frankenstein', 'Yodeling cat']
results = [False, True, False, True]
for i, s in enumerate(strings):
if has_unique_chars(s) == results[i]:
print('PASSED test case: {} returned {}'.format(s, results[i]))
else:
print('FAILED test case: {} should return {}'.format(s, results[i]))
return 0
def has_unique_chars(string):
'''
Determines if given string has unique characters (is case sensitive)
Input: string
Output: Boolean
'''
if string is None:
return False
char_set = set()
for c in string:
if c in char_set:
return False
else:
char_set.add(c)
return True
if __name__ == '__main__':
main()
|
Add unique characters algorithm and tests#!/Applications/anaconda/envs/Python3/bin
def main():
# Test suite
strings = [None, '', 'Young Frankenstein', 'Yodeling cat']
results = [False, True, False, True]
for i, s in enumerate(strings):
if has_unique_chars(s) == results[i]:
print('PASSED test case: {} returned {}'.format(s, results[i]))
else:
print('FAILED test case: {} should return {}'.format(s, results[i]))
return 0
def has_unique_chars(string):
'''
Determines if given string has unique characters (is case sensitive)
Input: string
Output: Boolean
'''
if string is None:
return False
char_set = set()
for c in string:
if c in char_set:
return False
else:
char_set.add(c)
return True
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add unique characters algorithm and tests<commit_after>#!/Applications/anaconda/envs/Python3/bin
def main():
# Test suite
strings = [None, '', 'Young Frankenstein', 'Yodeling cat']
results = [False, True, False, True]
for i, s in enumerate(strings):
if has_unique_chars(s) == results[i]:
print('PASSED test case: {} returned {}'.format(s, results[i]))
else:
print('FAILED test case: {} should return {}'.format(s, results[i]))
return 0
def has_unique_chars(string):
'''
Determines if given string has unique characters (is case sensitive)
Input: string
Output: Boolean
'''
if string is None:
return False
char_set = set()
for c in string:
if c in char_set:
return False
else:
char_set.add(c)
return True
if __name__ == '__main__':
main()
|
|
0994f60728a19a14628ca2e4544693d1ea918126
|
examples/li_to_hdf5.py
|
examples/li_to_hdf5.py
|
#!/usr/bin/env python
import sys
from datetime import datetime
import pymoku.dataparser
import h5py
if len(sys.argv) != 3:
print "Usage: li_to_csv.py infile.li outfile.hd5"
exit(1)
reader = pymoku.dataparser.LIDataFileReader(sys.argv[1])
writer = h5py.File(sys.argv[2], 'w')
ncols = reader.nch
set_name = 'moku:datalog'
# Start with storage for 100 items, it'll be expanded as we add data. We don't know the
# length of the data set to begin with.
writer.create_dataset(set_name, (100,ncols), maxshape=(None,ncols))
writer[set_name].attrs['timestep'] = reader.deltat
writer[set_name].attrs['start_secs'] = reader.starttime
writer[set_name].attrs['start_time'] = datetime.fromtimestamp(reader.starttime).strftime('%c')
writer[set_name].attrs['instrument'] = reader.instr
writer[set_name].attrs['instrument_version'] = reader.instrv
i = 0
for record in reader:
curlen = len(writer[set_name])
if curlen <= i:
# Exponential allocation strategy, works fairly well for different sized files.
# We truncate to the correct length at the end anyway.
writer[set_name].resize((2*curlen, ncols))
writer[set_name][i,:] = record[:ncols]
i += 1
# Truncate the file to the correct length
writer[set_name].resize((i, ncols))
writer.close()
|
Add HDF5 data file writer to examples
|
HD5: Add HDF5 data file writer to examples
|
Python
|
mit
|
benizl/pymoku,liquidinstruments/pymoku
|
HD5: Add HDF5 data file writer to examples
|
#!/usr/bin/env python
import sys
from datetime import datetime
import pymoku.dataparser
import h5py
if len(sys.argv) != 3:
print "Usage: li_to_csv.py infile.li outfile.hd5"
exit(1)
reader = pymoku.dataparser.LIDataFileReader(sys.argv[1])
writer = h5py.File(sys.argv[2], 'w')
ncols = reader.nch
set_name = 'moku:datalog'
# Start with storage for 100 items, it'll be expanded as we add data. We don't know the
# length of the data set to begin with.
writer.create_dataset(set_name, (100,ncols), maxshape=(None,ncols))
writer[set_name].attrs['timestep'] = reader.deltat
writer[set_name].attrs['start_secs'] = reader.starttime
writer[set_name].attrs['start_time'] = datetime.fromtimestamp(reader.starttime).strftime('%c')
writer[set_name].attrs['instrument'] = reader.instr
writer[set_name].attrs['instrument_version'] = reader.instrv
i = 0
for record in reader:
curlen = len(writer[set_name])
if curlen <= i:
# Exponential allocation strategy, works fairly well for different sized files.
# We truncate to the correct length at the end anyway.
writer[set_name].resize((2*curlen, ncols))
writer[set_name][i,:] = record[:ncols]
i += 1
# Truncate the file to the correct length
writer[set_name].resize((i, ncols))
writer.close()
|
<commit_before><commit_msg>HD5: Add HDF5 data file writer to examples<commit_after>
|
#!/usr/bin/env python
import sys
from datetime import datetime
import pymoku.dataparser
import h5py
if len(sys.argv) != 3:
print "Usage: li_to_csv.py infile.li outfile.hd5"
exit(1)
reader = pymoku.dataparser.LIDataFileReader(sys.argv[1])
writer = h5py.File(sys.argv[2], 'w')
ncols = reader.nch
set_name = 'moku:datalog'
# Start with storage for 100 items, it'll be expanded as we add data. We don't know the
# length of the data set to begin with.
writer.create_dataset(set_name, (100,ncols), maxshape=(None,ncols))
writer[set_name].attrs['timestep'] = reader.deltat
writer[set_name].attrs['start_secs'] = reader.starttime
writer[set_name].attrs['start_time'] = datetime.fromtimestamp(reader.starttime).strftime('%c')
writer[set_name].attrs['instrument'] = reader.instr
writer[set_name].attrs['instrument_version'] = reader.instrv
i = 0
for record in reader:
curlen = len(writer[set_name])
if curlen <= i:
# Exponential allocation strategy, works fairly well for different sized files.
# We truncate to the correct length at the end anyway.
writer[set_name].resize((2*curlen, ncols))
writer[set_name][i,:] = record[:ncols]
i += 1
# Truncate the file to the correct length
writer[set_name].resize((i, ncols))
writer.close()
|
HD5: Add HDF5 data file writer to examples#!/usr/bin/env python
import sys
from datetime import datetime
import pymoku.dataparser
import h5py
if len(sys.argv) != 3:
print "Usage: li_to_csv.py infile.li outfile.hd5"
exit(1)
reader = pymoku.dataparser.LIDataFileReader(sys.argv[1])
writer = h5py.File(sys.argv[2], 'w')
ncols = reader.nch
set_name = 'moku:datalog'
# Start with storage for 100 items, it'll be expanded as we add data. We don't know the
# length of the data set to begin with.
writer.create_dataset(set_name, (100,ncols), maxshape=(None,ncols))
writer[set_name].attrs['timestep'] = reader.deltat
writer[set_name].attrs['start_secs'] = reader.starttime
writer[set_name].attrs['start_time'] = datetime.fromtimestamp(reader.starttime).strftime('%c')
writer[set_name].attrs['instrument'] = reader.instr
writer[set_name].attrs['instrument_version'] = reader.instrv
i = 0
for record in reader:
curlen = len(writer[set_name])
if curlen <= i:
# Exponential allocation strategy, works fairly well for different sized files.
# We truncate to the correct length at the end anyway.
writer[set_name].resize((2*curlen, ncols))
writer[set_name][i,:] = record[:ncols]
i += 1
# Truncate the file to the correct length
writer[set_name].resize((i, ncols))
writer.close()
|
<commit_before><commit_msg>HD5: Add HDF5 data file writer to examples<commit_after>#!/usr/bin/env python
import sys
from datetime import datetime
import pymoku.dataparser
import h5py
if len(sys.argv) != 3:
print "Usage: li_to_csv.py infile.li outfile.hd5"
exit(1)
reader = pymoku.dataparser.LIDataFileReader(sys.argv[1])
writer = h5py.File(sys.argv[2], 'w')
ncols = reader.nch
set_name = 'moku:datalog'
# Start with storage for 100 items, it'll be expanded as we add data. We don't know the
# length of the data set to begin with.
writer.create_dataset(set_name, (100,ncols), maxshape=(None,ncols))
writer[set_name].attrs['timestep'] = reader.deltat
writer[set_name].attrs['start_secs'] = reader.starttime
writer[set_name].attrs['start_time'] = datetime.fromtimestamp(reader.starttime).strftime('%c')
writer[set_name].attrs['instrument'] = reader.instr
writer[set_name].attrs['instrument_version'] = reader.instrv
i = 0
for record in reader:
curlen = len(writer[set_name])
if curlen <= i:
# Exponential allocation strategy, works fairly well for different sized files.
# We truncate to the correct length at the end anyway.
writer[set_name].resize((2*curlen, ncols))
writer[set_name][i,:] = record[:ncols]
i += 1
# Truncate the file to the correct length
writer[set_name].resize((i, ncols))
writer.close()
|
|
9858c56188f4d6c81daf6535e7cd58ff23e20712
|
application/senic/nuimo_hub/tests/test_setup_wifi.py
|
application/senic/nuimo_hub/tests/test_setup_wifi.py
|
import pytest
from mock import patch
@pytest.fixture
def url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, url):
assert browser.get_json(url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, url):
assert browser.get_json(url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, url, mocked_run, settings):
browser.post_json(url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
|
import pytest
from mock import patch
@pytest.fixture
def setup_url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, setup_url):
assert browser.get_json(setup_url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, setup_url):
assert browser.get_json(setup_url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, setup_url, mocked_run, settings):
browser.post_json(setup_url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
|
Make `url` fixture less generic
|
Make `url` fixture less generic
in preparation for additional endpoints
|
Python
|
mit
|
grunskis/nuimo-hub-backend,grunskis/nuimo-hub-backend,getsenic/senic-hub,grunskis/senic-hub,grunskis/senic-hub,grunskis/nuimo-hub-backend,grunskis/senic-hub,grunskis/senic-hub,getsenic/senic-hub,grunskis/senic-hub,grunskis/nuimo-hub-backend,grunskis/senic-hub,grunskis/nuimo-hub-backend
|
import pytest
from mock import patch
@pytest.fixture
def url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, url):
assert browser.get_json(url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, url):
assert browser.get_json(url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, url, mocked_run, settings):
browser.post_json(url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
Make `url` fixture less generic
in preparation for additional endpoints
|
import pytest
from mock import patch
@pytest.fixture
def setup_url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, setup_url):
assert browser.get_json(setup_url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, setup_url):
assert browser.get_json(setup_url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, setup_url, mocked_run, settings):
browser.post_json(setup_url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
|
<commit_before>import pytest
from mock import patch
@pytest.fixture
def url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, url):
assert browser.get_json(url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, url):
assert browser.get_json(url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, url, mocked_run, settings):
browser.post_json(url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
<commit_msg>Make `url` fixture less generic
in preparation for additional endpoints<commit_after>
|
import pytest
from mock import patch
@pytest.fixture
def setup_url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, setup_url):
assert browser.get_json(setup_url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, setup_url):
assert browser.get_json(setup_url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, setup_url, mocked_run, settings):
browser.post_json(setup_url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
|
import pytest
from mock import patch
@pytest.fixture
def url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, url):
assert browser.get_json(url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, url):
assert browser.get_json(url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, url, mocked_run, settings):
browser.post_json(url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
Make `url` fixture less generic
in preparation for additional endpointsimport pytest
from mock import patch
@pytest.fixture
def setup_url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, setup_url):
assert browser.get_json(setup_url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, setup_url):
assert browser.get_json(setup_url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, setup_url, mocked_run, settings):
browser.post_json(setup_url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
|
<commit_before>import pytest
from mock import patch
@pytest.fixture
def url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, url):
assert browser.get_json(url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, url):
assert browser.get_json(url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, url, mocked_run, settings):
browser.post_json(url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
<commit_msg>Make `url` fixture less generic
in preparation for additional endpoints<commit_after>import pytest
from mock import patch
@pytest.fixture
def setup_url(route_url):
return route_url('wifi_setup')
def test_get_scanned_wifi(browser, setup_url):
assert browser.get_json(setup_url).json == ['grandpausethisnetwork']
@pytest.fixture
def no_such_wifi(settings):
settings['wifi_networks_path'] = '/no/such/file'
return settings
def test_get_scanned_wifi_empty(no_such_wifi, browser, setup_url):
assert browser.get_json(setup_url).json == []
@pytest.yield_fixture(autouse=True)
def mocked_run(request):
"""don't run actual external commands during these tests
"""
with patch('senic.nuimo_hub.views.setup_wifi.run')\
as mocked_run:
yield mocked_run
def test_join_wifi(browser, setup_url, mocked_run, settings):
browser.post_json(setup_url, dict(
ssid='grandpausethisnetwork',
password='foobar',
device='wlan0')).json
mocked_run.assert_called_once_with(
[
'sudo',
'%s/join_wifi' % settings['bin_path'],
'-c {fs_config_ini}'.format(**settings),
'grandpausethisnetwork',
'foobar',
]
)
|
ca53296d4f7af651905657d533ac18517531ca32
|
pybloom/slidingwindow.py
|
pybloom/slidingwindow.py
|
import time
from collections import deque
from pybloom import ScalableBloomFilter
class DecayScalableBloomFilter(ScalableBloomFilter):
'''
Stepwise decaying Bloom Filter
'''
def __init__(self, initial_capacity=1000, error_rate=0.01, window_period = 60):
super(DecayScalableBloomFilter, self).__init__(initial_capacity, error_rate)
self.window_period = 60
self.timestamp = time.time()
self._expired = False
@property
def expired(self):
if time.time() - self.timestamp > self.window_period:
self._expired = True
return self._expired
class SlidingWindowScalableBloomFilter(object):
'''
Sliding Window Bloom Filter using a coarse expiration
'''
VALID_RES = {'Min': 60,
'Hour': 3600,
'Day': 86400}
def __init__(self, initial_capacity=1000, window_period = "10_Min"):
self.initial_capacity = initial_capacity
self.error_rate = 0.01
self._setup_window_period(window_period)
def _setup_window_period(self, window_period):
try:
self.amount, self.res = window_period.split('_')
self.amount = int(self.amount)
except ValueError:
raise Exception('Invalid window period')
self.window_period = SlidingWindowScalableBloomFilter.VALID_RES[self.res]
self.filters = deque(maxlen = self.amount)
def total_error(self):
'''
Return the total error: temporal error + native bf error
'''
temporal_error = float(1.0 / self.amount)
total_error = self.error_rate + temporal_error
return total_error
def __contains__(self, key):
for f in reversed(self.filters):
if key in f:
return True
return False
def check_expiration(self):
filter = self.filters[0]
if filter.expired:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
def add(self, key):
self.check_expiration()
if key in self:
return True
if not self.filters:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
else:
filter = self.filters[-1]
filter.add(key)
return False
|
Add simple SlidingWindowScalableBloomFilter for combating overengineering habit
|
Add simple SlidingWindowScalableBloomFilter for combating overengineering habit
|
Python
|
mit
|
Parsely/python-bloomfilter
|
Add simple SlidingWindowScalableBloomFilter for combating overengineering habit
|
import time
from collections import deque
from pybloom import ScalableBloomFilter
class DecayScalableBloomFilter(ScalableBloomFilter):
'''
Stepwise decaying Bloom Filter
'''
def __init__(self, initial_capacity=1000, error_rate=0.01, window_period = 60):
super(DecayScalableBloomFilter, self).__init__(initial_capacity, error_rate)
self.window_period = 60
self.timestamp = time.time()
self._expired = False
@property
def expired(self):
if time.time() - self.timestamp > self.window_period:
self._expired = True
return self._expired
class SlidingWindowScalableBloomFilter(object):
'''
Sliding Window Bloom Filter using a coarse expiration
'''
VALID_RES = {'Min': 60,
'Hour': 3600,
'Day': 86400}
def __init__(self, initial_capacity=1000, window_period = "10_Min"):
self.initial_capacity = initial_capacity
self.error_rate = 0.01
self._setup_window_period(window_period)
def _setup_window_period(self, window_period):
try:
self.amount, self.res = window_period.split('_')
self.amount = int(self.amount)
except ValueError:
raise Exception('Invalid window period')
self.window_period = SlidingWindowScalableBloomFilter.VALID_RES[self.res]
self.filters = deque(maxlen = self.amount)
def total_error(self):
'''
Return the total error: temporal error + native bf error
'''
temporal_error = float(1.0 / self.amount)
total_error = self.error_rate + temporal_error
return total_error
def __contains__(self, key):
for f in reversed(self.filters):
if key in f:
return True
return False
def check_expiration(self):
filter = self.filters[0]
if filter.expired:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
def add(self, key):
self.check_expiration()
if key in self:
return True
if not self.filters:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
else:
filter = self.filters[-1]
filter.add(key)
return False
|
<commit_before><commit_msg>Add simple SlidingWindowScalableBloomFilter for combating overengineering habit<commit_after>
|
import time
from collections import deque
from pybloom import ScalableBloomFilter
class DecayScalableBloomFilter(ScalableBloomFilter):
'''
Stepwise decaying Bloom Filter
'''
def __init__(self, initial_capacity=1000, error_rate=0.01, window_period = 60):
super(DecayScalableBloomFilter, self).__init__(initial_capacity, error_rate)
self.window_period = 60
self.timestamp = time.time()
self._expired = False
@property
def expired(self):
if time.time() - self.timestamp > self.window_period:
self._expired = True
return self._expired
class SlidingWindowScalableBloomFilter(object):
'''
Sliding Window Bloom Filter using a coarse expiration
'''
VALID_RES = {'Min': 60,
'Hour': 3600,
'Day': 86400}
def __init__(self, initial_capacity=1000, window_period = "10_Min"):
self.initial_capacity = initial_capacity
self.error_rate = 0.01
self._setup_window_period(window_period)
def _setup_window_period(self, window_period):
try:
self.amount, self.res = window_period.split('_')
self.amount = int(self.amount)
except ValueError:
raise Exception('Invalid window period')
self.window_period = SlidingWindowScalableBloomFilter.VALID_RES[self.res]
self.filters = deque(maxlen = self.amount)
def total_error(self):
'''
Return the total error: temporal error + native bf error
'''
temporal_error = float(1.0 / self.amount)
total_error = self.error_rate + temporal_error
return total_error
def __contains__(self, key):
for f in reversed(self.filters):
if key in f:
return True
return False
def check_expiration(self):
filter = self.filters[0]
if filter.expired:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
def add(self, key):
self.check_expiration()
if key in self:
return True
if not self.filters:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
else:
filter = self.filters[-1]
filter.add(key)
return False
|
Add simple SlidingWindowScalableBloomFilter for combating overengineering habitimport time
from collections import deque
from pybloom import ScalableBloomFilter
class DecayScalableBloomFilter(ScalableBloomFilter):
'''
Stepwise decaying Bloom Filter
'''
def __init__(self, initial_capacity=1000, error_rate=0.01, window_period = 60):
super(DecayScalableBloomFilter, self).__init__(initial_capacity, error_rate)
self.window_period = 60
self.timestamp = time.time()
self._expired = False
@property
def expired(self):
if time.time() - self.timestamp > self.window_period:
self._expired = True
return self._expired
class SlidingWindowScalableBloomFilter(object):
'''
Sliding Window Bloom Filter using a coarse expiration
'''
VALID_RES = {'Min': 60,
'Hour': 3600,
'Day': 86400}
def __init__(self, initial_capacity=1000, window_period = "10_Min"):
self.initial_capacity = initial_capacity
self.error_rate = 0.01
self._setup_window_period(window_period)
def _setup_window_period(self, window_period):
try:
self.amount, self.res = window_period.split('_')
self.amount = int(self.amount)
except ValueError:
raise Exception('Invalid window period')
self.window_period = SlidingWindowScalableBloomFilter.VALID_RES[self.res]
self.filters = deque(maxlen = self.amount)
def total_error(self):
'''
Return the total error: temporal error + native bf error
'''
temporal_error = float(1.0 / self.amount)
total_error = self.error_rate + temporal_error
return total_error
def __contains__(self, key):
for f in reversed(self.filters):
if key in f:
return True
return False
def check_expiration(self):
filter = self.filters[0]
if filter.expired:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
def add(self, key):
self.check_expiration()
if key in self:
return True
if not self.filters:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
else:
filter = self.filters[-1]
filter.add(key)
return False
|
<commit_before><commit_msg>Add simple SlidingWindowScalableBloomFilter for combating overengineering habit<commit_after>import time
from collections import deque
from pybloom import ScalableBloomFilter
class DecayScalableBloomFilter(ScalableBloomFilter):
'''
Stepwise decaying Bloom Filter
'''
def __init__(self, initial_capacity=1000, error_rate=0.01, window_period = 60):
super(DecayScalableBloomFilter, self).__init__(initial_capacity, error_rate)
self.window_period = 60
self.timestamp = time.time()
self._expired = False
@property
def expired(self):
if time.time() - self.timestamp > self.window_period:
self._expired = True
return self._expired
class SlidingWindowScalableBloomFilter(object):
'''
Sliding Window Bloom Filter using a coarse expiration
'''
VALID_RES = {'Min': 60,
'Hour': 3600,
'Day': 86400}
def __init__(self, initial_capacity=1000, window_period = "10_Min"):
self.initial_capacity = initial_capacity
self.error_rate = 0.01
self._setup_window_period(window_period)
def _setup_window_period(self, window_period):
try:
self.amount, self.res = window_period.split('_')
self.amount = int(self.amount)
except ValueError:
raise Exception('Invalid window period')
self.window_period = SlidingWindowScalableBloomFilter.VALID_RES[self.res]
self.filters = deque(maxlen = self.amount)
def total_error(self):
'''
Return the total error: temporal error + native bf error
'''
temporal_error = float(1.0 / self.amount)
total_error = self.error_rate + temporal_error
return total_error
def __contains__(self, key):
for f in reversed(self.filters):
if key in f:
return True
return False
def check_expiration(self):
filter = self.filters[0]
if filter.expired:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
def add(self, key):
self.check_expiration()
if key in self:
return True
if not self.filters:
filter = DecayScalableBloomFilter(initial_capacity=self.initial_capacity,
error_rate=self.error_rate,
window_period=self.window_period)
self.filters.append(filter)
else:
filter = self.filters[-1]
filter.add(key)
return False
|
|
e7e30eaebf3075df3c965d700352506f52be10ef
|
test.py
|
test.py
|
import usb.core
import usb.util
import binascii,time
reset=[binascii.unhexlify('06cb150010000000')]
rainb=[binascii.unhexlify('06cd080800000000')]
color=[binascii.unhexlify('06cd080100000000'), #red
binascii.unhexlify('06cd080200000000'),#yellow
binascii.unhexlify('06cd080300000000'),#green
binascii.unhexlify('06cd080400000000'),#cyan
binascii.unhexlify('06cd080500000000'),#blue
binascii.unhexlify('06cd080600000000'),#pink
binascii.unhexlify('06cd080700000000')]#white
# find our device
dev = usb.core.find(idVendor=0x1a81, idProduct=0x220a)
#06:cd:08:03:00:00:00:00
# was it found?
if dev is None:
raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
#print dir(dev)
interface=[]
for i in range(0,2):
if dev.is_kernel_driver_active(i) is True:
print "Kernel driver is in use by "+str(i)
interface.append(i)
dev.detach_kernel_driver(i)
#print dev.get_active_configuration()
dev.set_configuration()
for i in range(0,8):
dev.ctrl_transfer(0x21,9,0x0306,1,color[i])
time.sleep(1)
#dev.attach_kernel_driver(0)
dev.reset()
|
Test file that "programs" the mouse.It switches between colors and can reset the programming
|
Test file that "programs" the mouse.It switches between colors and can reset the programming
|
Python
|
mit
|
BlackLotus/mx1200py
|
Test file that "programs" the mouse.It switches between colors and can reset the programming
|
import usb.core
import usb.util
import binascii,time
reset=[binascii.unhexlify('06cb150010000000')]
rainb=[binascii.unhexlify('06cd080800000000')]
color=[binascii.unhexlify('06cd080100000000'), #red
binascii.unhexlify('06cd080200000000'),#yellow
binascii.unhexlify('06cd080300000000'),#green
binascii.unhexlify('06cd080400000000'),#cyan
binascii.unhexlify('06cd080500000000'),#blue
binascii.unhexlify('06cd080600000000'),#pink
binascii.unhexlify('06cd080700000000')]#white
# find our device
dev = usb.core.find(idVendor=0x1a81, idProduct=0x220a)
#06:cd:08:03:00:00:00:00
# was it found?
if dev is None:
raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
#print dir(dev)
interface=[]
for i in range(0,2):
if dev.is_kernel_driver_active(i) is True:
print "Kernel driver is in use by "+str(i)
interface.append(i)
dev.detach_kernel_driver(i)
#print dev.get_active_configuration()
dev.set_configuration()
for i in range(0,8):
dev.ctrl_transfer(0x21,9,0x0306,1,color[i])
time.sleep(1)
#dev.attach_kernel_driver(0)
dev.reset()
|
<commit_before><commit_msg>Test file that "programs" the mouse. It switches between colors and can reset the programming<commit_after>
|
import usb.core
import usb.util
import binascii,time
reset=[binascii.unhexlify('06cb150010000000')]
rainb=[binascii.unhexlify('06cd080800000000')]
color=[binascii.unhexlify('06cd080100000000'), #red
binascii.unhexlify('06cd080200000000'),#yellow
binascii.unhexlify('06cd080300000000'),#green
binascii.unhexlify('06cd080400000000'),#cyan
binascii.unhexlify('06cd080500000000'),#blue
binascii.unhexlify('06cd080600000000'),#pink
binascii.unhexlify('06cd080700000000')]#white
# find our device
dev = usb.core.find(idVendor=0x1a81, idProduct=0x220a)
#06:cd:08:03:00:00:00:00
# was it found?
if dev is None:
raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
#print dir(dev)
interface=[]
for i in range(0,2):
if dev.is_kernel_driver_active(i) is True:
print "Kernel driver is in use by "+str(i)
interface.append(i)
dev.detach_kernel_driver(i)
#print dev.get_active_configuration()
dev.set_configuration()
for i in range(0,8):
dev.ctrl_transfer(0x21,9,0x0306,1,color[i])
time.sleep(1)
#dev.attach_kernel_driver(0)
dev.reset()
|
Test file that "programs" the mouse.It switches between colors and can reset the programmingimport usb.core
import usb.util
import binascii,time
reset=[binascii.unhexlify('06cb150010000000')]
rainb=[binascii.unhexlify('06cd080800000000')]
color=[binascii.unhexlify('06cd080100000000'), #red
binascii.unhexlify('06cd080200000000'),#yellow
binascii.unhexlify('06cd080300000000'),#green
binascii.unhexlify('06cd080400000000'),#cyan
binascii.unhexlify('06cd080500000000'),#blue
binascii.unhexlify('06cd080600000000'),#pink
binascii.unhexlify('06cd080700000000')]#white
# find our device
dev = usb.core.find(idVendor=0x1a81, idProduct=0x220a)
#06:cd:08:03:00:00:00:00
# was it found?
if dev is None:
raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
#print dir(dev)
interface=[]
for i in range(0,2):
if dev.is_kernel_driver_active(i) is True:
print "Kernel driver is in use by "+str(i)
interface.append(i)
dev.detach_kernel_driver(i)
#print dev.get_active_configuration()
dev.set_configuration()
for i in range(0,8):
dev.ctrl_transfer(0x21,9,0x0306,1,color[i])
time.sleep(1)
#dev.attach_kernel_driver(0)
dev.reset()
|
<commit_before><commit_msg>Test file that "programs" the mouse. It switches between colors and can reset the programming<commit_after>import usb.core
import usb.util
import binascii,time
reset=[binascii.unhexlify('06cb150010000000')]
rainb=[binascii.unhexlify('06cd080800000000')]
color=[binascii.unhexlify('06cd080100000000'), #red
binascii.unhexlify('06cd080200000000'),#yellow
binascii.unhexlify('06cd080300000000'),#green
binascii.unhexlify('06cd080400000000'),#cyan
binascii.unhexlify('06cd080500000000'),#blue
binascii.unhexlify('06cd080600000000'),#pink
binascii.unhexlify('06cd080700000000')]#white
# find our device
dev = usb.core.find(idVendor=0x1a81, idProduct=0x220a)
#06:cd:08:03:00:00:00:00
# was it found?
if dev is None:
raise ValueError('Device not found')
# set the active configuration. With no arguments, the first
# configuration will be the active one
#print dir(dev)
interface=[]
for i in range(0,2):
if dev.is_kernel_driver_active(i) is True:
print "Kernel driver is in use by "+str(i)
interface.append(i)
dev.detach_kernel_driver(i)
#print dev.get_active_configuration()
dev.set_configuration()
for i in range(0,8):
dev.ctrl_transfer(0x21,9,0x0306,1,color[i])
time.sleep(1)
#dev.attach_kernel_driver(0)
dev.reset()
|
|
8d0443e7423a480c7001d0f9b59af4dc903166b3
|
nodeconductor/cost_tracking/migrations/0023_consumptiondetails.py
|
nodeconductor/cost_tracking/migrations/0023_consumptiondetails.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
import jsonfield.fields
import model_utils.fields
import uuidfield.fields
class Migration(migrations.Migration):
dependencies = [
('cost_tracking', '0022_priceestimate_leafs'),
]
operations = [
migrations.CreateModel(
name='ConsumptionDetails',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, verbose_name='created', editable=False)),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, verbose_name='modified', editable=False)),
('uuid', uuidfield.fields.UUIDField(unique=True, max_length=32, editable=False, blank=True)),
('configuration', jsonfield.fields.JSONField(default={}, help_text='Current resource configuration.')),
('last_update_time', models.DateTimeField(help_text='Last configuration change time.')),
('consumed_before_update', jsonfield.fields.JSONField(default={}, help_text='How many consumables were used by resource before last update.')),
('price_estimate', models.OneToOneField(related_name='consumption_details', to='cost_tracking.PriceEstimate')),
],
options={
'abstract': False,
},
),
]
|
Add DB migration for ConsumptionDetails
|
Add DB migration for ConsumptionDetails
- nc-1521
|
Python
|
mit
|
opennode/nodeconductor,opennode/nodeconductor,opennode/nodeconductor
|
Add DB migration for ConsumptionDetails
- nc-1521
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
import jsonfield.fields
import model_utils.fields
import uuidfield.fields
class Migration(migrations.Migration):
dependencies = [
('cost_tracking', '0022_priceestimate_leafs'),
]
operations = [
migrations.CreateModel(
name='ConsumptionDetails',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, verbose_name='created', editable=False)),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, verbose_name='modified', editable=False)),
('uuid', uuidfield.fields.UUIDField(unique=True, max_length=32, editable=False, blank=True)),
('configuration', jsonfield.fields.JSONField(default={}, help_text='Current resource configuration.')),
('last_update_time', models.DateTimeField(help_text='Last configuration change time.')),
('consumed_before_update', jsonfield.fields.JSONField(default={}, help_text='How many consumables were used by resource before last update.')),
('price_estimate', models.OneToOneField(related_name='consumption_details', to='cost_tracking.PriceEstimate')),
],
options={
'abstract': False,
},
),
]
|
<commit_before><commit_msg>Add DB migration for ConsumptionDetails
- nc-1521<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
import jsonfield.fields
import model_utils.fields
import uuidfield.fields
class Migration(migrations.Migration):
dependencies = [
('cost_tracking', '0022_priceestimate_leafs'),
]
operations = [
migrations.CreateModel(
name='ConsumptionDetails',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, verbose_name='created', editable=False)),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, verbose_name='modified', editable=False)),
('uuid', uuidfield.fields.UUIDField(unique=True, max_length=32, editable=False, blank=True)),
('configuration', jsonfield.fields.JSONField(default={}, help_text='Current resource configuration.')),
('last_update_time', models.DateTimeField(help_text='Last configuration change time.')),
('consumed_before_update', jsonfield.fields.JSONField(default={}, help_text='How many consumables were used by resource before last update.')),
('price_estimate', models.OneToOneField(related_name='consumption_details', to='cost_tracking.PriceEstimate')),
],
options={
'abstract': False,
},
),
]
|
Add DB migration for ConsumptionDetails
- nc-1521# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
import jsonfield.fields
import model_utils.fields
import uuidfield.fields
class Migration(migrations.Migration):
dependencies = [
('cost_tracking', '0022_priceestimate_leafs'),
]
operations = [
migrations.CreateModel(
name='ConsumptionDetails',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, verbose_name='created', editable=False)),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, verbose_name='modified', editable=False)),
('uuid', uuidfield.fields.UUIDField(unique=True, max_length=32, editable=False, blank=True)),
('configuration', jsonfield.fields.JSONField(default={}, help_text='Current resource configuration.')),
('last_update_time', models.DateTimeField(help_text='Last configuration change time.')),
('consumed_before_update', jsonfield.fields.JSONField(default={}, help_text='How many consumables were used by resource before last update.')),
('price_estimate', models.OneToOneField(related_name='consumption_details', to='cost_tracking.PriceEstimate')),
],
options={
'abstract': False,
},
),
]
|
<commit_before><commit_msg>Add DB migration for ConsumptionDetails
- nc-1521<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
import jsonfield.fields
import model_utils.fields
import uuidfield.fields
class Migration(migrations.Migration):
dependencies = [
('cost_tracking', '0022_priceestimate_leafs'),
]
operations = [
migrations.CreateModel(
name='ConsumptionDetails',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', model_utils.fields.AutoCreatedField(default=django.utils.timezone.now, verbose_name='created', editable=False)),
('modified', model_utils.fields.AutoLastModifiedField(default=django.utils.timezone.now, verbose_name='modified', editable=False)),
('uuid', uuidfield.fields.UUIDField(unique=True, max_length=32, editable=False, blank=True)),
('configuration', jsonfield.fields.JSONField(default={}, help_text='Current resource configuration.')),
('last_update_time', models.DateTimeField(help_text='Last configuration change time.')),
('consumed_before_update', jsonfield.fields.JSONField(default={}, help_text='How many consumables were used by resource before last update.')),
('price_estimate', models.OneToOneField(related_name='consumption_details', to='cost_tracking.PriceEstimate')),
],
options={
'abstract': False,
},
),
]
|
|
75a9315ac2bffdc4b9f22d1ea6184a369e4ddec3
|
project/scripts/get_context_data.py
|
project/scripts/get_context_data.py
|
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python3
import sys
from google.cloud import datastore
from fetch_trends import get_updated_daily_data
from database_updates import update_investment_database
if __name__ == "__main__":
arguments = sys.argv
start_date = int(arguments[0])
google_search = arguments[1]
# Instantiates a client
datastore_client = datastore.Client()
# Retrieve up to date trends data for each search term
daily_data = get_updated_daily_data(start_date, google_search)
    # Add up to date data to datastore
update_investment_database(daily_data, datastore_client)
|
Add script for fetching context data on a new investment
|
Add script for fetching context data on a new investment
|
Python
|
apache-2.0
|
googleinterns/sgonks,googleinterns/sgonks,googleinterns/sgonks,googleinterns/sgonks
|
Add script for fetching context data on a new investment
|
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python3
import sys
from google.cloud import datastore
from fetch_trends import get_updated_daily_data
from database_updates import update_investment_database
if __name__ == "__main__":
arguments = sys.argv
start_date = int(arguments[0])
google_search = arguments[1]
# Instantiates a client
datastore_client = datastore.Client()
# Retrieve up to date trends data for each search term
daily_data = get_updated_daily_data(start_date, google_search)
    # Add up to date data to datastore
update_investment_database(daily_data, datastore_client)
|
<commit_before><commit_msg>Add script for fetching context data on a new investment<commit_after>
|
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python3
import sys
from google.cloud import datastore
from fetch_trends import get_updated_daily_data
from database_updates import update_investment_database
if __name__ == "__main__":
arguments = sys.argv
start_date = int(arguments[0])
google_search = arguments[1]
# Instantiates a client
datastore_client = datastore.Client()
# Retrieve up to date trends data for each search term
daily_data = get_updated_daily_data(start_date, google_search)
    # Add up to date data to datastore
update_investment_database(daily_data, datastore_client)
|
Add script for fetching context data on a new investment# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python3
import sys
from google.cloud import datastore
from fetch_trends import get_updated_daily_data
from database_updates import update_investment_database
if __name__ == "__main__":
arguments = sys.argv
start_date = int(arguments[0])
google_search = arguments[1]
# Instantiates a client
datastore_client = datastore.Client()
# Retrieve up to date trends data for each search term
daily_data = get_updated_daily_data(start_date, google_search)
    # Add up to date data to datastore
update_investment_database(daily_data, datastore_client)
|
<commit_before><commit_msg>Add script for fetching context data on a new investment<commit_after># Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/env python3
import sys
from google.cloud import datastore
from fetch_trends import get_updated_daily_data
from database_updates import update_investment_database
if __name__ == "__main__":
arguments = sys.argv
start_date = int(arguments[0])
google_search = arguments[1]
# Instantiates a client
datastore_client = datastore.Client()
# Retrieve up to date trends data for each search term
daily_data = get_updated_daily_data(start_date, google_search)
    # Add up to date data to datastore
update_investment_database(daily_data, datastore_client)
|
|
66c5285cc0b5d9e29dcd511114060ffc9f17fdb1
|
volunteering/migrations/0005_volunteeradded.py
|
volunteering/migrations/0005_volunteeradded.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('volunteering', '0004_auto_20150415_2058'),
]
operations = [
migrations.CreateModel(
name='VolunteerAdded',
fields=[
],
options={
'proxy': True,
'verbose_name_plural': 'Recently added volunteers',
},
bases=('volunteering.volunteer',),
),
]
|
Migrate for proxy model (should be noop)
|
Migrate for proxy model (should be noop)
|
Python
|
agpl-3.0
|
jesseh/dothis,jesseh/dothis,jesseh/dothis,jesseh/dothis
|
Migrate for proxy model (should be noop)
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('volunteering', '0004_auto_20150415_2058'),
]
operations = [
migrations.CreateModel(
name='VolunteerAdded',
fields=[
],
options={
'proxy': True,
'verbose_name_plural': 'Recently added volunteers',
},
bases=('volunteering.volunteer',),
),
]
|
<commit_before><commit_msg>Migrate for proxy model (should be noop)<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('volunteering', '0004_auto_20150415_2058'),
]
operations = [
migrations.CreateModel(
name='VolunteerAdded',
fields=[
],
options={
'proxy': True,
'verbose_name_plural': 'Recently added volunteers',
},
bases=('volunteering.volunteer',),
),
]
|
Migrate for proxy model (should be noop)# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('volunteering', '0004_auto_20150415_2058'),
]
operations = [
migrations.CreateModel(
name='VolunteerAdded',
fields=[
],
options={
'proxy': True,
'verbose_name_plural': 'Recently added volunteers',
},
bases=('volunteering.volunteer',),
),
]
|
<commit_before><commit_msg>Migrate for proxy model (should be noop)<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('volunteering', '0004_auto_20150415_2058'),
]
operations = [
migrations.CreateModel(
name='VolunteerAdded',
fields=[
],
options={
'proxy': True,
'verbose_name_plural': 'Recently added volunteers',
},
bases=('volunteering.volunteer',),
),
]
|
|
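The migration above only registers a proxy model, which is why it is expected to be a no-op at the database level. A rough sketch of the model it corresponds to, assuming a plain Volunteer base model in volunteering/models.py, looks like this:
# Illustrative sketch only; the real VolunteerAdded proxy in volunteering/models.py may differ.
from volunteering.models import Volunteer

class VolunteerAdded(Volunteer):
    class Meta:
        proxy = True
        verbose_name_plural = 'Recently added volunteers'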
eeb84ac27b924903c10f6a9a1169e57b481256be
|
tests/Settings/TestExtruderStack.py
|
tests/Settings/TestExtruderStack.py
|
# Copyright (c) 2017 Ultimaker B.V.
# Cura is released under the terms of the AGPLv3 or higher.
import pytest #This module contains automated tests.
import unittest.mock #For the mocking and monkeypatching functionality.
import cura.Settings.ExtruderStack #The module we're testing.
from cura.Settings.Exceptions import InvalidOperationError #To check whether the correct exceptions are raised.
## An empty extruder stack to test with.
@pytest.fixture()
def extruder_stack() -> cura.Settings.ExtruderStack.ExtruderStack:
return cura.Settings.ExtruderStack.ExtruderStack
## Tests whether adding a container is properly forbidden.
def test_addContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.addContainer(unittest.mock.MagicMock())
## Tests whether inserting a container is properly forbidden.
def test_insertContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.insertContainer(0, unittest.mock.MagicMock())
## Tests whether removing a container is properly forbidden.
def test_removeContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.removeContainer(unittest.mock.MagicMock())
|
Add tests for prohibited operations on extruder stacks
|
Add tests for prohibited operations on extruder stacks
These operations are explicitly prohibited, so they should raise an exception.
Contributes to issue CURA-3497.
|
Python
|
agpl-3.0
|
hmflash/Cura,ynotstartups/Wanhao,hmflash/Cura,fieldOfView/Cura,ynotstartups/Wanhao,fieldOfView/Cura,Curahelper/Cura,Curahelper/Cura
|
Add tests for prohibited operations on extruder stacks
These operations are explicitly prohibited, so they should raise an exception.
Contributes to issue CURA-3497.
|
# Copyright (c) 2017 Ultimaker B.V.
# Cura is released under the terms of the AGPLv3 or higher.
import pytest #This module contains automated tests.
import unittest.mock #For the mocking and monkeypatching functionality.
import cura.Settings.ExtruderStack #The module we're testing.
from cura.Settings.Exceptions import InvalidOperationError #To check whether the correct exceptions are raised.
## An empty extruder stack to test with.
@pytest.fixture()
def extruder_stack() -> cura.Settings.ExtruderStack.ExtruderStack:
return cura.Settings.ExtruderStack.ExtruderStack
## Tests whether adding a container is properly forbidden.
def test_addContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.addContainer(unittest.mock.MagicMock())
## Tests whether inserting a container is properly forbidden.
def test_insertContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.insertContainer(0, unittest.mock.MagicMock())
## Tests whether removing a container is properly forbidden.
def test_removeContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.removeContainer(unittest.mock.MagicMock())
|
<commit_before><commit_msg>Add tests for prohibited operations on extruder stacks
These operations are explicitly prohibited, so they should raise an exception.
Contributes to issue CURA-3497.<commit_after>
|
# Copyright (c) 2017 Ultimaker B.V.
# Cura is released under the terms of the AGPLv3 or higher.
import pytest #This module contains automated tests.
import unittest.mock #For the mocking and monkeypatching functionality.
import cura.Settings.ExtruderStack #The module we're testing.
from cura.Settings.Exceptions import InvalidOperationError #To check whether the correct exceptions are raised.
## An empty extruder stack to test with.
@pytest.fixture()
def extruder_stack() -> cura.Settings.ExtruderStack.ExtruderStack:
return cura.Settings.ExtruderStack.ExtruderStack
## Tests whether adding a container is properly forbidden.
def test_addContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.addContainer(unittest.mock.MagicMock())
## Tests whether inserting a container is properly forbidden.
def test_insertContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.insertContainer(0, unittest.mock.MagicMock())
## Tests whether removing a container is properly forbidden.
def test_removeContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.removeContainer(unittest.mock.MagicMock())
|
Add tests for prohibited operations on extruder stacks
These operations are explicitly prohibited, so they should raise an exception.
Contributes to issue CURA-3497.# Copyright (c) 2017 Ultimaker B.V.
# Cura is released under the terms of the AGPLv3 or higher.
import pytest #This module contains automated tests.
import unittest.mock #For the mocking and monkeypatching functionality.
import cura.Settings.ExtruderStack #The module we're testing.
from cura.Settings.Exceptions import InvalidOperationError #To check whether the correct exceptions are raised.
## An empty extruder stack to test with.
@pytest.fixture()
def extruder_stack() -> cura.Settings.ExtruderStack.ExtruderStack:
return cura.Settings.ExtruderStack.ExtruderStack
## Tests whether adding a container is properly forbidden.
def test_addContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.addContainer(unittest.mock.MagicMock())
## Tests whether inserting a container is properly forbidden.
def test_insertContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.insertContainer(0, unittest.mock.MagicMock())
## Tests whether removing a container is properly forbidden.
def test_removeContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.removeContainer(unittest.mock.MagicMock())
|
<commit_before><commit_msg>Add tests for prohibited operations on extruder stacks
These operations are explicitly prohibited, so they should raise an exception.
Contributes to issue CURA-3497.<commit_after># Copyright (c) 2017 Ultimaker B.V.
# Cura is released under the terms of the AGPLv3 or higher.
import pytest #This module contains automated tests.
import unittest.mock #For the mocking and monkeypatching functionality.
import cura.Settings.ExtruderStack #The module we're testing.
from cura.Settings.Exceptions import InvalidOperationError #To check whether the correct exceptions are raised.
## An empty extruder stack to test with.
@pytest.fixture()
def extruder_stack() -> cura.Settings.ExtruderStack.ExtruderStack:
return cura.Settings.ExtruderStack.ExtruderStack
## Tests whether adding a container is properly forbidden.
def test_addContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.addContainer(unittest.mock.MagicMock())
## Tests whether inserting a container is properly forbidden.
def test_insertContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.insertContainer(0, unittest.mock.MagicMock())
## Tests whether removing a container is properly forbidden.
def test_removeContainer(extruder_stack):
with pytest.raises(InvalidOperationError):
extruder_stack.removeContainer(unittest.mock.MagicMock())
|
|
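Note that the extruder_stack fixture above returns the ExtruderStack class itself rather than an instance, so it is not obvious from this record alone how the assertions bind self. The behaviour the tests expect can be sketched independently of Cura as follows; this is an illustration, not Cura's implementation:
# Minimal sketch of a stack whose container list is fixed; not taken from Cura.
class InvalidOperationError(Exception):
    pass

class FixedStack:
    def addContainer(self, container):
        raise InvalidOperationError("Cannot add containers to this stack")
    def insertContainer(self, index, container):
        raise InvalidOperationError("Cannot insert containers into this stack")
    def removeContainer(self, container):
        raise InvalidOperationError("Cannot remove containers from this stack")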
67cee2591ab0aa41ff51a327a55fa3dff136652e
|
will/tests/test_acl.py
|
will/tests/test_acl.py
|
import unittest
from will.mixins.roster import RosterMixin
from will import settings
from mock import patch
class TestIsAdmin(unittest.TestCase):
def setUp(self):
self.message = {'nick': 'WoOh'}
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_not_set(self, mock_get_user_from_message):
settings.ADMINS = '*'
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['wooh']
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['skoczen']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_lowercase(self, mock_get_user_from_message):
settings.ADMINS = ['WoOh']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
|
Cover current ACL with tests
|
Cover current ACL with tests
|
Python
|
mit
|
fredsmith/will,wontonst/will,brandonsturgeon/will,ammartins/will,ammartins/will,woohgit/will,ammartins/will,jacobbridges/will,Ironykins/will,woohgit/will,jacobbridges/will,shadow7412/will,mvanbaak/will,mvanbaak/will,mike-love/will,chillipeper/will,mvanbaak/will,dmuntean/will,fredsmith/will,grahamhayes/will,wontonst/will,dmuntean/will,dmuntean/will,pcurry/will,Ironykins/will,chillipeper/will,skoczen/will,brandonsturgeon/will,pcurry/will,Regner/will,grahamhayes/will,fredsmith/will,Regner/will,chillipeper/will,skoczen/will,mike-love/will,wontonst/will,shadow7412/will,woohgit/will,shadow7412/will,Regner/will,Ironykins/will,skoczen/will,pcurry/will,mike-love/will,grahamhayes/will,jacobbridges/will,brandonsturgeon/will
|
Cover current ACL with tests
|
import unittest
from will.mixins.roster import RosterMixin
from will import settings
from mock import patch
class TestIsAdmin(unittest.TestCase):
def setUp(self):
self.message = {'nick': 'WoOh'}
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_not_set(self, mock_get_user_from_message):
settings.ADMINS = '*'
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['wooh']
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['skoczen']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_lowercase(self, mock_get_user_from_message):
settings.ADMINS = ['WoOh']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
|
<commit_before><commit_msg>Cover current ACL with tests<commit_after>
|
import unittest
from will.mixins.roster import RosterMixin
from will import settings
from mock import patch
class TestIsAdmin(unittest.TestCase):
def setUp(self):
self.message = {'nick': 'WoOh'}
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_not_set(self, mock_get_user_from_message):
settings.ADMINS = '*'
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['wooh']
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['skoczen']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_lowercase(self, mock_get_user_from_message):
settings.ADMINS = ['WoOh']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
|
Cover current ACL with testsimport unittest
from will.mixins.roster import RosterMixin
from will import settings
from mock import patch
class TestIsAdmin(unittest.TestCase):
def setUp(self):
self.message = {'nick': 'WoOh'}
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_not_set(self, mock_get_user_from_message):
settings.ADMINS = '*'
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['wooh']
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['skoczen']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_lowercase(self, mock_get_user_from_message):
settings.ADMINS = ['WoOh']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
|
<commit_before><commit_msg>Cover current ACL with tests<commit_after>import unittest
from will.mixins.roster import RosterMixin
from will import settings
from mock import patch
class TestIsAdmin(unittest.TestCase):
def setUp(self):
self.message = {'nick': 'WoOh'}
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_not_set(self, mock_get_user_from_message):
settings.ADMINS = '*'
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_true_if_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['wooh']
mock_get_user_from_message.return_value = self.message
self.assertTrue(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_enlisted(self, mock_get_user_from_message):
settings.ADMINS = ['skoczen']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
@patch('will.mixins.roster.RosterMixin.get_user_from_message')
def test_message_is_from_admin_false_if_not_lowercase(self, mock_get_user_from_message):
settings.ADMINS = ['WoOh']
mock_get_user_from_message.return_value = self.message
self.assertFalse(RosterMixin().message_is_from_admin(self.message))
|
|
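The last assertion above only holds if the admin check lower-cases the sender's nick while comparing the ADMINS entries verbatim. A minimal sketch of such a check, inferred from these tests rather than taken from will's RosterMixin, is:
# Sketch only; the real RosterMixin.message_is_from_admin may differ.
from will import settings

def message_is_from_admin(nick):
    if settings.ADMINS == '*':  # default: everyone counts as an admin
        return True
    # The nick is lower-cased but ADMINS entries are not, so ADMINS = ['WoOh'] never matches.
    return nick.lower() in settings.ADMINS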
5207550b9d19ff6823fb641e86e4851106ebd7f1
|
bench/run-paper-nums.py
|
bench/run-paper-nums.py
|
#!/usr/bin/env python
devices = [
("hdd", "/dev/sdc1"),
("ssd-sam", "/dev/sdb1"),
("sdd-intel", "/dev/sdd2"),
("ram", "/dev/loop0"),
]
benches = [
("smallfile", "./smallfile /tmp/ft"),
("smallsync", "./smallsync /tmp/ft"),
("largefile", "./largefile /tmp/ft"),
("mailbench", "./mailbench.sh /home/alex/sv6 /tmp/ft"),
("app-bench", "./app-bench.sh /home/alex/xv6 /tmp/ft"),
("sqlite", "./sqlitebench.sh /tmp/ft"),
]
benches = [x for x in benches if x[0] == "mailbench"]
import os
import sys
for d, dev in devices:
for b, bench in benches:
for i in range(1, 6):
name = "{}-{}-{}".format(b, d, i)
cmd = "perflock ./run-bench.sh {0} '{1}' '{2}' > {1}.log".format(dev, name, bench)
print(cmd)
status = os.system(cmd)
if status != 0:
print("failed:", cmd, file=sys.stderr)
|
Add script to run benchmarks for paper
|
Add script to run benchmarks for paper
|
Python
|
mit
|
mit-pdos/fscq-impl,mit-pdos/fscq-impl,mit-pdos/fscq-impl,mit-pdos/fscq-impl,mit-pdos/fscq-impl
|
Add script to run benchmarks for paper
|
#!/usr/bin/env python
devices = [
("hdd", "/dev/sdc1"),
("ssd-sam", "/dev/sdb1"),
("sdd-intel", "/dev/sdd2"),
("ram", "/dev/loop0"),
]
benches = [
("smallfile", "./smallfile /tmp/ft"),
("smallsync", "./smallsync /tmp/ft"),
("largefile", "./largefile /tmp/ft"),
("mailbench", "./mailbench.sh /home/alex/sv6 /tmp/ft"),
("app-bench", "./app-bench.sh /home/alex/xv6 /tmp/ft"),
("sqlite", "./sqlitebench.sh /tmp/ft"),
]
benches = [x for x in benches if x[0] == "mailbench"]
import os
import sys
for d, dev in devices:
for b, bench in benches:
for i in range(1, 6):
name = "{}-{}-{}".format(b, d, i)
cmd = "perflock ./run-bench.sh {0} '{1}' '{2}' > {1}.log".format(dev, name, bench)
print(cmd)
status = os.system(cmd)
if status != 0:
print("failed:", cmd, file=sys.stderr)
|
<commit_before><commit_msg>Add script to run benchmarks for paper<commit_after>
|
#!/usr/bin/env python
devices = [
("hdd", "/dev/sdc1"),
("ssd-sam", "/dev/sdb1"),
("sdd-intel", "/dev/sdd2"),
("ram", "/dev/loop0"),
]
benches = [
("smallfile", "./smallfile /tmp/ft"),
("smallsync", "./smallsync /tmp/ft"),
("largefile", "./largefile /tmp/ft"),
("mailbench", "./mailbench.sh /home/alex/sv6 /tmp/ft"),
("app-bench", "./app-bench.sh /home/alex/xv6 /tmp/ft"),
("sqlite", "./sqlitebench.sh /tmp/ft"),
]
benches = [x for x in benches if x[0] == "mailbench"]
import os
import sys
for d, dev in devices:
for b, bench in benches:
for i in range(1, 6):
name = "{}-{}-{}".format(b, d, i)
cmd = "perflock ./run-bench.sh {0} '{1}' '{2}' > {1}.log".format(dev, name, bench)
print(cmd)
status = os.system(cmd)
if status != 0:
print("failed:", cmd, file=sys.stderr)
|
Add script to run benchmarks for paper#!/usr/bin/env python
devices = [
("hdd", "/dev/sdc1"),
("ssd-sam", "/dev/sdb1"),
("sdd-intel", "/dev/sdd2"),
("ram", "/dev/loop0"),
]
benches = [
("smallfile", "./smallfile /tmp/ft"),
("smallsync", "./smallsync /tmp/ft"),
("largefile", "./largefile /tmp/ft"),
("mailbench", "./mailbench.sh /home/alex/sv6 /tmp/ft"),
("app-bench", "./app-bench.sh /home/alex/xv6 /tmp/ft"),
("sqlite", "./sqlitebench.sh /tmp/ft"),
]
benches = [x for x in benches if x[0] == "mailbench"]
import os
import sys
for d, dev in devices:
for b, bench in benches:
for i in range(1, 6):
name = "{}-{}-{}".format(b, d, i)
cmd = "perflock ./run-bench.sh {0} '{1}' '{2}' > {1}.log".format(dev, name, bench)
print(cmd)
status = os.system(cmd)
if status != 0:
print("failed:", cmd, file=sys.stderr)
|
<commit_before><commit_msg>Add script to run benchmarks for paper<commit_after>#!/usr/bin/env python
devices = [
("hdd", "/dev/sdc1"),
("ssd-sam", "/dev/sdb1"),
("sdd-intel", "/dev/sdd2"),
("ram", "/dev/loop0"),
]
benches = [
("smallfile", "./smallfile /tmp/ft"),
("smallsync", "./smallsync /tmp/ft"),
("largefile", "./largefile /tmp/ft"),
("mailbench", "./mailbench.sh /home/alex/sv6 /tmp/ft"),
("app-bench", "./app-bench.sh /home/alex/xv6 /tmp/ft"),
("sqlite", "./sqlitebench.sh /tmp/ft"),
]
benches = [x for x in benches if x[0] == "mailbench"]
import os
import sys
for d, dev in devices:
for b, bench in benches:
for i in range(1, 6):
name = "{}-{}-{}".format(b, d, i)
cmd = "perflock ./run-bench.sh {0} '{1}' '{2}' > {1}.log".format(dev, name, bench)
print(cmd)
status = os.system(cmd)
if status != 0:
print("failed:", cmd, file=sys.stderr)
|
|
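The command string above quotes the benchmark name and command by hand before handing them to the shell. An equivalent formulation with shlex.quote, shown here only as a sketch and not part of the committed script, avoids problems if those fields ever contain quotes; dev, name and bench are the loop variables from the script:
# Alternative to the os.system() call above; assumes dev, name and bench are in scope.
import shlex
import subprocess
cmd = "perflock ./run-bench.sh {} {} {} > {}.log".format(
    shlex.quote(dev), shlex.quote(name), shlex.quote(bench), shlex.quote(name))
status = subprocess.call(cmd, shell=True)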
4ecad80ff60f9964e4027a02d9382b7505d8386a
|
pullpush/GitPython-Daemon-Example.py
|
pullpush/GitPython-Daemon-Example.py
|
#!/usr/bin/env python3
import git
import tempfile
import time
tmpdir = tempfile.TemporaryDirectory(suffix='.git')
repo = git.Repo.init(tmpdir.name, shared=True, bare=True)
repo.daemon_export = True
gd = git.Git().daemon(tmpdir.name,
enable='receive-pack',
listen='127.0.0.1',
port=9418,
as_process=True,
verbose=True
)
time.sleep(0.5)
gd.proc.wait()
# filename = os.path.join(tmpdir.name, 'index.htm')
# with open(filename, "a") as f:
# f.write("Hello World")
|
Add Example for Git Daemon.
|
Add Example for Git Daemon.
This can now be used in the unittests
|
Python
|
mit
|
martialblog/git-pullpush
|
Add Example for Git Daemon.
This can now be used in the unittests
|
#!/usr/bin/env python3
import git
import tempfile
import time
tmpdir = tempfile.TemporaryDirectory(suffix='.git')
repo = git.Repo.init(tmpdir.name, shared=True, bare=True)
repo.daemon_export = True
gd = git.Git().daemon(tmpdir.name,
enable='receive-pack',
listen='127.0.0.1',
port=9418,
as_process=True,
verbose=True
)
time.sleep(0.5)
gd.proc.wait()
# filename = os.path.join(tmpdir.name, 'index.htm')
# with open(filename, "a") as f:
# f.write("Hello World")
|
<commit_before><commit_msg>Add Example for Git Daemon.
This can now be used in the unittests<commit_after>
|
#!/usr/bin/env python3
import git
import tempfile
import time
tmpdir = tempfile.TemporaryDirectory(suffix='.git')
repo = git.Repo.init(tmpdir.name, shared=True, bare=True)
repo.daemon_export = True
gd = git.Git().daemon(tmpdir.name,
enable='receive-pack',
listen='127.0.0.1',
port=9418,
as_process=True,
verbose=True
)
time.sleep(0.5)
gd.proc.wait()
# filename = os.path.join(tmpdir.name, 'index.htm')
# with open(filename, "a") as f:
# f.write("Hello World")
|
Add Example for Git Daemon.
This can now be used in the unittests#!/usr/bin/env python3
import git
import tempfile
import time
tmpdir = tempfile.TemporaryDirectory(suffix='.git')
repo = git.Repo.init(tmpdir.name, shared=True, bare=True)
repo.daemon_export = True
gd = git.Git().daemon(tmpdir.name,
enable='receive-pack',
listen='127.0.0.1',
port=9418,
as_process=True,
verbose=True
)
time.sleep(0.5)
gd.proc.wait()
# filename = os.path.join(tmpdir.name, 'index.htm')
# with open(filename, "a") as f:
# f.write("Hello World")
|
<commit_before><commit_msg>Add Example for Git Daemon.
This can now be used in the unittests<commit_after>#!/usr/bin/env python3
import git
import tempfile
import time
tmpdir = tempfile.TemporaryDirectory(suffix='.git')
repo = git.Repo.init(tmpdir.name, shared=True, bare=True)
repo.daemon_export = True
gd = git.Git().daemon(tmpdir.name,
enable='receive-pack',
listen='127.0.0.1',
port=9418,
as_process=True,
verbose=True
)
time.sleep(0.5)
gd.proc.wait()
# filename = os.path.join(tmpdir.name, 'index.htm')
# with open(filename, "a") as f:
# f.write("Hello World")
|
|
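Because gd.proc.wait() blocks until the daemon exits, a test built on this example would normally talk to the daemon and then stop it explicitly. gd.proc is a regular subprocess handle, so a possible tidy-up, sketched here and not part of the committed example, is:
# Hypothetical continuation of the example above: stop the daemon when done.
try:
    pass  # interact with git://127.0.0.1 plus tmpdir.name here (clone, push, ...)
finally:
    gd.proc.terminate()  # send SIGTERM to the git daemon process
    gd.proc.wait()       # wait() now returns promptly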
9c88eb5994ac33e72fad750b46de84c7de38328e
|
ChannelWorm/adapters.py
|
ChannelWorm/adapters.py
|
# configure django to use default settings
# note that this can also be done using an environment variable
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
if hasattr(settings, 'DEBUG'):
# settings are configured already
pass
else:
# load default settings if they're not set
from web_app import settings as defaults
settings.configure(default_settings=defaults, DEBUG=True)
import ion_channel.models as C
import PyOpenWorm as P
from django.forms.models import model_to_dict
class PatchClampAdapter(object):
"""Map a channelworm model to a pyopenworm model"""
def __init__(self, cw_obj):
# initialize PyOpenWorm connection so we can access its API
P.connect()
self.channelworm_object = cw_obj
cw_dict = model_to_dict(self.channelworm_object)
experiment_id = cw_dict.pop('experiment')
patch_clamp_id = cw_dict.pop('id')
self.pyopenworm_object = P.Experiment()
# get the CW model's experiment
cw_evidence = C.Experiment.objects.get(id=experiment_id)
# make a PyOW evidence object with it
pow_evidence = P.Evidence(doi=cw_evidence.doi)
# add it to the PyOW experiment model
self.pyopenworm_object.reference(pow_evidence)
for key, value in cw_dict.iteritems():
self.pyopenworm_object.conditions.set(key, value)
# we no longer need PyOW API so we can kill the connection
P.disconnect()
def get_pow(self):
return self.pyopenworm_object
def get_cw(self):
return self.channelworm_object
|
Fix configuration imports. Adapter now works.
|
Fix configuration imports. Adapter now works.
|
Python
|
mit
|
joebowen/ChannelWorm,VahidGh/ChannelWorm,cheelee/ChannelWorm,openworm/ChannelWorm,cheelee/ChannelWorm,cheelee/ChannelWorm,gsarma/ChannelWorm,gsarma/ChannelWorm,gsarma/ChannelWorm,joebowen/ChannelWorm,joebowen/ChannelWorm,openworm/ChannelWorm,joebowen/ChannelWorm,VahidGh/ChannelWorm,gsarma/ChannelWorm,openworm/ChannelWorm,cheelee/ChannelWorm,VahidGh/ChannelWorm,openworm/ChannelWorm,VahidGh/ChannelWorm
|
Fix configuration imports. Adapter now works.
|
# configure django to use default settings
# note that this can also be done using an environment variable
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
if hasattr(settings, 'DEBUG'):
# settings are configured already
pass
else:
# load default settings if they're not set
from web_app import settings as defaults
settings.configure(default_settings=defaults, DEBUG=True)
import ion_channel.models as C
import PyOpenWorm as P
from django.forms.models import model_to_dict
class PatchClampAdapter(object):
"""Map a channelworm model to a pyopenworm model"""
def __init__(self, cw_obj):
# initialize PyOpenWorm connection so we can access its API
P.connect()
self.channelworm_object = cw_obj
cw_dict = model_to_dict(self.channelworm_object)
experiment_id = cw_dict.pop('experiment')
patch_clamp_id = cw_dict.pop('id')
self.pyopenworm_object = P.Experiment()
# get the CW model's experiment
cw_evidence = C.Experiment.objects.get(id=experiment_id)
# make a PyOW evidence object with it
pow_evidence = P.Evidence(doi=cw_evidence.doi)
# add it to the PyOW experiment model
self.pyopenworm_object.reference(pow_evidence)
for key, value in cw_dict.iteritems():
self.pyopenworm_object.conditions.set(key, value)
# we no longer need PyOW API so we can kill the connection
P.disconnect()
def get_pow(self):
return self.pyopenworm_object
def get_cw(self):
return self.channelworm_object
|
<commit_before><commit_msg>Fix configuration imports. Adapter now works.<commit_after>
|
# configure django to use default settings
# note that this can also be done using an environment variable
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
if hasattr(settings, 'DEBUG'):
# settings are configured already
pass
else:
# load default settings if they're not set
from web_app import settings as defaults
settings.configure(default_settings=defaults, DEBUG=True)
import ion_channel.models as C
import PyOpenWorm as P
from django.forms.models import model_to_dict
class PatchClampAdapter(object):
"""Map a channelworm model to a pyopenworm model"""
def __init__(self, cw_obj):
# initialize PyOpenWorm connection so we can access its API
P.connect()
self.channelworm_object = cw_obj
cw_dict = model_to_dict(self.channelworm_object)
experiment_id = cw_dict.pop('experiment')
patch_clamp_id = cw_dict.pop('id')
self.pyopenworm_object = P.Experiment()
# get the CW model's experiment
cw_evidence = C.Experiment.objects.get(id=experiment_id)
# make a PyOW evidence object with it
pow_evidence = P.Evidence(doi=cw_evidence.doi)
# add it to the PyOW experiment model
self.pyopenworm_object.reference(pow_evidence)
for key, value in cw_dict.iteritems():
self.pyopenworm_object.conditions.set(key, value)
# we no longer need PyOW API so we can kill the connection
P.disconnect()
def get_pow(self):
return self.pyopenworm_object
def get_cw(self):
return self.channelworm_object
|
Fix configuration imports. Adapter now works.# configure django to use default settings
# note that this can also be done using an environment variable
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
if hasattr(settings, 'DEBUG'):
# settings are configured already
pass
else:
# load default settings if they're not set
from web_app import settings as defaults
settings.configure(default_settings=defaults, DEBUG=True)
import ion_channel.models as C
import PyOpenWorm as P
from django.forms.models import model_to_dict
class PatchClampAdapter(object):
"""Map a channelworm model to a pyopenworm model"""
def __init__(self, cw_obj):
# initialize PyOpenWorm connection so we can access its API
P.connect()
self.channelworm_object = cw_obj
cw_dict = model_to_dict(self.channelworm_object)
experiment_id = cw_dict.pop('experiment')
patch_clamp_id = cw_dict.pop('id')
self.pyopenworm_object = P.Experiment()
# get the CW model's experiment
cw_evidence = C.Experiment.objects.get(id=experiment_id)
# make a PyOW evidence object with it
pow_evidence = P.Evidence(doi=cw_evidence.doi)
# add it to the PyOW experiment model
self.pyopenworm_object.reference(pow_evidence)
for key, value in cw_dict.iteritems():
self.pyopenworm_object.conditions.set(key, value)
# we no longer need PyOW API so we can kill the connection
P.disconnect()
def get_pow(self):
return self.pyopenworm_object
def get_cw(self):
return self.channelworm_object
|
<commit_before><commit_msg>Fix configuration imports. Adapter now works.<commit_after># configure django to use default settings
# note that this can also be done using an environment variable
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
if hasattr(settings, 'DEBUG'):
# settings are configured already
pass
else:
# load default settings if they're not set
from web_app import settings as defaults
settings.configure(default_settings=defaults, DEBUG=True)
import ion_channel.models as C
import PyOpenWorm as P
from django.forms.models import model_to_dict
class PatchClampAdapter(object):
"""Map a channelworm model to a pyopenworm model"""
def __init__(self, cw_obj):
# initialize PyOpenWorm connection so we can access its API
P.connect()
self.channelworm_object = cw_obj
cw_dict = model_to_dict(self.channelworm_object)
experiment_id = cw_dict.pop('experiment')
patch_clamp_id = cw_dict.pop('id')
self.pyopenworm_object = P.Experiment()
# get the CW model's experiment
cw_evidence = C.Experiment.objects.get(id=experiment_id)
# make a PyOW evidence object with it
pow_evidence = P.Evidence(doi=cw_evidence.doi)
# add it to the PyOW experiment model
self.pyopenworm_object.reference(pow_evidence)
for key, value in cw_dict.iteritems():
self.pyopenworm_object.conditions.set(key, value)
# we no longer need PyOW API so we can kill the connection
P.disconnect()
def get_pow(self):
return self.pyopenworm_object
def get_cw(self):
return self.channelworm_object
|
|
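The hasattr(settings, 'DEBUG') probe above touches Django's lazy settings to decide whether they have been configured yet. Django also exposes an explicit flag for that check, so an equivalent guard, shown only as an alternative and not what this commit uses, would be:
# Alternative guard using the documented settings.configured flag.
from django.conf import settings
if not settings.configured:
    from web_app import settings as defaults
    settings.configure(default_settings=defaults, DEBUG=True)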
1418112d0ba752d3ebac6cdf2d727fdd71a2cf6f
|
test/unit/interfaces/test_map_ctp.py
|
test/unit/interfaces/test_map_ctp.py
|
import sys, os, re, shutil
from nose.tools import *
import logging
logger = logging.getLogger(__name__)
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
from qipipe.interfaces import MapCTP
ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..'))
"""The test parent directory."""
RESULTS = os.path.join(ROOT, 'results', 'staging', 'map_ctp')
"""The test results directory."""
from nipype import config
cfg = dict(logging=dict(workflow_level='DEBUG', log_directory=RESULTS, log_to_file=True),
execution=dict(crashdump_dir=RESULTS, create_report=False))
config.update_config(cfg)
COLLECTION = 'Sarcoma'
"""The test collection."""
SUBJECTS = ["Sarcoma%02d" % i for i in range(8, 12)]
PAT = "ptid/(Sarcoma\d{2})\s*=\s*QIN-\w+-\d{2}-(\d{4})"
class TestMapCTP:
"""Map CTP unit tests."""
def tearDown(self):
shutil.rmtree(RESULTS, True)
def test_map_ctp(self):
logger.debug("Testing Map CTP on %s..." % SUBJECTS)
map_ctp = MapCTP(collection=COLLECTION, patient_ids=SUBJECTS, dest=RESULTS)
result = map_ctp.run()
prop_file = result.outputs.out_file
assert_true(os.path.exists(prop_file), "Property file was not created: %s" % prop_file)
assert_equal(RESULTS, os.path.dirname(prop_file), "Property file was not created in %s: %s" % (RESULTS, prop_file))
for line in open(prop_file).readlines():
qin_id, ctp_suffix = re.match(PAT, line).groups()
assert_true(qin_id in SUBJECTS, "Subject id not found: %s" % qin_id)
qin_nbr = int(qin_id[-2:])
ctp_nbr = int(ctp_suffix)
assert_equal(qin_nbr, ctp_nbr, "Patient number incorrect; expected: %d found: %d" % (qin_nbr, ctp_nbr))
if __name__ == "__main__":
import nose
nose.main(defaultTest=__name__)
|
Test the Map CTP interface.
|
Test the Map CTP interface.
|
Python
|
bsd-2-clause
|
ohsu-qin/qipipe
|
Test the Map CTP interface.
|
import sys, os, re, shutil
from nose.tools import *
import logging
logger = logging.getLogger(__name__)
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
from qipipe.interfaces import MapCTP
ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..'))
"""The test parent directory."""
RESULTS = os.path.join(ROOT, 'results', 'staging', 'map_ctp')
"""The test results directory."""
from nipype import config
cfg = dict(logging=dict(workflow_level='DEBUG', log_directory=RESULTS, log_to_file=True),
execution=dict(crashdump_dir=RESULTS, create_report=False))
config.update_config(cfg)
COLLECTION = 'Sarcoma'
"""The test collection."""
SUBJECTS = ["Sarcoma%02d" % i for i in range(8, 12)]
PAT = "ptid/(Sarcoma\d{2})\s*=\s*QIN-\w+-\d{2}-(\d{4})"
class TestMapCTP:
"""Map CTP unit tests."""
def tearDown(self):
shutil.rmtree(RESULTS, True)
def test_map_ctp(self):
logger.debug("Testing Map CTP on %s..." % SUBJECTS)
map_ctp = MapCTP(collection=COLLECTION, patient_ids=SUBJECTS, dest=RESULTS)
result = map_ctp.run()
prop_file = result.outputs.out_file
assert_true(os.path.exists(prop_file), "Property file was not created: %s" % prop_file)
assert_equal(RESULTS, os.path.dirname(prop_file), "Property file was not created in %s: %s" % (RESULTS, prop_file))
for line in open(prop_file).readlines():
qin_id, ctp_suffix = re.match(PAT, line).groups()
assert_true(qin_id in SUBJECTS, "Subject id not found: %s" % qin_id)
qin_nbr = int(qin_id[-2:])
ctp_nbr = int(ctp_suffix)
assert_equal(qin_nbr, ctp_nbr, "Patient number incorrect; expected: %d found: %d" % (qin_nbr, ctp_nbr))
if __name__ == "__main__":
import nose
nose.main(defaultTest=__name__)
|
<commit_before><commit_msg>Test the Map CTP interface.<commit_after>
|
import sys, os, re, shutil
from nose.tools import *
import logging
logger = logging.getLogger(__name__)
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
from qipipe.interfaces import MapCTP
ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..'))
"""The test parent directory."""
RESULTS = os.path.join(ROOT, 'results', 'staging', 'map_ctp')
"""The test results directory."""
from nipype import config
cfg = dict(logging=dict(workflow_level='DEBUG', log_directory=RESULTS, log_to_file=True),
execution=dict(crashdump_dir=RESULTS, create_report=False))
config.update_config(cfg)
COLLECTION = 'Sarcoma'
"""The test collection."""
SUBJECTS = ["Sarcoma%02d" % i for i in range(8, 12)]
PAT = "ptid/(Sarcoma\d{2})\s*=\s*QIN-\w+-\d{2}-(\d{4})"
class TestMapCTP:
"""Map CTP unit tests."""
def tearDown(self):
shutil.rmtree(RESULTS, True)
def test_map_ctp(self):
logger.debug("Testing Map CTP on %s..." % SUBJECTS)
map_ctp = MapCTP(collection=COLLECTION, patient_ids=SUBJECTS, dest=RESULTS)
result = map_ctp.run()
prop_file = result.outputs.out_file
assert_true(os.path.exists(prop_file), "Property file was not created: %s" % prop_file)
assert_equal(RESULTS, os.path.dirname(prop_file), "Property file was not created in %s: %s" % (RESULTS, prop_file))
for line in open(prop_file).readlines():
qin_id, ctp_suffix = re.match(PAT, line).groups()
assert_true(qin_id in SUBJECTS, "Subject id not found: %s" % qin_id)
qin_nbr = int(qin_id[-2:])
ctp_nbr = int(ctp_suffix)
assert_equal(qin_nbr, ctp_nbr, "Patient number incorrect; expected: %d found: %d" % (qin_nbr, ctp_nbr))
if __name__ == "__main__":
import nose
nose.main(defaultTest=__name__)
|
Test the Map CTP interface.import sys, os, re, shutil
from nose.tools import *
import logging
logger = logging.getLogger(__name__)
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
from qipipe.interfaces import MapCTP
ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..'))
"""The test parent directory."""
RESULTS = os.path.join(ROOT, 'results', 'staging', 'map_ctp')
"""The test results directory."""
from nipype import config
cfg = dict(logging=dict(workflow_level='DEBUG', log_directory=RESULTS, log_to_file=True),
execution=dict(crashdump_dir=RESULTS, create_report=False))
config.update_config(cfg)
COLLECTION = 'Sarcoma'
"""The test collection."""
SUBJECTS = ["Sarcoma%02d" % i for i in range(8, 12)]
PAT = "ptid/(Sarcoma\d{2})\s*=\s*QIN-\w+-\d{2}-(\d{4})"
class TestMapCTP:
"""Map CTP unit tests."""
def tearDown(self):
shutil.rmtree(RESULTS, True)
def test_map_ctp(self):
logger.debug("Testing Map CTP on %s..." % SUBJECTS)
map_ctp = MapCTP(collection=COLLECTION, patient_ids=SUBJECTS, dest=RESULTS)
result = map_ctp.run()
prop_file = result.outputs.out_file
assert_true(os.path.exists(prop_file), "Property file was not created: %s" % prop_file)
assert_equal(RESULTS, os.path.dirname(prop_file), "Property file was not created in %s: %s" % (RESULTS, prop_file))
for line in open(prop_file).readlines():
qin_id, ctp_suffix = re.match(PAT, line).groups()
assert_true(qin_id in SUBJECTS, "Subject id not found: %s" % qin_id)
qin_nbr = int(qin_id[-2:])
ctp_nbr = int(ctp_suffix)
assert_equal(qin_nbr, ctp_nbr, "Patient number incorrect; expected: %d found: %d" % (qin_nbr, ctp_nbr))
if __name__ == "__main__":
import nose
nose.main(defaultTest=__name__)
|
<commit_before><commit_msg>Test the Map CTP interface.<commit_after>import sys, os, re, shutil
from nose.tools import *
import logging
logger = logging.getLogger(__name__)
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
from qipipe.interfaces import MapCTP
ROOT = os.path.normpath(os.path.join(os.path.dirname(__file__), '..', '..'))
"""The test parent directory."""
RESULTS = os.path.join(ROOT, 'results', 'staging', 'map_ctp')
"""The test results directory."""
from nipype import config
cfg = dict(logging=dict(workflow_level='DEBUG', log_directory=RESULTS, log_to_file=True),
execution=dict(crashdump_dir=RESULTS, create_report=False))
config.update_config(cfg)
COLLECTION = 'Sarcoma'
"""The test collection."""
SUBJECTS = ["Sarcoma%02d" % i for i in range(8, 12)]
PAT = "ptid/(Sarcoma\d{2})\s*=\s*QIN-\w+-\d{2}-(\d{4})"
class TestMapCTP:
"""Map CTP unit tests."""
def tearDown(self):
shutil.rmtree(RESULTS, True)
def test_map_ctp(self):
logger.debug("Testing Map CTP on %s..." % SUBJECTS)
map_ctp = MapCTP(collection=COLLECTION, patient_ids=SUBJECTS, dest=RESULTS)
result = map_ctp.run()
prop_file = result.outputs.out_file
assert_true(os.path.exists(prop_file), "Property file was not created: %s" % prop_file)
assert_equal(RESULTS, os.path.dirname(prop_file), "Property file was not created in %s: %s" % (RESULTS, prop_file))
for line in open(prop_file).readlines():
qin_id, ctp_suffix = re.match(PAT, line).groups()
assert_true(qin_id in SUBJECTS, "Subject id not found: %s" % qin_id)
qin_nbr = int(qin_id[-2:])
ctp_nbr = int(ctp_suffix)
assert_equal(qin_nbr, ctp_nbr, "Patient number incorrect; expected: %d found: %d" % (qin_nbr, ctp_nbr))
if __name__ == "__main__":
import nose
nose.main(defaultTest=__name__)
|
|
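For reference, the PAT expression above matches property lines of the shape shown below; the sample value is invented here purely to illustrate what the two capture groups extract:
# Hypothetical sample line; only its ptid/SarcomaNN = QIN-...-NNNN shape matters.
import re
PAT = r"ptid/(Sarcoma\d{2})\s*=\s*QIN-\w+-\d{2}-(\d{4})"
line = "ptid/Sarcoma09 = QIN-SARCOMA-01-0009"
qin_id, ctp_suffix = re.match(PAT, line).groups()
assert (qin_id, ctp_suffix) == ("Sarcoma09", "0009")
assert int(qin_id[-2:]) == int(ctp_suffix)  # the equality the test asserts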
5096f4432978ef1e5d1f3d449fd3f54050f3c287
|
test/widgets/test_wlan.py
|
test/widgets/test_wlan.py
|
# Copyright (c) 2021 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Widget specific tests
import sys
from importlib import reload
from types import ModuleType
import pytest
import libqtile.config
from libqtile.bar import Bar
def no_op(*args, **kwargs):
pass
class MockIwlib(ModuleType):
DATA = {
'wlan0': {
'NWID': b'Auto',
'Frequency': b'5.18 GHz',
'Access Point': b'12:34:56:78:90:AB',
'BitRate': b'650 Mb/s',
'ESSID': b'QtileNet',
'Mode': b'Managed',
'stats': {
'quality': 49,
'level': 190,
'noise': 0,
'updated': 75
}
}
}
@classmethod
def get_iwconfig(cls, interface):
return cls.DATA.get(interface, dict())
# Patch the widget with our mock iwlib module.
@pytest.fixture
def patched_wlan(monkeypatch):
monkeypatch.setitem(sys.modules, "iwlib", MockIwlib("iwlib"))
from libqtile.widget import wlan
# Reload fixes cases where iwlib may have been imported previously
reload(wlan)
yield wlan
@pytest.mark.parametrize(
"kwargs,expected", [
({}, "QtileNet 49/70"),
({"format": "{essid} {percent:2.0%}"}, "QtileNet 70%"),
({"interface": "wlan1"}, "Disconnected")
]
)
def test_wlan_display(minimal_conf_noscreen, manager_nospawn, patched_wlan, kwargs, expected):
widget = patched_wlan.Wlan(**kwargs)
config = minimal_conf_noscreen
config.screens = [
libqtile.config.Screen(
top=Bar([widget], 10)
)
]
manager_nospawn.start(config)
text = manager_nospawn.c.bar["top"].info()["widgets"][0]["text"]
assert text == expected
|
Add test for wlan widget
|
Add test for wlan widget
|
Python
|
mit
|
ramnes/qtile,qtile/qtile,ramnes/qtile,qtile/qtile
|
Add test for wlan widget
|
# Copyright (c) 2021 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Widget specific tests
import sys
from importlib import reload
from types import ModuleType
import pytest
import libqtile.config
from libqtile.bar import Bar
def no_op(*args, **kwargs):
pass
class MockIwlib(ModuleType):
DATA = {
'wlan0': {
'NWID': b'Auto',
'Frequency': b'5.18 GHz',
'Access Point': b'12:34:56:78:90:AB',
'BitRate': b'650 Mb/s',
'ESSID': b'QtileNet',
'Mode': b'Managed',
'stats': {
'quality': 49,
'level': 190,
'noise': 0,
'updated': 75
}
}
}
@classmethod
def get_iwconfig(cls, interface):
return cls.DATA.get(interface, dict())
# Patch the widget with our mock iwlib module.
@pytest.fixture
def patched_wlan(monkeypatch):
monkeypatch.setitem(sys.modules, "iwlib", MockIwlib("iwlib"))
from libqtile.widget import wlan
# Reload fixes cases where iwlib may have been imported previously
reload(wlan)
yield wlan
@pytest.mark.parametrize(
"kwargs,expected", [
({}, "QtileNet 49/70"),
({"format": "{essid} {percent:2.0%}"}, "QtileNet 70%"),
({"interface": "wlan1"}, "Disconnected")
]
)
def test_wlan_display(minimal_conf_noscreen, manager_nospawn, patched_wlan, kwargs, expected):
widget = patched_wlan.Wlan(**kwargs)
config = minimal_conf_noscreen
config.screens = [
libqtile.config.Screen(
top=Bar([widget], 10)
)
]
manager_nospawn.start(config)
text = manager_nospawn.c.bar["top"].info()["widgets"][0]["text"]
assert text == expected
|
<commit_before><commit_msg>Add test for wlan widget<commit_after>
|
# Copyright (c) 2021 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Widget specific tests
import sys
from importlib import reload
from types import ModuleType
import pytest
import libqtile.config
from libqtile.bar import Bar
def no_op(*args, **kwargs):
pass
class MockIwlib(ModuleType):
DATA = {
'wlan0': {
'NWID': b'Auto',
'Frequency': b'5.18 GHz',
'Access Point': b'12:34:56:78:90:AB',
'BitRate': b'650 Mb/s',
'ESSID': b'QtileNet',
'Mode': b'Managed',
'stats': {
'quality': 49,
'level': 190,
'noise': 0,
'updated': 75
}
}
}
@classmethod
def get_iwconfig(cls, interface):
return cls.DATA.get(interface, dict())
# Patch the widget with our mock iwlib module.
@pytest.fixture
def patched_wlan(monkeypatch):
monkeypatch.setitem(sys.modules, "iwlib", MockIwlib("iwlib"))
from libqtile.widget import wlan
# Reload fixes cases where iwlib may have been imported previously
reload(wlan)
yield wlan
@pytest.mark.parametrize(
"kwargs,expected", [
({}, "QtileNet 49/70"),
({"format": "{essid} {percent:2.0%}"}, "QtileNet 70%"),
({"interface": "wlan1"}, "Disconnected")
]
)
def test_wlan_display(minimal_conf_noscreen, manager_nospawn, patched_wlan, kwargs, expected):
widget = patched_wlan.Wlan(**kwargs)
config = minimal_conf_noscreen
config.screens = [
libqtile.config.Screen(
top=Bar([widget], 10)
)
]
manager_nospawn.start(config)
text = manager_nospawn.c.bar["top"].info()["widgets"][0]["text"]
assert text == expected
|
Add test for wlan widget# Copyright (c) 2021 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Widget specific tests
import sys
from importlib import reload
from types import ModuleType
import pytest
import libqtile.config
from libqtile.bar import Bar
def no_op(*args, **kwargs):
pass
class MockIwlib(ModuleType):
DATA = {
'wlan0': {
'NWID': b'Auto',
'Frequency': b'5.18 GHz',
'Access Point': b'12:34:56:78:90:AB',
'BitRate': b'650 Mb/s',
'ESSID': b'QtileNet',
'Mode': b'Managed',
'stats': {
'quality': 49,
'level': 190,
'noise': 0,
'updated': 75
}
}
}
@classmethod
def get_iwconfig(cls, interface):
return cls.DATA.get(interface, dict())
# Patch the widget with our mock iwlib module.
@pytest.fixture
def patched_wlan(monkeypatch):
monkeypatch.setitem(sys.modules, "iwlib", MockIwlib("iwlib"))
from libqtile.widget import wlan
# Reload fixes cases where iwlib may have been imported previously
reload(wlan)
yield wlan
@pytest.mark.parametrize(
"kwargs,expected", [
({}, "QtileNet 49/70"),
({"format": "{essid} {percent:2.0%}"}, "QtileNet 70%"),
({"interface": "wlan1"}, "Disconnected")
]
)
def test_wlan_display(minimal_conf_noscreen, manager_nospawn, patched_wlan, kwargs, expected):
widget = patched_wlan.Wlan(**kwargs)
config = minimal_conf_noscreen
config.screens = [
libqtile.config.Screen(
top=Bar([widget], 10)
)
]
manager_nospawn.start(config)
text = manager_nospawn.c.bar["top"].info()["widgets"][0]["text"]
assert text == expected
|
<commit_before><commit_msg>Add test for wlan widget<commit_after># Copyright (c) 2021 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Widget specific tests
import sys
from importlib import reload
from types import ModuleType
import pytest
import libqtile.config
from libqtile.bar import Bar
def no_op(*args, **kwargs):
pass
class MockIwlib(ModuleType):
DATA = {
'wlan0': {
'NWID': b'Auto',
'Frequency': b'5.18 GHz',
'Access Point': b'12:34:56:78:90:AB',
'BitRate': b'650 Mb/s',
'ESSID': b'QtileNet',
'Mode': b'Managed',
'stats': {
'quality': 49,
'level': 190,
'noise': 0,
'updated': 75
}
}
}
@classmethod
def get_iwconfig(cls, interface):
return cls.DATA.get(interface, dict())
# Patch the widget with our mock iwlib module.
@pytest.fixture
def patched_wlan(monkeypatch):
monkeypatch.setitem(sys.modules, "iwlib", MockIwlib("iwlib"))
from libqtile.widget import wlan
# Reload fixes cases where iwlib may have been imported previously
reload(wlan)
yield wlan
@pytest.mark.parametrize(
"kwargs,expected", [
({}, "QtileNet 49/70"),
({"format": "{essid} {percent:2.0%}"}, "QtileNet 70%"),
({"interface": "wlan1"}, "Disconnected")
]
)
def test_wlan_display(minimal_conf_noscreen, manager_nospawn, patched_wlan, kwargs, expected):
widget = patched_wlan.Wlan(**kwargs)
config = minimal_conf_noscreen
config.screens = [
libqtile.config.Screen(
top=Bar([widget], 10)
)
]
manager_nospawn.start(config)
text = manager_nospawn.c.bar["top"].info()["widgets"][0]["text"]
assert text == expected
|
|
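The expected strings in the parametrisation above follow from the mocked quality value: the widget reports quality against a maximum of 70 (which is what the 49/70 string implies), and 49/70 renders as 70% under the {percent:2.0%} format. A quick check of that arithmetic:
# Values taken from MockIwlib above; the maximum of 70 is inferred from the expected output.
quality, quality_max = 49, 70
assert "{}/{}".format(quality, quality_max) == "49/70"
assert "{:2.0%}".format(quality / quality_max) == "70%"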
e08257d4012b6df624c88b45ba28e8f9829ce2ad
|
spacy/cli/_git_sparse_checkout_example.py
|
spacy/cli/_git_sparse_checkout_example.py
|
import tempfile
import typer
from pathlib import Path
import subprocess
import shlex
import shutil
from contextlib import contextmanager
@contextmanager
def make_tempdir():
d = Path(tempfile.mkdtemp())
yield d
shutil.rmtree(str(d))
def clone_repo(repo, temp_dir):
subprocess.check_call([
"git",
"clone",
repo,
temp_dir,
"--no-checkout",
"--depth", "1",
"--config", "core.sparseCheckout=true"
])
def checkout_and_fetch(temp_dir):
subprocess.check_call([
"git",
"-C", temp_dir,
"fetch"
])
subprocess.check_call([
"git",
"-C", temp_dir,
"checkout"
])
def set_sparse_checkout_dir(temp_dir, subpath):
with (temp_dir / ".git" / "info" / "sparse-checkout").open("w") as file_:
file_.write(subpath)
def main(repo: str, subpath: str, dest: Path):
with make_tempdir() as temp_dir:
clone_repo(repo, temp_dir)
print("After clone", list(temp_dir.iterdir()))
set_sparse_checkout_dir(temp_dir, subpath)
checkout_and_fetch(temp_dir)
print("After checkout", list(temp_dir.iterdir()))
assert (temp_dir / subpath) in list(temp_dir.iterdir())
shutil.copytree(temp_dir / subpath, dest / subpath, dirs_exist_ok=True)
print("Exists after cleanup?", temp_dir.exists())
print("Destination", list(dest.iterdir()))
if __name__ == "__main__":
typer.run(main)
|
Add example of how to do sparse-checkout
|
Add example of how to do sparse-checkout
|
Python
|
mit
|
spacy-io/spaCy,spacy-io/spaCy,spacy-io/spaCy,explosion/spaCy,explosion/spaCy,spacy-io/spaCy,explosion/spaCy,honnibal/spaCy,honnibal/spaCy,honnibal/spaCy,spacy-io/spaCy,explosion/spaCy,honnibal/spaCy,spacy-io/spaCy,explosion/spaCy,explosion/spaCy
|
Add example of how to do sparse-checkout
|
import tempfile
import typer
from pathlib import Path
import subprocess
import shlex
import shutil
from contextlib import contextmanager
@contextmanager
def make_tempdir():
d = Path(tempfile.mkdtemp())
yield d
shutil.rmtree(str(d))
def clone_repo(repo, temp_dir):
subprocess.check_call([
"git",
"clone",
repo,
temp_dir,
"--no-checkout",
"--depth", "1",
"--config", "core.sparseCheckout=true"
])
def checkout_and_fetch(temp_dir):
subprocess.check_call([
"git",
"-C", temp_dir,
"fetch"
])
subprocess.check_call([
"git",
"-C", temp_dir,
"checkout"
])
def set_sparse_checkout_dir(temp_dir, subpath):
with (temp_dir / ".git" / "info" / "sparse-checkout").open("w") as file_:
file_.write(subpath)
def main(repo: str, subpath: str, dest: Path):
with make_tempdir() as temp_dir:
clone_repo(repo, temp_dir)
print("After clone", list(temp_dir.iterdir()))
set_sparse_checkout_dir(temp_dir, subpath)
checkout_and_fetch(temp_dir)
print("After checkout", list(temp_dir.iterdir()))
assert (temp_dir / subpath) in list(temp_dir.iterdir())
shutil.copytree(temp_dir / subpath, dest / subpath, dirs_exist_ok=True)
print("Exists after cleanup?", temp_dir.exists())
print("Destination", list(dest.iterdir()))
if __name__ == "__main__":
typer.run(main)
|
<commit_before><commit_msg>Add example of how to do sparse-checkout<commit_after>
|
import tempfile
import typer
from pathlib import Path
import subprocess
import shlex
import shutil
from contextlib import contextmanager
@contextmanager
def make_tempdir():
d = Path(tempfile.mkdtemp())
yield d
shutil.rmtree(str(d))
def clone_repo(repo, temp_dir):
subprocess.check_call([
"git",
"clone",
repo,
temp_dir,
"--no-checkout",
"--depth", "1",
"--config", "core.sparseCheckout=true"
])
def checkout_and_fetch(temp_dir):
subprocess.check_call([
"git",
"-C", temp_dir,
"fetch"
])
subprocess.check_call([
"git",
"-C", temp_dir,
"checkout"
])
def set_sparse_checkout_dir(temp_dir, subpath):
with (temp_dir / ".git" / "info" / "sparse-checkout").open("w") as file_:
file_.write(subpath)
def main(repo: str, subpath: str, dest: Path):
with make_tempdir() as temp_dir:
clone_repo(repo, temp_dir)
print("After clone", list(temp_dir.iterdir()))
set_sparse_checkout_dir(temp_dir, subpath)
checkout_and_fetch(temp_dir)
print("After checkout", list(temp_dir.iterdir()))
assert (temp_dir / subpath) in list(temp_dir.iterdir())
shutil.copytree(temp_dir / subpath, dest / subpath, dirs_exist_ok=True)
print("Exists after cleanup?", temp_dir.exists())
print("Destination", list(dest.iterdir()))
if __name__ == "__main__":
typer.run(main)
|
Add example of how to do sparse-checkoutimport tempfile
import typer
from pathlib import Path
import subprocess
import shlex
import shutil
from contextlib import contextmanager
@contextmanager
def make_tempdir():
d = Path(tempfile.mkdtemp())
yield d
shutil.rmtree(str(d))
def clone_repo(repo, temp_dir):
subprocess.check_call([
"git",
"clone",
repo,
temp_dir,
"--no-checkout",
"--depth", "1",
"--config", "core.sparseCheckout=true"
])
def checkout_and_fetch(temp_dir):
subprocess.check_call([
"git",
"-C", temp_dir,
"fetch"
])
subprocess.check_call([
"git",
"-C", temp_dir,
"checkout"
])
def set_sparse_checkout_dir(temp_dir, subpath):
with (temp_dir / ".git" / "info" / "sparse-checkout").open("w") as file_:
file_.write(subpath)
def main(repo: str, subpath: str, dest: Path):
with make_tempdir() as temp_dir:
clone_repo(repo, temp_dir)
print("After clone", list(temp_dir.iterdir()))
set_sparse_checkout_dir(temp_dir, subpath)
checkout_and_fetch(temp_dir)
print("After checkout", list(temp_dir.iterdir()))
assert (temp_dir / subpath) in list(temp_dir.iterdir())
shutil.copytree(temp_dir / subpath, dest / subpath, dirs_exist_ok=True)
print("Exists after cleanup?", temp_dir.exists())
print("Destination", list(dest.iterdir()))
if __name__ == "__main__":
typer.run(main)
|
<commit_before><commit_msg>Add example of how to do sparse-checkout<commit_after>import tempfile
import typer
from pathlib import Path
import subprocess
import shlex
import shutil
from contextlib import contextmanager
@contextmanager
def make_tempdir():
d = Path(tempfile.mkdtemp())
yield d
shutil.rmtree(str(d))
def clone_repo(repo, temp_dir):
subprocess.check_call([
"git",
"clone",
repo,
temp_dir,
"--no-checkout",
"--depth", "1",
"--config", "core.sparseCheckout=true"
])
def checkout_and_fetch(temp_dir):
subprocess.check_call([
"git",
"-C", temp_dir,
"fetch"
])
subprocess.check_call([
"git",
"-C", temp_dir,
"checkout"
])
def set_sparse_checkout_dir(temp_dir, subpath):
with (temp_dir / ".git" / "info" / "sparse-checkout").open("w") as file_:
file_.write(subpath)
def main(repo: str, subpath: str, dest: Path):
with make_tempdir() as temp_dir:
clone_repo(repo, temp_dir)
print("After clone", list(temp_dir.iterdir()))
set_sparse_checkout_dir(temp_dir, subpath)
checkout_and_fetch(temp_dir)
print("After checkout", list(temp_dir.iterdir()))
assert (temp_dir / subpath) in list(temp_dir.iterdir())
shutil.copytree(temp_dir / subpath, dest / subpath, dirs_exist_ok=True)
print("Exists after cleanup?", temp_dir.exists())
print("Destination", list(dest.iterdir()))
if __name__ == "__main__":
typer.run(main)
|
|
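A short aside on the sparse-checkout record above: the committed script configures sparse checkout by writing the subdirectory name into .git/info/sparse-checkout by hand. Newer Git releases (roughly 2.25 and later) expose a `git sparse-checkout set` subcommand that does the same bookkeeping; the sketch below is a hypothetical variant under that assumption, not part of the recorded commit.

import subprocess

def set_sparse_checkout_dir_cli(temp_dir, subpath):
    # Assumes a Git new enough to ship the sparse-checkout subcommand is on PATH.
    subprocess.check_call([
        "git",
        "-C", str(temp_dir),
        "sparse-checkout", "set", subpath,
    ])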
f3c047a39f3d8438487ab31764d82afb7a86524e
|
apps/api/permissions.py
|
apps/api/permissions.py
|
from oauth2_provider.ext.rest_framework import TokenHasScope
from rest_framework.permissions import DjangoObjectPermissions
class TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly(DjangoObjectPermissions, TokenHasScope):
"""
Allow anyone to write to this endpoint, but only the ones with the required scope to read.
"""
perms_map = {
'GET': ['%(app_label)s.view_%(model_name)s'],
'OPTIONS': [],
'HEAD': [],
'POST': ['%(app_label)s.add_%(model_name)s'],
'PUT': ['%(app_label)s.change_%(model_name)s'],
'PATCH': ['%(app_label)s.change_%(model_name)s'],
'DELETE': ['%(app_label)s.delete_%(model_name)s'],
}
def has_permission(self, request, view):
if request.method == 'POST':
return True
return super().has_permission(request, view)
|
Add API permission class for HasScopeOrWriteOnly
|
Add API permission class for HasScopeOrWriteOnly
|
Python
|
mit
|
dotKom/onlineweb4,dotKom/onlineweb4,dotKom/onlineweb4,dotKom/onlineweb4
|
Add API permission class for HasScopeOrWriteOnly
|
from oauth2_provider.ext.rest_framework import TokenHasScope
from rest_framework.permissions import DjangoObjectPermissions
class TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly(DjangoObjectPermissions, TokenHasScope):
"""
Allow anyone to write to this endpoint, but only the ones with the required scope to read.
"""
perms_map = {
'GET': ['%(app_label)s.view_%(model_name)s'],
'OPTIONS': [],
'HEAD': [],
'POST': ['%(app_label)s.add_%(model_name)s'],
'PUT': ['%(app_label)s.change_%(model_name)s'],
'PATCH': ['%(app_label)s.change_%(model_name)s'],
'DELETE': ['%(app_label)s.delete_%(model_name)s'],
}
def has_permission(self, request, view):
if request.method == 'POST':
return True
return super().has_permission(request, view)
|
<commit_before><commit_msg>Add API permission class for HasScopeOrWriteOnly<commit_after>
|
from oauth2_provider.ext.rest_framework import TokenHasScope
from rest_framework.permissions import DjangoObjectPermissions
class TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly(DjangoObjectPermissions, TokenHasScope):
"""
Allow anyone to write to this endpoint, but only the ones with the required scope to read.
"""
perms_map = {
'GET': ['%(app_label)s.view_%(model_name)s'],
'OPTIONS': [],
'HEAD': [],
'POST': ['%(app_label)s.add_%(model_name)s'],
'PUT': ['%(app_label)s.change_%(model_name)s'],
'PATCH': ['%(app_label)s.change_%(model_name)s'],
'DELETE': ['%(app_label)s.delete_%(model_name)s'],
}
def has_permission(self, request, view):
if request.method == 'POST':
return True
return super().has_permission(request, view)
|
Add API permission class for HasScopeOrWriteOnlyfrom oauth2_provider.ext.rest_framework import TokenHasScope
from rest_framework.permissions import DjangoObjectPermissions
class TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly(DjangoObjectPermissions, TokenHasScope):
"""
Allow anyone to write to this endpoint, but only the ones with the required scope to read.
"""
perms_map = {
'GET': ['%(app_label)s.view_%(model_name)s'],
'OPTIONS': [],
'HEAD': [],
'POST': ['%(app_label)s.add_%(model_name)s'],
'PUT': ['%(app_label)s.change_%(model_name)s'],
'PATCH': ['%(app_label)s.change_%(model_name)s'],
'DELETE': ['%(app_label)s.delete_%(model_name)s'],
}
def has_permission(self, request, view):
if request.method == 'POST':
return True
return super().has_permission(request, view)
|
<commit_before><commit_msg>Add API permission class for HasScopeOrWriteOnly<commit_after>from oauth2_provider.ext.rest_framework import TokenHasScope
from rest_framework.permissions import DjangoObjectPermissions
class TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly(DjangoObjectPermissions, TokenHasScope):
"""
Allow anyone to write to this endpoint, but only the ones with the required scope to read.
"""
perms_map = {
'GET': ['%(app_label)s.view_%(model_name)s'],
'OPTIONS': [],
'HEAD': [],
'POST': ['%(app_label)s.add_%(model_name)s'],
'PUT': ['%(app_label)s.change_%(model_name)s'],
'PATCH': ['%(app_label)s.change_%(model_name)s'],
'DELETE': ['%(app_label)s.delete_%(model_name)s'],
}
def has_permission(self, request, view):
if request.method == 'POST':
return True
return super().has_permission(request, view)
|
|
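As a hedged illustration of how the permission class in the record above would typically be attached to an endpoint: Django REST framework reads `permission_classes` from the view, and django-oauth-toolkit's TokenHasScope consults `required_scopes` on the same view. The model, serializer, and scope names below are placeholders, not taken from the repository.

from rest_framework import viewsets

from apps.api.permissions import TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly

class FeedbackViewSet(viewsets.ModelViewSet):
    queryset = Feedback.objects.all()          # placeholder model
    serializer_class = FeedbackSerializer      # placeholder serializer
    required_scopes = ['feedback']             # consulted by TokenHasScope
    permission_classes = [TokenHasScopeOrUserHasObjectPermissionsOrWriteOnly]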
5ac528eff7c5cb49e329de720cbabc2ac89fc50c
|
ideascube/conf/kb_bdi_tv5monde.py
|
ideascube/conf/kb_bdi_tv5monde.py
|
# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .kb import * # noqa
LANGUAGE_CODE = 'fr'
IDEASCUBE_NAME = 'TV5 Monde Burundi'
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'gutemberg',
},
{
'id': 'khanacademy',
},
]
|
Add conf file for TV5 Monde Burundi
|
Add conf file for TV5 Monde Burundi
|
Python
|
agpl-3.0
|
ideascube/ideascube,ideascube/ideascube,ideascube/ideascube,ideascube/ideascube
|
Add conf file for TV5 Monde Burundi
|
# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .kb import * # noqa
LANGUAGE_CODE = 'fr'
IDEASCUBE_NAME = 'TV5 Monde Burundi'
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'gutemberg',
},
{
'id': 'khanacademy',
},
]
|
<commit_before><commit_msg>Add conf file for TV5 Monde Burundi<commit_after>
|
# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .kb import * # noqa
LANGUAGE_CODE = 'fr'
IDEASCUBE_NAME = 'TV5 Monde Burundi'
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'gutemberg',
},
{
'id': 'khanacademy',
},
]
|
Add conf file for TV5 Monde Burundi# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .kb import * # noqa
LANGUAGE_CODE = 'fr'
IDEASCUBE_NAME = 'TV5 Monde Burundi'
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'gutemberg',
},
{
'id': 'khanacademy',
},
]
|
<commit_before><commit_msg>Add conf file for TV5 Monde Burundi<commit_after># -*- coding: utf-8 -*-
"""KoomBook conf"""
from .kb import * # noqa
LANGUAGE_CODE = 'fr'
IDEASCUBE_NAME = 'TV5 Monde Burundi'
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'gutemberg',
},
{
'id': 'khanacademy',
},
]
|
|
c3dd34a6fa80be8a90d178e3e0d28716583f9a9a
|
src/data_collection/FaceCollector_Main.py
|
src/data_collection/FaceCollector_Main.py
|
import cv2, sys, os, time, logging
video = cv2.VideoCapture(0)
while True:
ret, cameraFrame = video.read()
if not ret:
exit()
cv2.imshow("Live Video", cameraFrame)
continue
|
Test the webcam with cv2
|
Test the webcam with cv2
|
Python
|
apache-2.0
|
xphongvn/smart-attendance-system-ta,xphongvn/smart-attendance-system-ta,xphongvn/smart-attendance-system-ta
|
Test the webcam with cv2
|
import cv2, sys, os, time, logging
video = cv2.VideoCapture(0)
while True:
ret, cameraFrame = video.read()
if not ret:
exit()
cv2.imshow("Live Video", cameraFrame)
continue
|
<commit_before><commit_msg>Test the webcam with cv2<commit_after>
|
import cv2, sys, os, time, logging
video = cv2.VideoCapture(0)
while True:
ret, cameraFrame = video.read()
if not ret:
exit()
cv2.imshow("Live Video", cameraFrame)
continue
|
Test the webcam with cv2import cv2, sys, os, time, logging
video = cv2.VideoCapture(0)
while True:
ret, cameraFrame = video.read()
if not ret:
exit()
cv2.imshow("Live Video", cameraFrame)
continue
|
<commit_before><commit_msg>Test the webcam with cv2<commit_after>import cv2, sys, os, time, logging
video = cv2.VideoCapture(0)
while True:
ret, cameraFrame = video.read()
if not ret:
exit()
cv2.imshow("Live Video", cameraFrame)
continue
|
|
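One caveat worth noting on the webcam record above: the committed loop calls cv2.imshow without ever calling cv2.waitKey, so on most OpenCV builds the window is never refreshed and the loop offers no way to stop short of killing the process. A minimal sketch of the conventional pattern follows; the quit-on-'q' key is an assumption, not something in the recorded commit.

import cv2

video = cv2.VideoCapture(0)
while True:
    ret, frame = video.read()
    if not ret:
        break
    cv2.imshow("Live Video", frame)
    # waitKey pumps the GUI event loop and polls the keyboard; without it
    # imshow windows generally stay blank.
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
video.release()
cv2.destroyAllWindows()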
c3b286db4dd39cbe81b81e374819cba1fab2df13
|
ideascube/conf/kb_mooc_cog.py
|
ideascube/conf/kb_mooc_cog.py
|
# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .base import * # noqa
from django.utils.translation import ugettext_lazy as _
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(os.environ.get('DEBUG', True))
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['.koombook.lan.', 'localhost', '127.0.0.1']
LANGUAGE_CODE = 'fr'
TIME_ZONE = None
# Ideas Box specifics
STORAGE_ROOT = '/media/hdd/ideascube/storage'
IDEASCUBE_NAME = 'UNIVERSITE RDC'
DOMAIN = 'koombook.lan'
BACKUP_FORMAT = 'gztar'
STAFF_HOME_CARDS = [c for c in STAFF_HOME_CARDS if c['url'] in ['user_list', 'server:power', 'server:backup', 'server:wifi']]
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'wikipedia',
},
]
IDEASCUBE_BODY_ID = 'koombook'
|
Add conf file for MOOC RDC
|
Add conf file for MOOC RDC
|
Python
|
agpl-3.0
|
ideascube/ideascube,ideascube/ideascube,ideascube/ideascube,ideascube/ideascube
|
Add conf file for MOOC RDC
|
# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .base import * # noqa
from django.utils.translation import ugettext_lazy as _
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(os.environ.get('DEBUG', True))
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['.koombook.lan.', 'localhost', '127.0.0.1']
LANGUAGE_CODE = 'fr'
TIME_ZONE = None
# Ideas Box specifics
STORAGE_ROOT = '/media/hdd/ideascube/storage'
IDEASCUBE_NAME = 'UNIVERSITE RDC'
DOMAIN = 'koombook.lan'
BACKUP_FORMAT = 'gztar'
STAFF_HOME_CARDS = [c for c in STAFF_HOME_CARDS if c['url'] in ['user_list', 'server:power', 'server:backup', 'server:wifi']]
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'wikipedia',
},
]
IDEASCUBE_BODY_ID = 'koombook'
|
<commit_before><commit_msg>Add conf file for MOOC RDC<commit_after>
|
# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .base import * # noqa
from django.utils.translation import ugettext_lazy as _
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(os.environ.get('DEBUG', True))
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['.koombook.lan.', 'localhost', '127.0.0.1']
LANGUAGE_CODE = 'fr'
TIME_ZONE = None
# Ideas Box specifics
STORAGE_ROOT = '/media/hdd/ideascube/storage'
IDEASCUBE_NAME = 'UNIVERSITE RDC'
DOMAIN = 'koombook.lan'
BACKUP_FORMAT = 'gztar'
STAFF_HOME_CARDS = [c for c in STAFF_HOME_CARDS if c['url'] in ['user_list', 'server:power', 'server:backup', 'server:wifi']]
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'wikipedia',
},
]
IDEASCUBE_BODY_ID = 'koombook'
|
Add conf file for MOOC RDC# -*- coding: utf-8 -*-
"""KoomBook conf"""
from .base import * # noqa
from django.utils.translation import ugettext_lazy as _
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(os.environ.get('DEBUG', True))
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['.koombook.lan.', 'localhost', '127.0.0.1']
LANGUAGE_CODE = 'fr'
TIME_ZONE = None
# Ideas Box specifics
STORAGE_ROOT = '/media/hdd/ideascube/storage'
IDEASCUBE_NAME = 'UNIVERSITE RDC'
DOMAIN = 'koombook.lan'
BACKUP_FORMAT = 'gztar'
STAFF_HOME_CARDS = [c for c in STAFF_HOME_CARDS if c['url'] in ['user_list', 'server:power', 'server:backup', 'server:wifi']]
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'wikipedia',
},
]
IDEASCUBE_BODY_ID = 'koombook'
|
<commit_before><commit_msg>Add conf file for MOOC RDC<commit_after># -*- coding: utf-8 -*-
"""KoomBook conf"""
from .base import * # noqa
from django.utils.translation import ugettext_lazy as _
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = bool(os.environ.get('DEBUG', True))
TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['.koombook.lan.', 'localhost', '127.0.0.1']
LANGUAGE_CODE = 'fr'
TIME_ZONE = None
# Ideas Box specifics
STORAGE_ROOT = '/media/hdd/ideascube/storage'
IDEASCUBE_NAME = 'UNIVERSITE RDC'
DOMAIN = 'koombook.lan'
BACKUP_FORMAT = 'gztar'
STAFF_HOME_CARDS = [c for c in STAFF_HOME_CARDS if c['url'] in ['user_list', 'server:power', 'server:backup', 'server:wifi']]
HOME_CARDS = STAFF_HOME_CARDS + [
{
'id': 'blog',
},
{
'id': 'mediacenter',
},
{
'id': 'wikipedia',
},
]
IDEASCUBE_BODY_ID = 'koombook'
|
|
e082c56ef5629924e3d760fb86eff182cc3579a5
|
misc/make-msi-package.py
|
misc/make-msi-package.py
|
# Copyright 2016, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Release: Create and upload Windows MSI files for Nuitka
"""
from __future__ import print_function
import os
import shutil
import subprocess
import sys
if os.path.isdir("dist"):
shutil.rmtree("dist")
branch_name = subprocess.check_output(
"git name-rev --name-only HEAD".split()
).strip()
print("Building for branch '%s'." % branch_name)
assert branch_name in (
b"master",
b"develop",
b"factory",
), branch_name
assert 0 == subprocess.call(
(
sys.executable,
"setup.py",
"bdist_msi",
"--target-version=" + sys.version[:3]
)
)
for filename in os.listdir("dist"):
if not filename.endswith(".msi"):
continue
break
else:
sys.exit("No MSI created.")
parts = [
filename[:-4].\
replace("-py2.6","").\
replace("-py2.7","").\
replace("-py3.2","").\
replace("-py3.3","").\
replace("-py3.4","").\
replace("-py3.5","").\
replace("Nuitka32","Nuitka").\
replace("Nuitka64","Nuitka"),
"py" + sys.version[:3].replace('.',""),
"msi"
]
new_filename = '.'.join(parts)
if branch_name == b"factory":
new_filename = "Nuitka-factory." + new_filename[new_filename.find("win"):]
os.rename(os.path.join("dist",filename),os.path.join("dist",new_filename))
print("OK, created as dist/" + new_filename)
|
Add script to only create the MSI package.
|
Release: Add script to only create the MSI package.
* This is for external CI to produce MSI packages instead of my
internal one.
|
Python
|
apache-2.0
|
kayhayen/Nuitka,kayhayen/Nuitka,kayhayen/Nuitka,kayhayen/Nuitka
|
Release: Add script to only create the MSI package.
* This is for external CI to produce MSI packages instead of my
internal one.
|
# Copyright 2016, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Release: Create and upload Windows MSI files for Nuitka
"""
from __future__ import print_function
import os
import shutil
import subprocess
import sys
if os.path.isdir("dist"):
shutil.rmtree("dist")
branch_name = subprocess.check_output(
"git name-rev --name-only HEAD".split()
).strip()
print("Building for branch '%s'." % branch_name)
assert branch_name in (
b"master",
b"develop",
b"factory",
), branch_name
assert 0 == subprocess.call(
(
sys.executable,
"setup.py",
"bdist_msi",
"--target-version=" + sys.version[:3]
)
)
for filename in os.listdir("dist"):
if not filename.endswith(".msi"):
continue
break
else:
sys.exit("No MSI created.")
parts = [
filename[:-4].\
replace("-py2.6","").\
replace("-py2.7","").\
replace("-py3.2","").\
replace("-py3.3","").\
replace("-py3.4","").\
replace("-py3.5","").\
replace("Nuitka32","Nuitka").\
replace("Nuitka64","Nuitka"),
"py" + sys.version[:3].replace('.',""),
"msi"
]
new_filename = '.'.join(parts)
if branch_name == b"factory":
new_filename = "Nuitka-factory." + new_filename[new_filename.find("win"):]
os.rename(os.path.join("dist",filename),os.path.join("dist",new_filename))
print("OK, created as dist/" + new_filename)
|
<commit_before><commit_msg>Release: Add script to only create the MSI package.
* This is for external CI to produce MSI packages instead of my
internal one.<commit_after>
|
# Copyright 2016, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Release: Create and upload Windows MSI files for Nuitka
"""
from __future__ import print_function
import os
import shutil
import subprocess
import sys
if os.path.isdir("dist"):
shutil.rmtree("dist")
branch_name = subprocess.check_output(
"git name-rev --name-only HEAD".split()
).strip()
print("Building for branch '%s'." % branch_name)
assert branch_name in (
b"master",
b"develop",
b"factory",
), branch_name
assert 0 == subprocess.call(
(
sys.executable,
"setup.py",
"bdist_msi",
"--target-version=" + sys.version[:3]
)
)
for filename in os.listdir("dist"):
if not filename.endswith(".msi"):
continue
break
else:
sys.exit("No MSI created.")
parts = [
filename[:-4].\
replace("-py2.6","").\
replace("-py2.7","").\
replace("-py3.2","").\
replace("-py3.3","").\
replace("-py3.4","").\
replace("-py3.5","").\
replace("Nuitka32","Nuitka").\
replace("Nuitka64","Nuitka"),
"py" + sys.version[:3].replace('.',""),
"msi"
]
new_filename = '.'.join(parts)
if branch_name == b"factory":
new_filename = "Nuitka-factory." + new_filename[new_filename.find("win"):]
os.rename(os.path.join("dist",filename),os.path.join("dist",new_filename))
print("OK, created as dist/" + new_filename)
|
Release: Add script to only create the MSI package.
* This is for external CI to produce MSI packages instead of my
internal one.# Copyright 2016, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Release: Create and upload Windows MSI files for Nuitka
"""
from __future__ import print_function
import os
import shutil
import subprocess
import sys
if os.path.isdir("dist"):
shutil.rmtree("dist")
branch_name = subprocess.check_output(
"git name-rev --name-only HEAD".split()
).strip()
print("Building for branch '%s'." % branch_name)
assert branch_name in (
b"master",
b"develop",
b"factory",
), branch_name
assert 0 == subprocess.call(
(
sys.executable,
"setup.py",
"bdist_msi",
"--target-version=" + sys.version[:3]
)
)
for filename in os.listdir("dist"):
if not filename.endswith(".msi"):
continue
break
else:
sys.exit("No MSI created.")
parts = [
filename[:-4].\
replace("-py2.6","").\
replace("-py2.7","").\
replace("-py3.2","").\
replace("-py3.3","").\
replace("-py3.4","").\
replace("-py3.5","").\
replace("Nuitka32","Nuitka").\
replace("Nuitka64","Nuitka"),
"py" + sys.version[:3].replace('.',""),
"msi"
]
new_filename = '.'.join(parts)
if branch_name == b"factory":
new_filename = "Nuitka-factory." + new_filename[new_filename.find("win"):]
os.rename(os.path.join("dist",filename),os.path.join("dist",new_filename))
print("OK, created as dist/" + new_filename)
|
<commit_before><commit_msg>Release: Add script to only create the MSI package.
* This is for external CI to produce MSI packages instead of my
internal one.<commit_after># Copyright 2016, Kay Hayen, mailto:kay.hayen@gmail.com
#
# Part of "Nuitka", an optimizing Python compiler that is compatible and
# integrates with CPython, but also works on its own.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Release: Create and upload Windows MSI files for Nuitka
"""
from __future__ import print_function
import os
import shutil
import subprocess
import sys
if os.path.isdir("dist"):
shutil.rmtree("dist")
branch_name = subprocess.check_output(
"git name-rev --name-only HEAD".split()
).strip()
print("Building for branch '%s'." % branch_name)
assert branch_name in (
b"master",
b"develop",
b"factory",
), branch_name
assert 0 == subprocess.call(
(
sys.executable,
"setup.py",
"bdist_msi",
"--target-version=" + sys.version[:3]
)
)
for filename in os.listdir("dist"):
if not filename.endswith(".msi"):
continue
break
else:
sys.exit("No MSI created.")
parts = [
filename[:-4].\
replace("-py2.6","").\
replace("-py2.7","").\
replace("-py3.2","").\
replace("-py3.3","").\
replace("-py3.4","").\
replace("-py3.5","").\
replace("Nuitka32","Nuitka").\
replace("Nuitka64","Nuitka"),
"py" + sys.version[:3].replace('.',""),
"msi"
]
new_filename = '.'.join(parts)
if branch_name == b"factory":
new_filename = "Nuitka-factory." + new_filename[new_filename.find("win"):]
os.rename(os.path.join("dist",filename),os.path.join("dist",new_filename))
print("OK, created as dist/" + new_filename)
|
|
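A note on the MSI release script above: it gates the bdist_msi step behind `assert 0 == subprocess.call(...)`. Assert statements are stripped when Python runs with -O, in which case the build call would never execute at all. A hedged alternative for that one step, using check_call so a non-zero exit raises regardless of optimization flags, is sketched below.

import subprocess
import sys

# Same build invocation as the record, but failure raises CalledProcessError
# instead of relying on an assert that "python -O" would remove.
subprocess.check_call([
    sys.executable,
    "setup.py",
    "bdist_msi",
    "--target-version=" + sys.version[:3],
])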
2c14466740a53619acc34a8599dfcd2ff3244db6
|
src/proposals/tests/test_management.py
|
src/proposals/tests/test_management.py
|
from datetime import timedelta
import re
import pytest
from django.utils.timezone import now
from django.core.management import call_command
from proposals.models import TalkProposal
@pytest.fixture()
def weekago_talk_proposal(user):
dt_weekago = now() - timedelta(weeks=1)
proposal = TalkProposal.objects.create(
id=56,
submitter=user,
title='Long long time ago when Python was still 2.x',
)
proposal.created_at = dt_weekago
proposal.save()
return proposal
def test_weekago_talk_created_datetime(weekago_talk_proposal):
proposal_lifetime = now() - weekago_talk_proposal.created_at
print('The proposal has been created for %d days' % proposal_lifetime.days)
assert proposal_lifetime >= timedelta(weeks=1)
@pytest.fixture()
def another_user_dayago_talk_proposal(another_user):
dt_dayago = now() - timedelta(days=1)
proposal = TalkProposal.objects.create(
id=9527,
submitter=another_user,
title='Transition from Ruby to Python',
created_at=dt_dayago,
)
return proposal
def test_recent_proposal_command_default(
talk_proposal, weekago_talk_proposal,
another_user_dayago_talk_proposal,
capsys,
):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(out)
# Test only two talk proposals are retrieved
assert re.search(r"Got total 2 new proposals", out, re.MULTILINE)
# Test the title of these two proposals are in the output
for proposal in [talk_proposal, another_user_dayago_talk_proposal]:
assert re.search(proposal.title, out, re.MULTILINE)
# Test the title of outdated proposals are not in the output
assert not re.search(weekago_talk_proposal.title, out, re.MULTILINE)
def test_cancelled_proposal_not_shown_in_recent_proposals(
cancelled_talk_proposal,
another_user_dayago_talk_proposal,
weekago_talk_proposal,
capsys,
):
call_command('recent_proposals', recent=6)
out, err = capsys.readouterr()
print(out)
# Test only one talk proposal is retrieved
assert re.search(r"Got total 1 new proposals", out, re.MULTILINE)
assert re.search(another_user_dayago_talk_proposal.title, out, re.MULTILINE)
for proposal in [cancelled_talk_proposal, weekago_talk_proposal]:
assert not re.search(
proposal.title, out, re.MULTILINE
)
@pytest.mark.django_db
def test_no_recent_proposal(capsys):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(err)
assert re.search('No proposals are recently submitted', err)
|
Add unittest for command recent_proposals
|
Add unittest for command recent_proposals
|
Python
|
mit
|
pycontw/pycontw2016,uranusjr/pycontw2016,pycontw/pycontw2016,pycontw/pycontw2016,pycontw/pycontw2016,uranusjr/pycontw2016,uranusjr/pycontw2016,uranusjr/pycontw2016
|
Add unittest for command recent_proposals
|
from datetime import timedelta
import re
import pytest
from django.utils.timezone import now
from django.core.management import call_command
from proposals.models import TalkProposal
@pytest.fixture()
def weekago_talk_proposal(user):
dt_weekago = now() - timedelta(weeks=1)
proposal = TalkProposal.objects.create(
id=56,
submitter=user,
title='Long long time ago when Python was still 2.x',
)
proposal.created_at = dt_weekago
proposal.save()
return proposal
def test_weekago_talk_created_datetime(weekago_talk_proposal):
proposal_lifetime = now() - weekago_talk_proposal.created_at
print('The proposal has been created for %d days' % proposal_lifetime.days)
assert proposal_lifetime >= timedelta(weeks=1)
@pytest.fixture()
def another_user_dayago_talk_proposal(another_user):
dt_dayago = now() - timedelta(days=1)
proposal = TalkProposal.objects.create(
id=9527,
submitter=another_user,
title='Transition from Ruby to Python',
created_at=dt_dayago,
)
return proposal
def test_recent_proposal_command_default(
talk_proposal, weekago_talk_proposal,
another_user_dayago_talk_proposal,
capsys,
):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(out)
# Test only two talk proposals are retrieved
assert re.search(r"Got total 2 new proposals", out, re.MULTILINE)
# Test the title of these two proposals are in the output
for proposal in [talk_proposal, another_user_dayago_talk_proposal]:
assert re.search(proposal.title, out, re.MULTILINE)
# Test the title of outdated proposals are not in the output
assert not re.search(weekago_talk_proposal.title, out, re.MULTILINE)
def test_cancelled_proposal_not_shown_in_recent_proposals(
cancelled_talk_proposal,
another_user_dayago_talk_proposal,
weekago_talk_proposal,
capsys,
):
call_command('recent_proposals', recent=6)
out, err = capsys.readouterr()
print(out)
# Test only one talk proposal is retrieved
assert re.search(r"Got total 1 new proposals", out, re.MULTILINE)
assert re.search(another_user_dayago_talk_proposal.title, out, re.MULTILINE)
for proposal in [cancelled_talk_proposal, weekago_talk_proposal]:
assert not re.search(
proposal.title, out, re.MULTILINE
)
@pytest.mark.django_db
def test_no_recent_proposal(capsys):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(err)
assert re.search('No proposals are recently submitted', err)
|
<commit_before><commit_msg>Add unittest for command recent_proposals<commit_after>
|
from datetime import timedelta
import re
import pytest
from django.utils.timezone import now
from django.core.management import call_command
from proposals.models import TalkProposal
@pytest.fixture()
def weekago_talk_proposal(user):
dt_weekago = now() - timedelta(weeks=1)
proposal = TalkProposal.objects.create(
id=56,
submitter=user,
title='Long long time ago when Python was still 2.x',
)
proposal.created_at = dt_weekago
proposal.save()
return proposal
def test_weekago_talk_created_datetime(weekago_talk_proposal):
proposal_lifetime = now() - weekago_talk_proposal.created_at
print('The proposal has been created for %d days' % proposal_lifetime.days)
assert proposal_lifetime >= timedelta(weeks=1)
@pytest.fixture()
def another_user_dayago_talk_proposal(another_user):
dt_dayago = now() - timedelta(days=1)
proposal = TalkProposal.objects.create(
id=9527,
submitter=another_user,
title='Transition from Ruby to Python',
created_at=dt_dayago,
)
return proposal
def test_recent_proposal_command_default(
talk_proposal, weekago_talk_proposal,
another_user_dayago_talk_proposal,
capsys,
):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(out)
# Test only two talk proposals are retrieved
assert re.search(r"Got total 2 new proposals", out, re.MULTILINE)
# Test the title of these two proposals are in the output
for proposal in [talk_proposal, another_user_dayago_talk_proposal]:
assert re.search(proposal.title, out, re.MULTILINE)
# Test the title of outdated proposals are not in the output
assert not re.search(weekago_talk_proposal.title, out, re.MULTILINE)
def test_cancelled_proposal_not_shown_in_recent_proposals(
cancelled_talk_proposal,
another_user_dayago_talk_proposal,
weekago_talk_proposal,
capsys,
):
call_command('recent_proposals', recent=6)
out, err = capsys.readouterr()
print(out)
# Test only one talk proposal is retrieved
assert re.search(r"Got total 1 new proposals", out, re.MULTILINE)
assert re.search(another_user_dayago_talk_proposal.title, out, re.MULTILINE)
for proposal in [cancelled_talk_proposal, weekago_talk_proposal]:
assert not re.search(
proposal.title, out, re.MULTILINE
)
@pytest.mark.django_db
def test_no_recent_proposal(capsys):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(err)
assert re.search('No proposals are recently submitted', err)
|
Add unittest for command recent_proposalsfrom datetime import timedelta
import re
import pytest
from django.utils.timezone import now
from django.core.management import call_command
from proposals.models import TalkProposal
@pytest.fixture()
def weekago_talk_proposal(user):
dt_weekago = now() - timedelta(weeks=1)
proposal = TalkProposal.objects.create(
id=56,
submitter=user,
title='Long long time ago when Python was still 2.x',
)
proposal.created_at = dt_weekago
proposal.save()
return proposal
def test_weekago_talk_created_datetime(weekago_talk_proposal):
proposal_lifetime = now() - weekago_talk_proposal.created_at
print('The proposal has been created for %d days' % proposal_lifetime.days)
assert proposal_lifetime >= timedelta(weeks=1)
@pytest.fixture()
def another_user_dayago_talk_proposal(another_user):
dt_dayago = now() - timedelta(days=1)
proposal = TalkProposal.objects.create(
id=9527,
submitter=another_user,
title='Transition from Ruby to Python',
created_at=dt_dayago,
)
return proposal
def test_recent_proposal_command_default(
talk_proposal, weekago_talk_proposal,
another_user_dayago_talk_proposal,
capsys,
):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(out)
# Test only two talk proposals are retrieved
assert re.search(r"Got total 2 new proposals", out, re.MULTILINE)
# Test the title of these two proposals are in the output
for proposal in [talk_proposal, another_user_dayago_talk_proposal]:
assert re.search(proposal.title, out, re.MULTILINE)
# Test the title of outdated proposals are not in the output
assert not re.search(weekago_talk_proposal.title, out, re.MULTILINE)
def test_cancelled_proposal_not_shown_in_recent_proposals(
cancelled_talk_proposal,
another_user_dayago_talk_proposal,
weekago_talk_proposal,
capsys,
):
call_command('recent_proposals', recent=6)
out, err = capsys.readouterr()
print(out)
# Test only one talk proposal is retrieved
assert re.search(r"Got total 1 new proposals", out, re.MULTILINE)
assert re.search(another_user_dayago_talk_proposal.title, out, re.MULTILINE)
for proposal in [cancelled_talk_proposal, weekago_talk_proposal]:
assert not re.search(
proposal.title, out, re.MULTILINE
)
@pytest.mark.django_db
def test_no_recent_proposal(capsys):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(err)
assert re.search('No proposals are recently submitted', err)
|
<commit_before><commit_msg>Add unittest for command recent_proposals<commit_after>from datetime import timedelta
import re
import pytest
from django.utils.timezone import now
from django.core.management import call_command
from proposals.models import TalkProposal
@pytest.fixture()
def weekago_talk_proposal(user):
dt_weekago = now() - timedelta(weeks=1)
proposal = TalkProposal.objects.create(
id=56,
submitter=user,
title='Long long time ago when Python was still 2.x',
)
proposal.created_at = dt_weekago
proposal.save()
return proposal
def test_weekago_talk_created_datetime(weekago_talk_proposal):
proposal_lifetime = now() - weekago_talk_proposal.created_at
print('The proposal has been created for %d days' % proposal_lifetime.days)
assert proposal_lifetime >= timedelta(weeks=1)
@pytest.fixture()
def another_user_dayago_talk_proposal(another_user):
dt_dayago = now() - timedelta(days=1)
proposal = TalkProposal.objects.create(
id=9527,
submitter=another_user,
title='Transition from Ruby to Python',
created_at=dt_dayago,
)
return proposal
def test_recent_proposal_command_default(
talk_proposal, weekago_talk_proposal,
another_user_dayago_talk_proposal,
capsys,
):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(out)
# Test only two talk proposals are retrieved
assert re.search(r"Got total 2 new proposals", out, re.MULTILINE)
# Test the title of these two proposals are in the output
for proposal in [talk_proposal, another_user_dayago_talk_proposal]:
assert re.search(proposal.title, out, re.MULTILINE)
# Test the title of outdated proposals are not in the output
assert not re.search(weekago_talk_proposal.title, out, re.MULTILINE)
def test_cancelled_proposal_not_shown_in_recent_proposals(
cancelled_talk_proposal,
another_user_dayago_talk_proposal,
weekago_talk_proposal,
capsys,
):
call_command('recent_proposals', recent=6)
out, err = capsys.readouterr()
print(out)
# Test only one talk proposal is retrieved
assert re.search(r"Got total 1 new proposals", out, re.MULTILINE)
assert re.search(another_user_dayago_talk_proposal.title, out, re.MULTILINE)
for proposal in [cancelled_talk_proposal, weekago_talk_proposal]:
assert not re.search(
proposal.title, out, re.MULTILINE
)
@pytest.mark.django_db
def test_no_recent_proposal(capsys):
call_command('recent_proposals')
out, err = capsys.readouterr()
print(err)
assert re.search('No proposals are recently submitted', err)
|
|
0e00742cf60285fc6d45ec5762e00d439ba61ddd
|
IVoiSysAuthorisation.py
|
IVoiSysAuthorisation.py
|
# Copyright (c) 2003-2005 Maxim Sobolev. All rights reserved.
# Copyright (c) 2006-2007 Sippy Software, Inc. All rights reserved.
#
# This file is part of SIPPY, a free RFC3261 SIP stack and B2BUA.
#
# SIPPY is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# For a license to use the SIPPY software under conditions
# other than those described here, or to purchase support for this
# software, please contact Sippy Software, Inc. by e-mail at the
# following addresses: sales@sippysoft.com.
#
# SIPPY is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
from SQLTransactionManager import SQLTransactionManager
from time import time
class IVoiSysAuthorisation(object):
def __init__(self, global_config):
dsn = 'mysql://sbcfront:gb19!lDLn2#)F$NFbd2*@sbcdb1.pennytel.com/sbc'
self.sql_tm = SQLTransactionManager(dsn, nworkers = 4, lazy_connect = True)
def do_auth(self, username, res_cb, *cb_args):
self.sql_tm.sendQuery(('SELECT password, outbound_proxy, domain FROM SBC_Reg_Config ' \
'WHERE account_number = \'%s\'' % username), self._process_result, 0, False, None,
res_cb, cb_args)
def _process_result(self, results, exceptions, res_cb, cb_args):
print results, exceptions
if exceptions[0] != None or len(results[0]) == 0:
res_cb(None, *cb_args)
else:
password, outbound_proxy, domain = results[0][0]
res_cb((password, (outbound_proxy, 5060), domain), *cb_args)
|
Add class specific for IVoiSys to do DB auth.
|
Add class specific for IVoiSys to do DB auth.
|
Python
|
bsd-2-clause
|
jevonearth/rtpproxy,jevonearth/rtpproxy,dsanders11/rtpproxy,sippy/rtpproxy,dsanders11/rtpproxy,synety-jdebp/rtpproxy,jevonearth/rtpproxy,sippy/rtp_cluster,synety-jdebp/rtpproxy,jevonearth/rtpproxy,dsanders11/rtpproxy,sippy/rtpproxy,sippy/rtpproxy,synety-jdebp/rtpproxy,synety-jdebp/rtpproxy,sippy/rtp_cluster
|
Add class specific for IVoiSys to do DB auth.
|
# Copyright (c) 2003-2005 Maxim Sobolev. All rights reserved.
# Copyright (c) 2006-2007 Sippy Software, Inc. All rights reserved.
#
# This file is part of SIPPY, a free RFC3261 SIP stack and B2BUA.
#
# SIPPY is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# For a license to use the SIPPY software under conditions
# other than those described here, or to purchase support for this
# software, please contact Sippy Software, Inc. by e-mail at the
# following addresses: sales@sippysoft.com.
#
# SIPPY is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
from SQLTransactionManager import SQLTransactionManager
from time import time
class IVoiSysAuthorisation(object):
def __init__(self, global_config):
dsn = 'mysql://sbcfront:gb19!lDLn2#)F$NFbd2*@sbcdb1.pennytel.com/sbc'
self.sql_tm = SQLTransactionManager(dsn, nworkers = 4, lazy_connect = True)
def do_auth(self, username, res_cb, *cb_args):
self.sql_tm.sendQuery(('SELECT password, outbound_proxy, domain FROM SBC_Reg_Config ' \
'WHERE account_number = \'%s\'' % username), self._process_result, 0, False, None,
res_cb, cb_args)
def _process_result(self, results, exceptions, res_cb, cb_args):
print results, exceptions
if exceptions[0] != None or len(results[0]) == 0:
res_cb(None, *cb_args)
else:
password, outbound_proxy, domain = results[0][0]
res_cb((password, (outbound_proxy, 5060), domain), *cb_args)
|
<commit_before><commit_msg>Add class specific for IVoiSys to do DB auth.<commit_after>
|
# Copyright (c) 2003-2005 Maxim Sobolev. All rights reserved.
# Copyright (c) 2006-2007 Sippy Software, Inc. All rights reserved.
#
# This file is part of SIPPY, a free RFC3261 SIP stack and B2BUA.
#
# SIPPY is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# For a license to use the SIPPY software under conditions
# other than those described here, or to purchase support for this
# software, please contact Sippy Software, Inc. by e-mail at the
# following addresses: sales@sippysoft.com.
#
# SIPPY is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
from SQLTransactionManager import SQLTransactionManager
from time import time
class IVoiSysAuthorisation(object):
def __init__(self, global_config):
dsn = 'mysql://sbcfront:gb19!lDLn2#)F$NFbd2*@sbcdb1.pennytel.com/sbc'
self.sql_tm = SQLTransactionManager(dsn, nworkers = 4, lazy_connect = True)
def do_auth(self, username, res_cb, *cb_args):
self.sql_tm.sendQuery(('SELECT password, outbound_proxy, domain FROM SBC_Reg_Config ' \
'WHERE account_number = \'%s\'' % username), self._process_result, 0, False, None,
res_cb, cb_args)
def _process_result(self, results, exceptions, res_cb, cb_args):
print results, exceptions
if exceptions[0] != None or len(results[0]) == 0:
res_cb(None, *cb_args)
else:
password, outbound_proxy, domain = results[0][0]
res_cb((password, (outbound_proxy, 5060), domain), *cb_args)
|
Add class specific for IVoiSys to do DB auth.# Copyright (c) 2003-2005 Maxim Sobolev. All rights reserved.
# Copyright (c) 2006-2007 Sippy Software, Inc. All rights reserved.
#
# This file is part of SIPPY, a free RFC3261 SIP stack and B2BUA.
#
# SIPPY is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# For a license to use the SIPPY software under conditions
# other than those described here, or to purchase support for this
# software, please contact Sippy Software, Inc. by e-mail at the
# following addresses: sales@sippysoft.com.
#
# SIPPY is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
from SQLTransactionManager import SQLTransactionManager
from time import time
class IVoiSysAuthorisation(object):
def __init__(self, global_config):
dsn = 'mysql://sbcfront:gb19!lDLn2#)F$NFbd2*@sbcdb1.pennytel.com/sbc'
self.sql_tm = SQLTransactionManager(dsn, nworkers = 4, lazy_connect = True)
def do_auth(self, username, res_cb, *cb_args):
self.sql_tm.sendQuery(('SELECT password, outbound_proxy, domain FROM SBC_Reg_Config ' \
'WHERE account_number = \'%s\'' % username), self._process_result, 0, False, None,
res_cb, cb_args)
def _process_result(self, results, exceptions, res_cb, cb_args):
print results, exceptions
if exceptions[0] != None or len(results[0]) == 0:
res_cb(None, *cb_args)
else:
password, outbound_proxy, domain = results[0][0]
res_cb((password, (outbound_proxy, 5060), domain), *cb_args)
|
<commit_before><commit_msg>Add class specific for IVoiSys to do DB auth.<commit_after># Copyright (c) 2003-2005 Maxim Sobolev. All rights reserved.
# Copyright (c) 2006-2007 Sippy Software, Inc. All rights reserved.
#
# This file is part of SIPPY, a free RFC3261 SIP stack and B2BUA.
#
# SIPPY is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# For a license to use the SIPPY software under conditions
# other than those described here, or to purchase support for this
# software, please contact Sippy Software, Inc. by e-mail at the
# following addresses: sales@sippysoft.com.
#
# SIPPY is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
from SQLTransactionManager import SQLTransactionManager
from time import time
class IVoiSysAuthorisation(object):
def __init__(self, global_config):
dsn = 'mysql://sbcfront:gb19!lDLn2#)F$NFbd2*@sbcdb1.pennytel.com/sbc'
self.sql_tm = SQLTransactionManager(dsn, nworkers = 4, lazy_connect = True)
def do_auth(self, username, res_cb, *cb_args):
self.sql_tm.sendQuery(('SELECT password, outbound_proxy, domain FROM SBC_Reg_Config ' \
'WHERE account_number = \'%s\'' % username), self._process_result, 0, False, None,
res_cb, cb_args)
def _process_result(self, results, exceptions, res_cb, cb_args):
print results, exceptions
if exceptions[0] != None or len(results[0]) == 0:
res_cb(None, *cb_args)
else:
password, outbound_proxy, domain = results[0][0]
res_cb((password, (outbound_proxy, 5060), domain), *cb_args)
|
|
f51623142dfc089aeb46e986b1d0382f3fab3025
|
test/test_producer.py
|
test/test_producer.py
|
import pytest
from kafka import KafkaConsumer, KafkaProducer
from test.conftest import version
from test.testutil import random_string
@pytest.mark.skipif(not version(), reason="No KAFKA_VERSION set")
def test_end_to_end(kafka_broker):
connect_str = 'localhost:' + str(kafka_broker.port)
producer = KafkaProducer(bootstrap_servers=connect_str,
max_block_ms=10000,
value_serializer=str.encode)
consumer = KafkaConsumer(bootstrap_servers=connect_str,
consumer_timeout_ms=10000,
auto_offset_reset='earliest',
value_deserializer=bytes.decode)
topic = random_string(5)
for i in range(1000):
producer.send(topic, 'msg %d' % i)
producer.flush()
producer.close()
consumer.subscribe([topic])
msgs = set()
for i in range(1000):
try:
msgs.add(next(consumer).value)
except StopIteration:
break
assert msgs == set(['msg %d' % i for i in range(1000)])
|
Add simple KafkaProducer -> KafkaConsumer integration test
|
Add simple KafkaProducer -> KafkaConsumer integration test
|
Python
|
apache-2.0
|
wikimedia/operations-debs-python-kafka,zackdever/kafka-python,ohmu/kafka-python,mumrah/kafka-python,scrapinghub/kafka-python,dpkp/kafka-python,Yelp/kafka-python,mumrah/kafka-python,DataDog/kafka-python,scrapinghub/kafka-python,zackdever/kafka-python,wikimedia/operations-debs-python-kafka,Yelp/kafka-python,ohmu/kafka-python,Aloomaio/kafka-python,Aloomaio/kafka-python,dpkp/kafka-python
|
Add simple KafkaProducer -> KafkaConsumer integration test
|
import pytest
from kafka import KafkaConsumer, KafkaProducer
from test.conftest import version
from test.testutil import random_string
@pytest.mark.skipif(not version(), reason="No KAFKA_VERSION set")
def test_end_to_end(kafka_broker):
connect_str = 'localhost:' + str(kafka_broker.port)
producer = KafkaProducer(bootstrap_servers=connect_str,
max_block_ms=10000,
value_serializer=str.encode)
consumer = KafkaConsumer(bootstrap_servers=connect_str,
consumer_timeout_ms=10000,
auto_offset_reset='earliest',
value_deserializer=bytes.decode)
topic = random_string(5)
for i in range(1000):
producer.send(topic, 'msg %d' % i)
producer.flush()
producer.close()
consumer.subscribe([topic])
msgs = set()
for i in range(1000):
try:
msgs.add(next(consumer).value)
except StopIteration:
break
assert msgs == set(['msg %d' % i for i in range(1000)])
|
<commit_before><commit_msg>Add simple KafkaProducer -> KafkaConsumer integration test<commit_after>
|
import pytest
from kafka import KafkaConsumer, KafkaProducer
from test.conftest import version
from test.testutil import random_string
@pytest.mark.skipif(not version(), reason="No KAFKA_VERSION set")
def test_end_to_end(kafka_broker):
connect_str = 'localhost:' + str(kafka_broker.port)
producer = KafkaProducer(bootstrap_servers=connect_str,
max_block_ms=10000,
value_serializer=str.encode)
consumer = KafkaConsumer(bootstrap_servers=connect_str,
consumer_timeout_ms=10000,
auto_offset_reset='earliest',
value_deserializer=bytes.decode)
topic = random_string(5)
for i in range(1000):
producer.send(topic, 'msg %d' % i)
producer.flush()
producer.close()
consumer.subscribe([topic])
msgs = set()
for i in range(1000):
try:
msgs.add(next(consumer).value)
except StopIteration:
break
assert msgs == set(['msg %d' % i for i in range(1000)])
|
Add simple KafkaProducer -> KafkaConsumer integration testimport pytest
from kafka import KafkaConsumer, KafkaProducer
from test.conftest import version
from test.testutil import random_string
@pytest.mark.skipif(not version(), reason="No KAFKA_VERSION set")
def test_end_to_end(kafka_broker):
connect_str = 'localhost:' + str(kafka_broker.port)
producer = KafkaProducer(bootstrap_servers=connect_str,
max_block_ms=10000,
value_serializer=str.encode)
consumer = KafkaConsumer(bootstrap_servers=connect_str,
consumer_timeout_ms=10000,
auto_offset_reset='earliest',
value_deserializer=bytes.decode)
topic = random_string(5)
for i in range(1000):
producer.send(topic, 'msg %d' % i)
producer.flush()
producer.close()
consumer.subscribe([topic])
msgs = set()
for i in range(1000):
try:
msgs.add(next(consumer).value)
except StopIteration:
break
assert msgs == set(['msg %d' % i for i in range(1000)])
|
<commit_before><commit_msg>Add simple KafkaProducer -> KafkaConsumer integration test<commit_after>import pytest
from kafka import KafkaConsumer, KafkaProducer
from test.conftest import version
from test.testutil import random_string
@pytest.mark.skipif(not version(), reason="No KAFKA_VERSION set")
def test_end_to_end(kafka_broker):
connect_str = 'localhost:' + str(kafka_broker.port)
producer = KafkaProducer(bootstrap_servers=connect_str,
max_block_ms=10000,
value_serializer=str.encode)
consumer = KafkaConsumer(bootstrap_servers=connect_str,
consumer_timeout_ms=10000,
auto_offset_reset='earliest',
value_deserializer=bytes.decode)
topic = random_string(5)
for i in range(1000):
producer.send(topic, 'msg %d' % i)
producer.flush()
producer.close()
consumer.subscribe([topic])
msgs = set()
for i in range(1000):
try:
msgs.add(next(consumer).value)
except StopIteration:
break
assert msgs == set(['msg %d' % i for i in range(1000)])
|
|
30c927a6a35f3a6f8d4f11ff4e4a8f12431e80b5
|
shuup_tests/xtheme/test_admin.py
|
shuup_tests/xtheme/test_admin.py
|
from shuup.apps.provides import override_provides
from shuup.testing.utils import apply_request_middleware
from shuup.xtheme.admin_module.views import ThemeConfigView, ThemeConfigDetailView, ThemeGuideTemplateView
from shuup_tests.xtheme.utils import FauxTheme
def test_config_view(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
response = ThemeConfigView.as_view()(request)
assert response.status_code == 200
def test_config_view2(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
with override_provides("xtheme", ["shuup_tests.xtheme.utils:FauxTheme"]):
response = ThemeConfigDetailView.as_view()(request, theme_identifier=FauxTheme.identifier)
assert response.status_code == 200
|
Add some xtheme admin tests
|
Add some xtheme admin tests
|
Python
|
agpl-3.0
|
shoopio/shoop,shoopio/shoop,shoopio/shoop
|
Add some xtheme admin tests
|
from shuup.apps.provides import override_provides
from shuup.testing.utils import apply_request_middleware
from shuup.xtheme.admin_module.views import ThemeConfigView, ThemeConfigDetailView, ThemeGuideTemplateView
from shuup_tests.xtheme.utils import FauxTheme
def test_config_view(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
response = ThemeConfigView.as_view()(request)
assert response.status_code == 200
def test_config_view2(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
with override_provides("xtheme", ["shuup_tests.xtheme.utils:FauxTheme"]):
response = ThemeConfigDetailView.as_view()(request, theme_identifier=FauxTheme.identifier)
assert response.status_code == 200
|
<commit_before><commit_msg>Add some xtheme admin tests<commit_after>
|
from shuup.apps.provides import override_provides
from shuup.testing.utils import apply_request_middleware
from shuup.xtheme.admin_module.views import ThemeConfigView, ThemeConfigDetailView, ThemeGuideTemplateView
from shuup_tests.xtheme.utils import FauxTheme
def test_config_view(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
response = ThemeConfigView.as_view()(request)
assert response.status_code == 200
def test_config_view2(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
with override_provides("xtheme", ["shuup_tests.xtheme.utils:FauxTheme"]):
response = ThemeConfigDetailView.as_view()(request, theme_identifier=FauxTheme.identifier)
assert response.status_code == 200
|
Add some xtheme admin testsfrom shuup.apps.provides import override_provides
from shuup.testing.utils import apply_request_middleware
from shuup.xtheme.admin_module.views import ThemeConfigView, ThemeConfigDetailView, ThemeGuideTemplateView
from shuup_tests.xtheme.utils import FauxTheme
def test_config_view(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
response = ThemeConfigView.as_view()(request)
assert response.status_code == 200
def test_config_view2(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
with override_provides("xtheme", ["shuup_tests.xtheme.utils:FauxTheme"]):
response = ThemeConfigDetailView.as_view()(request, theme_identifier=FauxTheme.identifier)
assert response.status_code == 200
|
<commit_before><commit_msg>Add some xtheme admin tests<commit_after>from shuup.apps.provides import override_provides
from shuup.testing.utils import apply_request_middleware
from shuup.xtheme.admin_module.views import ThemeConfigView, ThemeConfigDetailView, ThemeGuideTemplateView
from shuup_tests.xtheme.utils import FauxTheme
def test_config_view(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
response = ThemeConfigView.as_view()(request)
assert response.status_code == 200
def test_config_view2(rf, admin_user):
request = apply_request_middleware(rf.get("/"), user=admin_user)
with override_provides("xtheme", ["shuup_tests.xtheme.utils:FauxTheme"]):
response = ThemeConfigDetailView.as_view()(request, theme_identifier=FauxTheme.identifier)
assert response.status_code == 200
|
|
d93abe8554db7deaf318939d42214e4cf1fc0807
|
tests/test_catalog.py
|
tests/test_catalog.py
|
"""Objects catalog unittests."""
import unittest2 as unittest
from objects.catalog import AbstractCatalog
from objects.providers import Object
from objects.providers import Value
from objects.errors import Error
class CatalogTests(unittest.TestCase):
"""Catalog test cases."""
class Catalog(AbstractCatalog):
"""Test catalog."""
obj = Object(object())
another_obj = Object(object())
def test_get_used(self):
"""Test retrieving used provider."""
catalog = self.Catalog(self.Catalog.obj)
self.assertIsInstance(catalog.obj(), object)
def test_get_unused(self):
"""Test retrieving unused provider."""
catalog = self.Catalog()
self.assertRaises(Error, getattr, catalog, 'obj')
def test_all_providers(self):
"""Test getting of all catalog providers."""
all_providers = self.Catalog.all_providers()
all_providers_dict = dict(all_providers)
self.assertIsInstance(all_providers, set)
self.assertTrue(len(all_providers) == 2)
self.assertIn('obj', all_providers_dict)
self.assertIn(self.Catalog.obj, all_providers_dict.values())
self.assertIn('another_obj', all_providers_dict)
self.assertIn(self.Catalog.another_obj, all_providers_dict.values())
def test_all_providers_by_type(self):
"""Test getting of all catalog providers of specific type."""
self.assertTrue(len(self.Catalog.all_providers(Object)) == 2)
self.assertTrue(len(self.Catalog.all_providers(Value)) == 0)
|
Add some tests for catalog
|
Add some tests for catalog
|
Python
|
bsd-3-clause
|
ets-labs/dependency_injector,ets-labs/python-dependency-injector,rmk135/objects,rmk135/dependency_injector
|
Add some tests for catalog
|
"""Objects catalog unittests."""
import unittest2 as unittest
from objects.catalog import AbstractCatalog
from objects.providers import Object
from objects.providers import Value
from objects.errors import Error
class CatalogTests(unittest.TestCase):
"""Catalog test cases."""
class Catalog(AbstractCatalog):
"""Test catalog."""
obj = Object(object())
another_obj = Object(object())
def test_get_used(self):
"""Test retrieving used provider."""
catalog = self.Catalog(self.Catalog.obj)
self.assertIsInstance(catalog.obj(), object)
def test_get_unused(self):
"""Test retrieving unused provider."""
catalog = self.Catalog()
self.assertRaises(Error, getattr, catalog, 'obj')
def test_all_providers(self):
"""Test getting of all catalog providers."""
all_providers = self.Catalog.all_providers()
all_providers_dict = dict(all_providers)
self.assertIsInstance(all_providers, set)
self.assertTrue(len(all_providers) == 2)
self.assertIn('obj', all_providers_dict)
self.assertIn(self.Catalog.obj, all_providers_dict.values())
self.assertIn('another_obj', all_providers_dict)
self.assertIn(self.Catalog.another_obj, all_providers_dict.values())
def test_all_providers_by_type(self):
"""Test getting of all catalog providers of specific type."""
self.assertTrue(len(self.Catalog.all_providers(Object)) == 2)
self.assertTrue(len(self.Catalog.all_providers(Value)) == 0)
|
<commit_before><commit_msg>Add some tests for catalog<commit_after>
|
"""Objects catalog unittests."""
import unittest2 as unittest
from objects.catalog import AbstractCatalog
from objects.providers import Object
from objects.providers import Value
from objects.errors import Error
class CatalogTests(unittest.TestCase):
"""Catalog test cases."""
class Catalog(AbstractCatalog):
"""Test catalog."""
obj = Object(object())
another_obj = Object(object())
def test_get_used(self):
"""Test retrieving used provider."""
catalog = self.Catalog(self.Catalog.obj)
self.assertIsInstance(catalog.obj(), object)
def test_get_unused(self):
"""Test retrieving unused provider."""
catalog = self.Catalog()
self.assertRaises(Error, getattr, catalog, 'obj')
def test_all_providers(self):
"""Test getting of all catalog providers."""
all_providers = self.Catalog.all_providers()
all_providers_dict = dict(all_providers)
self.assertIsInstance(all_providers, set)
self.assertTrue(len(all_providers) == 2)
self.assertIn('obj', all_providers_dict)
self.assertIn(self.Catalog.obj, all_providers_dict.values())
self.assertIn('another_obj', all_providers_dict)
self.assertIn(self.Catalog.another_obj, all_providers_dict.values())
def test_all_providers_by_type(self):
"""Test getting of all catalog providers of specific type."""
self.assertTrue(len(self.Catalog.all_providers(Object)) == 2)
self.assertTrue(len(self.Catalog.all_providers(Value)) == 0)
|
Add some tests for catalog
"""Objects catalog unittests."""
import unittest2 as unittest
from objects.catalog import AbstractCatalog
from objects.providers import Object
from objects.providers import Value
from objects.errors import Error
class CatalogTests(unittest.TestCase):
"""Catalog test cases."""
class Catalog(AbstractCatalog):
"""Test catalog."""
obj = Object(object())
another_obj = Object(object())
def test_get_used(self):
"""Test retrieving used provider."""
catalog = self.Catalog(self.Catalog.obj)
self.assertIsInstance(catalog.obj(), object)
def test_get_unused(self):
"""Test retrieving unused provider."""
catalog = self.Catalog()
self.assertRaises(Error, getattr, catalog, 'obj')
def test_all_providers(self):
"""Test getting of all catalog providers."""
all_providers = self.Catalog.all_providers()
all_providers_dict = dict(all_providers)
self.assertIsInstance(all_providers, set)
self.assertTrue(len(all_providers) == 2)
self.assertIn('obj', all_providers_dict)
self.assertIn(self.Catalog.obj, all_providers_dict.values())
self.assertIn('another_obj', all_providers_dict)
self.assertIn(self.Catalog.another_obj, all_providers_dict.values())
def test_all_providers_by_type(self):
"""Test getting of all catalog providers of specific type."""
self.assertTrue(len(self.Catalog.all_providers(Object)) == 2)
self.assertTrue(len(self.Catalog.all_providers(Value)) == 0)
|
<commit_before><commit_msg>Add some tests for catalog<commit_after>"""Objects catalog unittests."""
import unittest2 as unittest
from objects.catalog import AbstractCatalog
from objects.providers import Object
from objects.providers import Value
from objects.errors import Error
class CatalogTests(unittest.TestCase):
"""Catalog test cases."""
class Catalog(AbstractCatalog):
"""Test catalog."""
obj = Object(object())
another_obj = Object(object())
def test_get_used(self):
"""Test retrieving used provider."""
catalog = self.Catalog(self.Catalog.obj)
self.assertIsInstance(catalog.obj(), object)
def test_get_unused(self):
"""Test retrieving unused provider."""
catalog = self.Catalog()
self.assertRaises(Error, getattr, catalog, 'obj')
def test_all_providers(self):
"""Test getting of all catalog providers."""
all_providers = self.Catalog.all_providers()
all_providers_dict = dict(all_providers)
self.assertIsInstance(all_providers, set)
self.assertTrue(len(all_providers) == 2)
self.assertIn('obj', all_providers_dict)
self.assertIn(self.Catalog.obj, all_providers_dict.values())
self.assertIn('another_obj', all_providers_dict)
self.assertIn(self.Catalog.another_obj, all_providers_dict.values())
def test_all_providers_by_type(self):
"""Test getting of all catalog providers of specific type."""
self.assertTrue(len(self.Catalog.all_providers(Object)) == 2)
self.assertTrue(len(self.Catalog.all_providers(Value)) == 0)
|
|
63c20427f573d95bfc5326d0614455640123e6ed
|
modules/tools/extractor/extractor.py
|
modules/tools/extractor/extractor.py
|
#!/usr/bin/env python
###############################################################################
# Copyright 2017 The Apollo Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
import rospy
from std_msgs.msg import String
from modules.planning.proto.planning_pb2 import ADCTrajectory
from modules.routing.proto.routing_pb2 import RoutingResponse
class Extractor(object):
def __init__(self):
self.routing = rospy.Publisher(
'/apollo/routing_response', RoutingResponse, queue_size=1)
def callback_planning(self, data):
self.routing.publish(data.debug.planning_data.routing)
print "New Planning"
def main():
"""
Main function
"""
extract = Extractor()
rospy.init_node('extract_routing', anonymous=True)
planning_sub = rospy.Subscriber(
'/apollo/planning',
ADCTrajectory,
extract.callback_planning,
queue_size=1)
rospy.spin()
if __name__ == '__main__':
main()
|
Add tool to extract routing from planning debug
|
Add tool to extract routing from planning debug
|
Python
|
apache-2.0
|
startcode/apollo,startcode/apollo,fy2462/apollo,startcode/apollo,startcode/apollo,startcode/apollo,fy2462/apollo,fy2462/apollo,fy2462/apollo,startcode/apollo,fy2462/apollo,fy2462/apollo
|
Add tool to extract routing from planning debug
|
#!/usr/bin/env python
###############################################################################
# Copyright 2017 The Apollo Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
import rospy
from std_msgs.msg import String
from modules.planning.proto.planning_pb2 import ADCTrajectory
from modules.routing.proto.routing_pb2 import RoutingResponse
class Extractor(object):
def __init__(self):
self.routing = rospy.Publisher(
'/apollo/routing_response', RoutingResponse, queue_size=1)
def callback_planning(self, data):
self.routing.publish(data.debug.planning_data.routing)
print "New Planning"
def main():
"""
Main function
"""
extract = Extractor()
rospy.init_node('extract_routing', anonymous=True)
planning_sub = rospy.Subscriber(
'/apollo/planning',
ADCTrajectory,
extract.callback_planning,
queue_size=1)
rospy.spin()
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add tool to extract routing from planning debug<commit_after>
|
#!/usr/bin/env python
###############################################################################
# Copyright 2017 The Apollo Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
import rospy
from std_msgs.msg import String
from modules.planning.proto.planning_pb2 import ADCTrajectory
from modules.routing.proto.routing_pb2 import RoutingResponse
class Extractor(object):
def __init__(self):
self.routing = rospy.Publisher(
'/apollo/routing_response', RoutingResponse, queue_size=1)
def callback_planning(self, data):
self.routing.publish(data.debug.planning_data.routing)
print "New Planning"
def main():
"""
Main function
"""
extract = Extractor()
rospy.init_node('extract_routing', anonymous=True)
planning_sub = rospy.Subscriber(
'/apollo/planning',
ADCTrajectory,
extract.callback_planning,
queue_size=1)
rospy.spin()
if __name__ == '__main__':
main()
|
Add tool to extract routing from planning debug
#!/usr/bin/env python
###############################################################################
# Copyright 2017 The Apollo Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
import rospy
from std_msgs.msg import String
from modules.planning.proto.planning_pb2 import ADCTrajectory
from modules.routing.proto.routing_pb2 import RoutingResponse
class Extractor(object):
def __init__(self):
self.routing = rospy.Publisher(
'/apollo/routing_response', RoutingResponse, queue_size=1)
def callback_planning(self, data):
self.routing.publish(data.debug.planning_data.routing)
print "New Planning"
def main():
"""
Main function
"""
extract = Extractor()
rospy.init_node('extract_routing', anonymous=True)
planning_sub = rospy.Subscriber(
'/apollo/planning',
ADCTrajectory,
extract.callback_planning,
queue_size=1)
rospy.spin()
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add tool to extract routing from planning debug<commit_after>#!/usr/bin/env python
###############################################################################
# Copyright 2017 The Apollo Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################
import rospy
from std_msgs.msg import String
from modules.planning.proto.planning_pb2 import ADCTrajectory
from modules.routing.proto.routing_pb2 import RoutingResponse
class Extractor(object):
def __init__(self):
self.routing = rospy.Publisher(
'/apollo/routing_response', RoutingResponse, queue_size=1)
def callback_planning(self, data):
self.routing.publish(data.debug.planning_data.routing)
print "New Planning"
def main():
"""
Main function
"""
extract = Extractor()
rospy.init_node('extract_routing', anonymous=True)
planning_sub = rospy.Subscriber(
'/apollo/planning',
ADCTrajectory,
extract.callback_planning,
queue_size=1)
rospy.spin()
if __name__ == '__main__':
main()
|
|
9185f1e51c5884ac63839ba8c17ad31a44c88647
|
virtool/tests/cls.py
|
virtool/tests/cls.py
|
class AuthorizedTest:
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
class ProtectedTest(AuthorizedTest):
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
async def test_not_permitted(self, do_get):
resp = await do_get("/api/users", authorize=True)
assert resp.status == 403
assert await resp.json() == {
"message": "Not permitted"
}
|
Add test superclasses for authorization and permissions
|
Add test superclasses for authorization and permissions
|
Python
|
mit
|
virtool/virtool,virtool/virtool,igboyes/virtool,igboyes/virtool
|
Add test superclasses for authorization and permissions
|
class AuthorizedTest:
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
class ProtectedTest(AuthorizedTest):
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
async def test_not_permitted(self, do_get):
resp = await do_get("/api/users", authorize=True)
assert resp.status == 403
assert await resp.json() == {
"message": "Not permitted"
}
|
<commit_before><commit_msg>Add test superclasses for authorization and permissions<commit_after>
|
class AuthorizedTest:
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
class ProtectedTest(AuthorizedTest):
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
async def test_not_permitted(self, do_get):
resp = await do_get("/api/users", authorize=True)
assert resp.status == 403
assert await resp.json() == {
"message": "Not permitted"
}
|
Add test superclasses for authorization and permissions
class AuthorizedTest:
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
class ProtectedTest(AuthorizedTest):
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
async def test_not_permitted(self, do_get):
resp = await do_get("/api/users", authorize=True)
assert resp.status == 403
assert await resp.json() == {
"message": "Not permitted"
}
|
<commit_before><commit_msg>Add test superclasses for authorization and permissions<commit_after>class AuthorizedTest:
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
class ProtectedTest(AuthorizedTest):
async def test_not_authorized(self, do_get):
resp = await do_get("/api/users")
assert resp.status == 403
assert await resp.json() == {
"message": "Not authorized"
}
async def test_not_permitted(self, do_get):
resp = await do_get("/api/users", authorize=True)
assert resp.status == 403
assert await resp.json() == {
"message": "Not permitted"
}
|
|
f53628d7adc6a74fccbc28e2483e4360ae561438
|
korail_split_ticket_checker_test.py
|
korail_split_ticket_checker_test.py
|
# -*- coding: utf-8 -*-
import unittest
import korail_split_ticket_checker
import datetime
class TestKorailSplitTickerChecker(unittest.TestCase):
def setUp(self):
tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
self.date = tomorrow.strftime('%Y%m%d')
def test_get_train_routes_ktx(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = "KTX"
self.train_no = 101
self.stations = 8
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_sanchon(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"KTX-산천"
self.train_no = 407
self.stations = 10
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_saemaul(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"ITX-새마을"
self.train_no = 1081
self.stations = 17
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_mugungwha(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"무궁화호"
self.train_no = 1221
self.stations = 24
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
if __name__ == '__main__':
unittest.main()
|
Add a unit test for get_train_routes func.
|
Add a unit test for get_train_routes func.
|
Python
|
mit
|
kimtree/korail-split-ticket-checker
|
Add a unit test for get_train_routes func.
|
# -*- coding: utf-8 -*-
import unittest
import korail_split_ticket_checker
import datetime
class TestKorailSplitTickerChecker(unittest.TestCase):
def setUp(self):
tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
self.date = tomorrow.strftime('%Y%m%d')
def test_get_train_routes_ktx(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = "KTX"
self.train_no = 101
self.stations = 8
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_sanchon(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"KTX-산천"
self.train_no = 407
self.stations = 10
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_saemaul(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"ITX-새마을"
self.train_no = 1081
self.stations = 17
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_mugungwha(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"무궁화호"
self.train_no = 1221
self.stations = 24
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add a unit test for get_train_routes func.<commit_after>
|
# -*- coding: utf-8 -*-
import unittest
import korail_split_ticket_checker
import datetime
class TestKorailSplitTickerChecker(unittest.TestCase):
def setUp(self):
tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
self.date = tomorrow.strftime('%Y%m%d')
def test_get_train_routes_ktx(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = "KTX"
self.train_no = 101
self.stations = 8
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_sanchon(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"KTX-산천"
self.train_no = 407
self.stations = 10
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_saemaul(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"ITX-새마을"
self.train_no = 1081
self.stations = 17
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_mugungwha(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"무궁화호"
self.train_no = 1221
self.stations = 24
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
if __name__ == '__main__':
unittest.main()
|
Add a unit test for get_train_routes func.
# -*- coding: utf-8 -*-
import unittest
import korail_split_ticket_checker
import datetime
class TestKorailSplitTickerChecker(unittest.TestCase):
def setUp(self):
tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
self.date = tomorrow.strftime('%Y%m%d')
def test_get_train_routes_ktx(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = "KTX"
self.train_no = 101
self.stations = 8
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_sanchon(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"KTX-산천"
self.train_no = 407
self.stations = 10
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_saemaul(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"ITX-새마을"
self.train_no = 1081
self.stations = 17
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_mugungwha(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"무궁화호"
self.train_no = 1221
self.stations = 24
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add a unit test for get_train_routes func.<commit_after># -*- coding: utf-8 -*-
import unittest
import korail_split_ticket_checker
import datetime
class TestKorailSplitTickerChecker(unittest.TestCase):
def setUp(self):
tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
self.date = tomorrow.strftime('%Y%m%d')
def test_get_train_routes_ktx(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = "KTX"
self.train_no = 101
self.stations = 8
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_sanchon(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"KTX-산천"
self.train_no = 407
self.stations = 10
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_saemaul(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"ITX-새마을"
self.train_no = 1081
self.stations = 17
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
def test_get_train_routes_ktx_mugungwha(self):
        # Based on the timetable revised on June 9, 2014
self.train_type = u"무궁화호"
self.train_no = 1221
self.stations = 24
train_type, stations = korail_split_ticket_checker.get_train_routes(self.date, self.train_no)
self.assertEqual(self.train_type, train_type, "Failed to get train type")
self.assertEqual(self.stations, len(stations))
if __name__ == '__main__':
unittest.main()
|
|
a90c15e409451bbca366c7c3994f90e61e5f726b
|
tests/test_start.py
|
tests/test_start.py
|
import unittest
from coilmq.server.socket_server import ThreadedStompServer
from coilmq.start import server_from_config
class GetServerTestCase(unittest.TestCase):
def test_server_from_config_default(self):
self.assertIsInstance(server_from_config(), ThreadedStompServer)
|
Add test for the start module
|
Add test for the start module
|
Python
|
apache-2.0
|
hozn/coilmq
|
Add test for the start module
|
import unittest
from coilmq.server.socket_server import ThreadedStompServer
from coilmq.start import server_from_config
class GetServerTestCase(unittest.TestCase):
def test_server_from_config_default(self):
self.assertIsInstance(server_from_config(), ThreadedStompServer)
|
<commit_before><commit_msg>Add test for the start module<commit_after>
|
import unittest
from coilmq.server.socket_server import ThreadedStompServer
from coilmq.start import server_from_config
class GetServerTestCase(unittest.TestCase):
def test_server_from_config_default(self):
self.assertIsInstance(server_from_config(), ThreadedStompServer)
|
Add test for the start module
import unittest
from coilmq.server.socket_server import ThreadedStompServer
from coilmq.start import server_from_config
class GetServerTestCase(unittest.TestCase):
def test_server_from_config_default(self):
self.assertIsInstance(server_from_config(), ThreadedStompServer)
|
<commit_before><commit_msg>Add test for the start module<commit_after>import unittest
from coilmq.server.socket_server import ThreadedStompServer
from coilmq.start import server_from_config
class GetServerTestCase(unittest.TestCase):
def test_server_from_config_default(self):
self.assertIsInstance(server_from_config(), ThreadedStompServer)
|
|
2f37d4f81513d6d8a783883ffa18bffc5eb3e559
|
examples/various_nameservers.py
|
examples/various_nameservers.py
|
"""
This example is relatively complex.
We will be creating a system in which agents will connect to different
name servers. Each name server will therefore represent a 'group' of agents.
That way, we can easily shut down all agents belonging to a group, without
interfering with the others.
"""
import time
from osbrain import run_agent
from osbrain import run_nameserver
def annoy(agent, message):
agent.send('annoy', message)
def log_message(agent, message):
agent.log_info('Received: %s' % message)
if __name__ == '__main__':
ns = run_nameserver()
ns_annoy = run_nameserver()
# Create normal PUB/SUB communication
listener = run_agent('listener', nsaddr=ns.addr())
addr = listener.bind('SUB', alias='sub', handler=log_message)
speaker = run_agent('speaker', nsaddr=ns.addr())
speaker.connect(addr, alias='annoy')
speaker.each(0.2, annoy, 'Blah blah...')
# Create annoying agents registered in another name server
annoyer_apple = run_agent('annoyer0', nsaddr=ns_annoy.addr())
annoyer_apple.connect(addr, alias='annoy')
annoyer_apple.each(0.2, annoy, 'apple')
annoyer_grape = run_agent('annoyer1', nsaddr=ns_annoy.addr())
annoyer_grape.connect(addr, alias='annoy')
annoyer_grape.each(0.2, annoy, 'grape')
time.sleep(1)
# Shutting down the annoying agents through their name server
ns_annoy.shutdown()
time.sleep(1)
ns.shutdown()
|
Add example of various name servers from one script
|
Add example of various name servers from one script
|
Python
|
apache-2.0
|
opensistemas-hub/osbrain
|
Add example of various name servers from one script
|
"""
This example is relatively complex.
We will be creating a system in which agents will connect to different
name servers. Each name server will therefore represent a 'group' of agents.
That way, we can easily shut down all agents belonging to a group, without
interfering with the others.
"""
import time
from osbrain import run_agent
from osbrain import run_nameserver
def annoy(agent, message):
agent.send('annoy', message)
def log_message(agent, message):
agent.log_info('Received: %s' % message)
if __name__ == '__main__':
ns = run_nameserver()
ns_annoy = run_nameserver()
# Create normal PUB/SUB communication
listener = run_agent('listener', nsaddr=ns.addr())
addr = listener.bind('SUB', alias='sub', handler=log_message)
speaker = run_agent('speaker', nsaddr=ns.addr())
speaker.connect(addr, alias='annoy')
speaker.each(0.2, annoy, 'Blah blah...')
# Create annoying agents registered in another name server
annoyer_apple = run_agent('annoyer0', nsaddr=ns_annoy.addr())
annoyer_apple.connect(addr, alias='annoy')
annoyer_apple.each(0.2, annoy, 'apple')
annoyer_grape = run_agent('annoyer1', nsaddr=ns_annoy.addr())
annoyer_grape.connect(addr, alias='annoy')
annoyer_grape.each(0.2, annoy, 'grape')
time.sleep(1)
# Shutting down the annoying agents through their name server
ns_annoy.shutdown()
time.sleep(1)
ns.shutdown()
|
<commit_before><commit_msg>Add example of various name servers from one script<commit_after>
|
"""
This example is relatively complex.
We will be creating a system in which agents will connect to different
name servers. Each name server will therefore represent a 'group' of agents.
That way, we can easily shut down all agents belonging to a group, without
interfering with the others.
"""
import time
from osbrain import run_agent
from osbrain import run_nameserver
def annoy(agent, message):
agent.send('annoy', message)
def log_message(agent, message):
agent.log_info('Received: %s' % message)
if __name__ == '__main__':
ns = run_nameserver()
ns_annoy = run_nameserver()
# Create normal PUB/SUB communication
listener = run_agent('listener', nsaddr=ns.addr())
addr = listener.bind('SUB', alias='sub', handler=log_message)
speaker = run_agent('speaker', nsaddr=ns.addr())
speaker.connect(addr, alias='annoy')
speaker.each(0.2, annoy, 'Blah blah...')
# Create annoying agents registered in another name server
annoyer_apple = run_agent('annoyer0', nsaddr=ns_annoy.addr())
annoyer_apple.connect(addr, alias='annoy')
annoyer_apple.each(0.2, annoy, 'apple')
annoyer_grape = run_agent('annoyer1', nsaddr=ns_annoy.addr())
annoyer_grape.connect(addr, alias='annoy')
annoyer_grape.each(0.2, annoy, 'grape')
time.sleep(1)
# Shutting down the annoying agents through their name server
ns_annoy.shutdown()
time.sleep(1)
ns.shutdown()
|
Add example of various name servers from one script
"""
This example is relatively complex.
We will be creating a system in which agents will connect to different
name servers. Each name server will therefore represent a 'group' of agents.
That way, we can easily shut down all agents belonging to a group, without
interfering with the others.
"""
import time
from osbrain import run_agent
from osbrain import run_nameserver
def annoy(agent, message):
agent.send('annoy', message)
def log_message(agent, message):
agent.log_info('Received: %s' % message)
if __name__ == '__main__':
ns = run_nameserver()
ns_annoy = run_nameserver()
# Create normal PUB/SUB communication
listener = run_agent('listener', nsaddr=ns.addr())
addr = listener.bind('SUB', alias='sub', handler=log_message)
speaker = run_agent('speaker', nsaddr=ns.addr())
speaker.connect(addr, alias='annoy')
speaker.each(0.2, annoy, 'Blah blah...')
# Create annoying agents registered in another name server
annoyer_apple = run_agent('annoyer0', nsaddr=ns_annoy.addr())
annoyer_apple.connect(addr, alias='annoy')
annoyer_apple.each(0.2, annoy, 'apple')
annoyer_grape = run_agent('annoyer1', nsaddr=ns_annoy.addr())
annoyer_grape.connect(addr, alias='annoy')
annoyer_grape.each(0.2, annoy, 'grape')
time.sleep(1)
# Shutting down the annoying agents through their name server
ns_annoy.shutdown()
time.sleep(1)
ns.shutdown()
|
<commit_before><commit_msg>Add example of various name servers from one script<commit_after>"""
This example is relatively complex.
We will be creating a system in which agents will connect to different
name servers. Each name server will therefore represent a 'group' of agents.
That way, we can easily shut down all agents belonging to a group, without
interfering with the others.
"""
import time
from osbrain import run_agent
from osbrain import run_nameserver
def annoy(agent, message):
agent.send('annoy', message)
def log_message(agent, message):
agent.log_info('Received: %s' % message)
if __name__ == '__main__':
ns = run_nameserver()
ns_annoy = run_nameserver()
# Create normal PUB/SUB communication
listener = run_agent('listener', nsaddr=ns.addr())
addr = listener.bind('SUB', alias='sub', handler=log_message)
speaker = run_agent('speaker', nsaddr=ns.addr())
speaker.connect(addr, alias='annoy')
speaker.each(0.2, annoy, 'Blah blah...')
# Create annoying agents registered in another name server
annoyer_apple = run_agent('annoyer0', nsaddr=ns_annoy.addr())
annoyer_apple.connect(addr, alias='annoy')
annoyer_apple.each(0.2, annoy, 'apple')
annoyer_grape = run_agent('annoyer1', nsaddr=ns_annoy.addr())
annoyer_grape.connect(addr, alias='annoy')
annoyer_grape.each(0.2, annoy, 'grape')
time.sleep(1)
# Shutting down the annoying agents through their name server
ns_annoy.shutdown()
time.sleep(1)
ns.shutdown()
|
|
9dba843bdeede54eab30e7f3b537c75965748110
|
functional/tests/network/v2/test_floating_ip.py
|
functional/tests/network/v2/test_floating_ip.py
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from functional.common import test
class FloatingIpTests(test.TestCase):
"""Functional tests for floating ip. """
SUBNET_NAME = uuid.uuid4().hex
NETWORK_NAME = uuid.uuid4().hex
ID = None
HEADERS = ['ID']
FIELDS = ['id']
@classmethod
def setUpClass(cls):
# Create a network for the floating ip.
cls.openstack('network create --external ' + cls.NETWORK_NAME)
# Create a subnet for the network.
cls.openstack(
'subnet create --network ' + cls.NETWORK_NAME +
' --subnet-range 10.10.10.0/24 ' +
cls.SUBNET_NAME
)
opts = cls.get_show_opts(cls.FIELDS)
raw_output = cls.openstack(
'ip floating create ' + cls.NETWORK_NAME + opts)
cls.ID = raw_output.strip('\n')
@classmethod
def tearDownClass(cls):
raw_output = cls.openstack('ip floating delete ' + cls.ID)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('subnet delete ' + cls.SUBNET_NAME)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('network delete ' + cls.NETWORK_NAME)
cls.assertOutput('', raw_output)
def test_floating_ip_list(self):
opts = self.get_list_opts(self.HEADERS)
raw_output = self.openstack('ip floating list' + opts)
self.assertIn(self.ID, raw_output)
def test_floating_ip_show(self):
opts = self.get_show_opts(self.FIELDS)
raw_output = self.openstack('ip floating show ' + self.ID + opts)
self.assertEqual(self.ID + "\n", raw_output)
|
Add functional tests for commands of floating ip
|
Add functional tests for commands of floating ip
This patch adds functional tests for commands of floating ip
Change-Id: I7f29578d0e14884f21183bfb82228d2fe7b7a029
|
Python
|
apache-2.0
|
openstack/python-openstackclient,dtroyer/python-openstackclient,dtroyer/python-openstackclient,redhat-openstack/python-openstackclient,openstack/python-openstackclient,redhat-openstack/python-openstackclient
|
Add functional tests for commands of floating ip
This patch adds functional tests for commands of floating ip
Change-Id: I7f29578d0e14884f21183bfb82228d2fe7b7a029
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from functional.common import test
class FloatingIpTests(test.TestCase):
"""Functional tests for floating ip. """
SUBNET_NAME = uuid.uuid4().hex
NETWORK_NAME = uuid.uuid4().hex
ID = None
HEADERS = ['ID']
FIELDS = ['id']
@classmethod
def setUpClass(cls):
# Create a network for the floating ip.
cls.openstack('network create --external ' + cls.NETWORK_NAME)
# Create a subnet for the network.
cls.openstack(
'subnet create --network ' + cls.NETWORK_NAME +
' --subnet-range 10.10.10.0/24 ' +
cls.SUBNET_NAME
)
opts = cls.get_show_opts(cls.FIELDS)
raw_output = cls.openstack(
'ip floating create ' + cls.NETWORK_NAME + opts)
cls.ID = raw_output.strip('\n')
@classmethod
def tearDownClass(cls):
raw_output = cls.openstack('ip floating delete ' + cls.ID)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('subnet delete ' + cls.SUBNET_NAME)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('network delete ' + cls.NETWORK_NAME)
cls.assertOutput('', raw_output)
def test_floating_ip_list(self):
opts = self.get_list_opts(self.HEADERS)
raw_output = self.openstack('ip floating list' + opts)
self.assertIn(self.ID, raw_output)
def test_floating_ip_show(self):
opts = self.get_show_opts(self.FIELDS)
raw_output = self.openstack('ip floating show ' + self.ID + opts)
self.assertEqual(self.ID + "\n", raw_output)
|
<commit_before><commit_msg>Add functional tests for commands of floating ip
This patch adds functional tests for commands of floating ip
Change-Id: I7f29578d0e14884f21183bfb82228d2fe7b7a029<commit_after>
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from functional.common import test
class FloatingIpTests(test.TestCase):
"""Functional tests for floating ip. """
SUBNET_NAME = uuid.uuid4().hex
NETWORK_NAME = uuid.uuid4().hex
ID = None
HEADERS = ['ID']
FIELDS = ['id']
@classmethod
def setUpClass(cls):
# Create a network for the floating ip.
cls.openstack('network create --external ' + cls.NETWORK_NAME)
# Create a subnet for the network.
cls.openstack(
'subnet create --network ' + cls.NETWORK_NAME +
' --subnet-range 10.10.10.0/24 ' +
cls.SUBNET_NAME
)
opts = cls.get_show_opts(cls.FIELDS)
raw_output = cls.openstack(
'ip floating create ' + cls.NETWORK_NAME + opts)
cls.ID = raw_output.strip('\n')
@classmethod
def tearDownClass(cls):
raw_output = cls.openstack('ip floating delete ' + cls.ID)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('subnet delete ' + cls.SUBNET_NAME)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('network delete ' + cls.NETWORK_NAME)
cls.assertOutput('', raw_output)
def test_floating_ip_list(self):
opts = self.get_list_opts(self.HEADERS)
raw_output = self.openstack('ip floating list' + opts)
self.assertIn(self.ID, raw_output)
def test_floating_ip_show(self):
opts = self.get_show_opts(self.FIELDS)
raw_output = self.openstack('ip floating show ' + self.ID + opts)
self.assertEqual(self.ID + "\n", raw_output)
|
Add functional tests for commands of floating ip
This patch adds functional tests for commands of floating ip
Change-Id: I7f29578d0e14884f21183bfb82228d2fe7b7a029
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from functional.common import test
class FloatingIpTests(test.TestCase):
"""Functional tests for floating ip. """
SUBNET_NAME = uuid.uuid4().hex
NETWORK_NAME = uuid.uuid4().hex
ID = None
HEADERS = ['ID']
FIELDS = ['id']
@classmethod
def setUpClass(cls):
# Create a network for the floating ip.
cls.openstack('network create --external ' + cls.NETWORK_NAME)
# Create a subnet for the network.
cls.openstack(
'subnet create --network ' + cls.NETWORK_NAME +
' --subnet-range 10.10.10.0/24 ' +
cls.SUBNET_NAME
)
opts = cls.get_show_opts(cls.FIELDS)
raw_output = cls.openstack(
'ip floating create ' + cls.NETWORK_NAME + opts)
cls.ID = raw_output.strip('\n')
@classmethod
def tearDownClass(cls):
raw_output = cls.openstack('ip floating delete ' + cls.ID)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('subnet delete ' + cls.SUBNET_NAME)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('network delete ' + cls.NETWORK_NAME)
cls.assertOutput('', raw_output)
def test_floating_ip_list(self):
opts = self.get_list_opts(self.HEADERS)
raw_output = self.openstack('ip floating list' + opts)
self.assertIn(self.ID, raw_output)
def test_floating_ip_show(self):
opts = self.get_show_opts(self.FIELDS)
raw_output = self.openstack('ip floating show ' + self.ID + opts)
self.assertEqual(self.ID + "\n", raw_output)
|
<commit_before><commit_msg>Add functional tests for commands of floating ip
This patch adds functional tests for commands of floating ip
Change-Id: I7f29578d0e14884f21183bfb82228d2fe7b7a029<commit_after># Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from functional.common import test
class FloatingIpTests(test.TestCase):
"""Functional tests for floating ip. """
SUBNET_NAME = uuid.uuid4().hex
NETWORK_NAME = uuid.uuid4().hex
ID = None
HEADERS = ['ID']
FIELDS = ['id']
@classmethod
def setUpClass(cls):
# Create a network for the floating ip.
cls.openstack('network create --external ' + cls.NETWORK_NAME)
# Create a subnet for the network.
cls.openstack(
'subnet create --network ' + cls.NETWORK_NAME +
' --subnet-range 10.10.10.0/24 ' +
cls.SUBNET_NAME
)
opts = cls.get_show_opts(cls.FIELDS)
raw_output = cls.openstack(
'ip floating create ' + cls.NETWORK_NAME + opts)
cls.ID = raw_output.strip('\n')
@classmethod
def tearDownClass(cls):
raw_output = cls.openstack('ip floating delete ' + cls.ID)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('subnet delete ' + cls.SUBNET_NAME)
cls.assertOutput('', raw_output)
raw_output = cls.openstack('network delete ' + cls.NETWORK_NAME)
cls.assertOutput('', raw_output)
def test_floating_ip_list(self):
opts = self.get_list_opts(self.HEADERS)
raw_output = self.openstack('ip floating list' + opts)
self.assertIn(self.ID, raw_output)
def test_floating_ip_show(self):
opts = self.get_show_opts(self.FIELDS)
raw_output = self.openstack('ip floating show ' + self.ID + opts)
self.assertEqual(self.ID + "\n", raw_output)
|
|
0e54e8ac75acbd289c2fde2d7fae486cc31ab3ab
|
tests/test_block_aio.py
|
tests/test_block_aio.py
|
# -*- coding: utf-8 -*-
import aiounittest
from graphenecommon.utils import parse_time
from .fixtures_aio import fixture_data, Block, BlockHeader
class Testcases(aiounittest.AsyncTestCase):
def setUp(self):
fixture_data()
async def test_block(self):
block = await Block(1)
self.assertEqual(block["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(block.time(), parse_time("2015-10-13T14:12:24"))
async def test_blockheader(self):
header = await BlockHeader(1)
self.assertEqual(header["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(header.time(), parse_time("2015-10-13T14:12:24"))
|
Add test for async Block
|
Add test for async Block
|
Python
|
mit
|
xeroc/python-graphenelib
|
Add test for async Block
|
# -*- coding: utf-8 -*-
import aiounittest
from graphenecommon.utils import parse_time
from .fixtures_aio import fixture_data, Block, BlockHeader
class Testcases(aiounittest.AsyncTestCase):
def setUp(self):
fixture_data()
async def test_block(self):
block = await Block(1)
self.assertEqual(block["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(block.time(), parse_time("2015-10-13T14:12:24"))
async def test_blockheader(self):
header = await BlockHeader(1)
self.assertEqual(header["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(header.time(), parse_time("2015-10-13T14:12:24"))
|
<commit_before><commit_msg>Add test for async Block<commit_after>
|
# -*- coding: utf-8 -*-
import aiounittest
from graphenecommon.utils import parse_time
from .fixtures_aio import fixture_data, Block, BlockHeader
class Testcases(aiounittest.AsyncTestCase):
def setUp(self):
fixture_data()
async def test_block(self):
block = await Block(1)
self.assertEqual(block["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(block.time(), parse_time("2015-10-13T14:12:24"))
async def test_blockheader(self):
header = await BlockHeader(1)
self.assertEqual(header["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(header.time(), parse_time("2015-10-13T14:12:24"))
|
Add test for async Block
# -*- coding: utf-8 -*-
import aiounittest
from graphenecommon.utils import parse_time
from .fixtures_aio import fixture_data, Block, BlockHeader
class Testcases(aiounittest.AsyncTestCase):
def setUp(self):
fixture_data()
async def test_block(self):
block = await Block(1)
self.assertEqual(block["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(block.time(), parse_time("2015-10-13T14:12:24"))
async def test_blockheader(self):
header = await BlockHeader(1)
self.assertEqual(header["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(header.time(), parse_time("2015-10-13T14:12:24"))
|
<commit_before><commit_msg>Add test for async Block<commit_after># -*- coding: utf-8 -*-
import aiounittest
from graphenecommon.utils import parse_time
from .fixtures_aio import fixture_data, Block, BlockHeader
class Testcases(aiounittest.AsyncTestCase):
def setUp(self):
fixture_data()
async def test_block(self):
block = await Block(1)
self.assertEqual(block["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(block.time(), parse_time("2015-10-13T14:12:24"))
async def test_blockheader(self):
header = await BlockHeader(1)
self.assertEqual(header["previous"], "0000000000000000000000000000000000000000")
self.assertEqual(header.time(), parse_time("2015-10-13T14:12:24"))
|
|
f6b851e14d108ae016278351cec71e76bf96ba5e
|
h2o-py/tests/testdir_algos/gbm/pyunit_gbm_quantiles_no_num.py
|
h2o-py/tests/testdir_algos/gbm/pyunit_gbm_quantiles_no_num.py
|
import sys, os
sys.path.insert(1, os.path.join("..","..",".."))
import h2o
from tests import pyunit_utils
from h2o.estimators.gbm import H2OGradientBoostingEstimator
def gbm_quantiles_global_with_only_categorical_colums():
prostate_train = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate_train.csv"))
prostate_train = prostate_train.drop("AGE")
for col_name in prostate_train.names:
prostate_train[col_name] = prostate_train[col_name].ascharacter().asfactor()
gbm_h2o = H2OGradientBoostingEstimator(histogram_type="quantiles_global")
gbm_h2o.train(y="CAPSULE", training_frame=prostate_train)
if __name__ == "__main__":
pyunit_utils.standalone_test(gbm_quantiles_global_with_only_categorical_colums)
else:
gbm_quantiles_global_with_only_categorical_colums()
|
Add test showing QuantilesGlobal works with categorical-only dataset
|
PUBDEV-8722: Add test showing QuantilesGlobal works with categorical-only dataset
|
Python
|
apache-2.0
|
h2oai/h2o-3,h2oai/h2o-3,h2oai/h2o-3,h2oai/h2o-3,h2oai/h2o-3,h2oai/h2o-3,h2oai/h2o-3,h2oai/h2o-3
|
PUBDEV-8722: Add test showing QuantilesGlobal works with categorical-only dataset
|
import sys, os
sys.path.insert(1, os.path.join("..","..",".."))
import h2o
from tests import pyunit_utils
from h2o.estimators.gbm import H2OGradientBoostingEstimator
def gbm_quantiles_global_with_only_categorical_colums():
prostate_train = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate_train.csv"))
prostate_train = prostate_train.drop("AGE")
for col_name in prostate_train.names:
prostate_train[col_name] = prostate_train[col_name].ascharacter().asfactor()
gbm_h2o = H2OGradientBoostingEstimator(histogram_type="quantiles_global")
gbm_h2o.train(y="CAPSULE", training_frame=prostate_train)
if __name__ == "__main__":
pyunit_utils.standalone_test(gbm_quantiles_global_with_only_categorical_colums)
else:
gbm_quantiles_global_with_only_categorical_colums()
|
<commit_before><commit_msg>PUBDEV-8722: Add test showing QuantilesGlobal works with categorical-only dataset<commit_after>
|
import sys, os
sys.path.insert(1, os.path.join("..","..",".."))
import h2o
from tests import pyunit_utils
from h2o.estimators.gbm import H2OGradientBoostingEstimator
def gbm_quantiles_global_with_only_categorical_colums():
prostate_train = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate_train.csv"))
prostate_train = prostate_train.drop("AGE")
for col_name in prostate_train.names:
prostate_train[col_name] = prostate_train[col_name].ascharacter().asfactor()
gbm_h2o = H2OGradientBoostingEstimator(histogram_type="quantiles_global")
gbm_h2o.train(y="CAPSULE", training_frame=prostate_train)
if __name__ == "__main__":
pyunit_utils.standalone_test(gbm_quantiles_global_with_only_categorical_colums)
else:
gbm_quantiles_global_with_only_categorical_colums()
|
PUBDEV-8722: Add test showing QuantilesGlobal works with categorical-only dataset
import sys, os
sys.path.insert(1, os.path.join("..","..",".."))
import h2o
from tests import pyunit_utils
from h2o.estimators.gbm import H2OGradientBoostingEstimator
def gbm_quantiles_global_with_only_categorical_colums():
prostate_train = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate_train.csv"))
prostate_train = prostate_train.drop("AGE")
for col_name in prostate_train.names:
prostate_train[col_name] = prostate_train[col_name].ascharacter().asfactor()
gbm_h2o = H2OGradientBoostingEstimator(histogram_type="quantiles_global")
gbm_h2o.train(y="CAPSULE", training_frame=prostate_train)
if __name__ == "__main__":
pyunit_utils.standalone_test(gbm_quantiles_global_with_only_categorical_colums)
else:
gbm_quantiles_global_with_only_categorical_colums()
|
<commit_before><commit_msg>PUBDEV-8722: Add test showing QuantilesGlobal works with categorical-only dataset<commit_after>import sys, os
sys.path.insert(1, os.path.join("..","..",".."))
import h2o
from tests import pyunit_utils
from h2o.estimators.gbm import H2OGradientBoostingEstimator
def gbm_quantiles_global_with_only_categorical_colums():
prostate_train = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate_train.csv"))
prostate_train = prostate_train.drop("AGE")
for col_name in prostate_train.names:
prostate_train[col_name] = prostate_train[col_name].ascharacter().asfactor()
gbm_h2o = H2OGradientBoostingEstimator(histogram_type="quantiles_global")
gbm_h2o.train(y="CAPSULE", training_frame=prostate_train)
if __name__ == "__main__":
pyunit_utils.standalone_test(gbm_quantiles_global_with_only_categorical_colums)
else:
gbm_quantiles_global_with_only_categorical_colums()
|
|
e6dea0455e9d0d9158843b40045f3753dbdb17c2
|
Orange/widgets/evaluate/tests/test_owrocanalysis.py
|
Orange/widgets/evaluate/tests/test_owrocanalysis.py
|
import unittest
import numpy
import Orange.data
import Orange.evaluation
import Orange.classification
from Orange.widgets.evaluate import owrocanalysis
class TestROC(unittest.TestCase):
def test_ROCData_from_results(self):
data = Orange.data.Table("iris")
learners = [
Orange.classification.MajorityLearner(),
Orange.classification.LogisticRegressionLearner(),
Orange.classification.TreeLearner()
]
res = Orange.evaluation.CrossValidation(data, learners, k=10)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 10)
self.assertTrue(all(c.is_valid for c in rocdata.folds))
self.assertTrue(rocdata.avg_vertical.is_valid)
self.assertTrue(rocdata.avg_threshold.is_valid)
data = data[numpy.random.choice(len(data), size=20)]
res = Orange.evaluation.LeaveOneOut(data, learners)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
# equivalent test to the LeaveOneOut but from a slightly different
# constructed Orange.evaluation.Results
res = Orange.evaluation.CrossValidation(data, learners, k=20)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
|
Add basic tests for roc analysis utilities
|
owrocanalysis: Add basic tests for roc analysis utilities
|
Python
|
bsd-2-clause
|
cheral/orange3,cheral/orange3,cheral/orange3,cheral/orange3,cheral/orange3,cheral/orange3
|
owrocanalysis: Add basic tests for roc analysis utilities
|
import unittest
import numpy
import Orange.data
import Orange.evaluation
import Orange.classification
from Orange.widgets.evaluate import owrocanalysis
class TestROC(unittest.TestCase):
def test_ROCData_from_results(self):
data = Orange.data.Table("iris")
learners = [
Orange.classification.MajorityLearner(),
Orange.classification.LogisticRegressionLearner(),
Orange.classification.TreeLearner()
]
res = Orange.evaluation.CrossValidation(data, learners, k=10)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 10)
self.assertTrue(all(c.is_valid for c in rocdata.folds))
self.assertTrue(rocdata.avg_vertical.is_valid)
self.assertTrue(rocdata.avg_threshold.is_valid)
data = data[numpy.random.choice(len(data), size=20)]
res = Orange.evaluation.LeaveOneOut(data, learners)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
# equivalent test to the LeaveOneOut but from a slightly different
# constructed Orange.evaluation.Results
res = Orange.evaluation.CrossValidation(data, learners, k=20)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
|
<commit_before><commit_msg>owrocanalysis: Add basic tests for roc analysis utilities<commit_after>
|
import unittest
import numpy
import Orange.data
import Orange.evaluation
import Orange.classification
from Orange.widgets.evaluate import owrocanalysis
class TestROC(unittest.TestCase):
def test_ROCData_from_results(self):
data = Orange.data.Table("iris")
learners = [
Orange.classification.MajorityLearner(),
Orange.classification.LogisticRegressionLearner(),
Orange.classification.TreeLearner()
]
res = Orange.evaluation.CrossValidation(data, learners, k=10)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 10)
self.assertTrue(all(c.is_valid for c in rocdata.folds))
self.assertTrue(rocdata.avg_vertical.is_valid)
self.assertTrue(rocdata.avg_threshold.is_valid)
data = data[numpy.random.choice(len(data), size=20)]
res = Orange.evaluation.LeaveOneOut(data, learners)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
# equivalent test to the LeaveOneOut but from a slightly different
# constructed Orange.evaluation.Results
res = Orange.evaluation.CrossValidation(data, learners, k=20)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
|
owrocanalysis: Add basic tests for roc analysis utilities
import unittest
import numpy
import Orange.data
import Orange.evaluation
import Orange.classification
from Orange.widgets.evaluate import owrocanalysis
class TestROC(unittest.TestCase):
def test_ROCData_from_results(self):
data = Orange.data.Table("iris")
learners = [
Orange.classification.MajorityLearner(),
Orange.classification.LogisticRegressionLearner(),
Orange.classification.TreeLearner()
]
res = Orange.evaluation.CrossValidation(data, learners, k=10)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 10)
self.assertTrue(all(c.is_valid for c in rocdata.folds))
self.assertTrue(rocdata.avg_vertical.is_valid)
self.assertTrue(rocdata.avg_threshold.is_valid)
data = data[numpy.random.choice(len(data), size=20)]
res = Orange.evaluation.LeaveOneOut(data, learners)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
# equivalent test to the LeaveOneOut but from a slightly different
# constructed Orange.evaluation.Results
res = Orange.evaluation.CrossValidation(data, learners, k=20)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
|
<commit_before><commit_msg>owrocanalysis: Add basic tests for roc analysis utilities<commit_after>import unittest
import numpy
import Orange.data
import Orange.evaluation
import Orange.classification
from Orange.widgets.evaluate import owrocanalysis
class TestROC(unittest.TestCase):
def test_ROCData_from_results(self):
data = Orange.data.Table("iris")
learners = [
Orange.classification.MajorityLearner(),
Orange.classification.LogisticRegressionLearner(),
Orange.classification.TreeLearner()
]
res = Orange.evaluation.CrossValidation(data, learners, k=10)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 10)
self.assertTrue(all(c.is_valid for c in rocdata.folds))
self.assertTrue(rocdata.avg_vertical.is_valid)
self.assertTrue(rocdata.avg_threshold.is_valid)
data = data[numpy.random.choice(len(data), size=20)]
res = Orange.evaluation.LeaveOneOut(data, learners)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
# equivalent test to the LeaveOneOut but from a slightly different
# constructed Orange.evaluation.Results
res = Orange.evaluation.CrossValidation(data, learners, k=20)
for i, _ in enumerate(learners):
for c in range(len(data.domain.class_var.values)):
rocdata = owrocanalysis.ROCData_from_results(res, i, target=c)
self.assertTrue(rocdata.merged.is_valid)
self.assertEqual(len(rocdata.folds), 20)
# all individual fold curves and averaged curve data
# should be invalid
self.assertTrue(all(not c.is_valid for c in rocdata.folds))
self.assertFalse(rocdata.avg_vertical.is_valid)
self.assertFalse(rocdata.avg_threshold.is_valid)
|
|
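A note on why the leave-one-out expectations in the test above hold: a fold containing a single instance carries only one of the two classes, so a per-fold ROC curve (and any vertical or threshold average built from such curves) is degenerate, while pooling all folds restores both classes and keeps the merged curve valid. A hedged illustration with plain numpy, independent of Orange internals:
# hedged illustration of why single-instance folds cannot yield a valid ROC curve
import numpy as np
single_fold_truth = np.array([1])          # one instance -> one class, FPR undefined
merged_truth = np.array([0, 1, 1, 0])      # pooled folds contain both classes again
print(len(np.unique(single_fold_truth)) < 2)   # True: per-fold curve is degenerate
print(len(np.unique(merged_truth)) == 2)       # True: merged curve is computable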
d5ef19d5e024fb68d4781fb78ec6136e28904ae5
|
tools/valouev2kmers.py
|
tools/valouev2kmers.py
|
# Takes a file in the format used by Valouev and extracts k-mers as new maps
import sys
import numpy as np
k = 12 # window size
for lno,line in enumerate(open(sys.argv[1])):
if lno % 3 == 0:
label = line.strip()
if lno % 3 == 1:
parts = line.strip().split()
enz, enzabr = parts[:2]
frags = parts[2:]
windows = len(frags) - k
if windows >= 0:
for window in range(windows+1):
print(label + "-" + str(window))
print("\t".join(parts[:2] + frags[window:window+k]))
print()
|
Add script for doing OPTIMA-overlap queries
|
Add script for doing OPTIMA-overlap queries
|
Python
|
mit
|
mmuggli/doppelganger,mmuggli/KOHDISTA,mmuggli/KOHDISTA,mmuggli/KOHDISTA,mmuggli/doppelganger,mmuggli/KOHDISTA,mmuggli/doppelganger,mmuggli/doppelganger
|
Add script for doing OPTIMA-overlap queries
|
# Takes a file in the format used by Valouev and extracts k-mers as new maps
import sys
import numpy as np
k = 12 # window size
for lno,line in enumerate(open(sys.argv[1])):
if lno % 3 == 0:
label = line.strip()
if lno % 3 == 1:
parts = line.strip().split()
enz, enzabr = parts[:2]
frags = parts[2:]
windows = len(frags) - k
if windows >= 0:
for window in range(windows+1):
print(label + "-" + str(window))
print("\t".join(parts[:2] + frags[window:window+k]))
print()
|
<commit_before><commit_msg>Add script for doing OPTIMA-overlap queries<commit_after>
|
# Takes a file in the format used by Valouev and extracts k-mers as new maps
import sys
import numpy as np
k = 12 # window size
for lno,line in enumerate(open(sys.argv[1])):
if lno % 3 == 0:
label = line.strip()
if lno % 3 == 1:
parts = line.strip().split()
enz, enzabr = parts[:2]
frags = parts[2:]
windows = len(frags) - k
if windows >= 0:
for window in range(windows+1):
print(label + "-" + str(window))
print("\t".join(parts[:2] + frags[window:window+k]))
print()
|
Add script for doing OPTIMA-overlap queries
# Takes a file in the format used by Valouev and extracts k-mers as new maps
import sys
import numpy as np
k = 12 # window size
for lno,line in enumerate(open(sys.argv[1])):
if lno % 3 == 0:
label = line.strip()
if lno % 3 == 1:
parts = line.strip().split()
enz, enzabr = parts[:2]
frags = parts[2:]
windows = len(frags) - k
if windows >= 0:
for window in range(windows+1):
print(label + "-" + str(window))
print("\t".join(parts[:2] + frags[window:window+k]))
print()
|
<commit_before><commit_msg>Add script for doing OPTIMA-overlap queries<commit_after># Takes a file in the format used by Valouev and extracts k-mers as new maps
import sys
import numpy as np
k = 12 # window size
for lno,line in enumerate(open(sys.argv[1])):
if lno % 3 == 0:
label = line.strip()
if lno % 3 == 1:
parts = line.strip().split()
enz, enzabr = parts[:2]
frags = parts[2:]
windows = len(frags) - k
if windows >= 0:
for window in range(windows+1):
print(label + "-" + str(window))
print("\t".join(parts[:2] + frags[window:window+k]))
print()
|
|
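The heart of valouev2kmers.py is a sliding window of k consecutive restriction fragments, with the map label and the two enzyme columns copied into every window. A hedged, self-contained sketch of just that windowing step (k shrunk to 3 for brevity; the fragment values are made up):
# hedged sketch of the windowing logic only
def kmer_windows(frags, k):
    # yield (start_index, k consecutive fragments) for every window position
    for start in range(len(frags) - k + 1):
        yield start, frags[start:start + k]

frags = ["12.3", "4.5", "7.8", "9.1", "2.2"]
for idx, window in kmer_windows(frags, 3):
    print(idx, "\t".join(window))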
288305839dab0f5a54bf3bed5a3a55849afadfff
|
migrations/versions/470_remove_old_imported_buyer_accounts.py
|
migrations/versions/470_remove_old_imported_buyer_accounts.py
|
"""Remove old imported buyer accounts
Revision ID: 470
Revises: 460
Create Date: 2016-01-22 16:20:29.793439
"""
# revision identifiers, used by Alembic.
revision = '470'
down_revision = '460'
from alembic import op
def upgrade():
op.execute("""
DELETE FROM users
WHERE users.role = 'buyer';
""")
def downgrade():
pass # :(
|
Add a migration to remove old imported buyer accounts
|
Add a migration to remove old imported buyer accounts
Users with a "buyer" role were imported from the Grails app and are
a mix of buyer and supplier users. We used to convert them to supplier
accounts during registration, but now that we're planning to open buyer
registration for valid buyers we're going to delete all old accounts and
ask user to re-register if needed instead of trying to validate if the
existing buyer accounts are valid buyers.
|
Python
|
mit
|
alphagov/digitalmarketplace-api,alphagov/digitalmarketplace-api,alphagov/digitalmarketplace-api
|
Add a migration to remove old imported buyer accounts
Users with a "buyer" role were imported from the Grails app and are
a mix of buyer and supplier users. We used to convert them to supplier
accounts during registration, but now that we're planning to open buyer
registration for valid buyers we're going to delete all old accounts and
ask user to re-register if needed instead of trying to validate if the
existing buyer accounts are valid buyers.
|
"""Remove old imported buyer accounts
Revision ID: 470
Revises: 460
Create Date: 2016-01-22 16:20:29.793439
"""
# revision identifiers, used by Alembic.
revision = '470'
down_revision = '460'
from alembic import op
def upgrade():
op.execute("""
DELETE FROM users
WHERE users.role = 'buyer';
""")
def downgrade():
pass # :(
|
<commit_before><commit_msg>Add a migration to remove old imported buyer accounts
Users with a "buyer" role were imported from the Grails app and are
a mix of buyer and supplier users. We used to convert them to supplier
accounts during registration, but now that we're planning to open buyer
registration for valid buyers we're going to delete all old accounts and
ask user to re-register if needed instead of trying to validate if the
existing buyer accounts are valid buyers.<commit_after>
|
"""Remove old imported buyer accounts
Revision ID: 470
Revises: 460
Create Date: 2016-01-22 16:20:29.793439
"""
# revision identifiers, used by Alembic.
revision = '470'
down_revision = '460'
from alembic import op
def upgrade():
op.execute("""
DELETE FROM users
WHERE users.role = 'buyer';
""")
def downgrade():
pass # :(
|
Add a migration to remove old imported buyer accounts
Users with a "buyer" role were imported from the Grails app and are
a mix of buyer and supplier users. We used to convert them to supplier
accounts during registration, but now that we're planning to open buyer
registration for valid buyers we're going to delete all old accounts and
ask user to re-register if needed instead of trying to validate if the
existing buyer accounts are valid buyers.
"""Remove old imported buyer accounts
Revision ID: 470
Revises: 460
Create Date: 2016-01-22 16:20:29.793439
"""
# revision identifiers, used by Alembic.
revision = '470'
down_revision = '460'
from alembic import op
def upgrade():
op.execute("""
DELETE FROM users
WHERE users.role = 'buyer';
""")
def downgrade():
pass # :(
|
<commit_before><commit_msg>Add a migration to remove old imported buyer accounts
Users with a "buyer" role were imported from the Grails app and are
a mix of buyer and supplier users. We used to convert them to supplier
accounts during registration, but now that we're planning to open buyer
registration for valid buyers we're going to delete all old accounts and
ask user to re-register if needed instead of trying to validate if the
existing buyer accounts are valid buyers.<commit_after>"""Remove old imported buyer accounts
Revision ID: 470
Revises: 460
Create Date: 2016-01-22 16:20:29.793439
"""
# revision identifiers, used by Alembic.
revision = '470'
down_revision = '460'
from alembic import op
def upgrade():
op.execute("""
DELETE FROM users
WHERE users.role = 'buyer';
""")
def downgrade():
pass # :(
|
|
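Because the downgrade in this migration is deliberately a no-op, the DELETE cannot be undone; when running such a migration it can help to report how many rows are about to go. A hedged variant of upgrade(), assuming op.get_bind() returns the SQLAlchemy connection (the logging line is illustrative and not part of the committed migration):
# hedged variant of upgrade() that reports the affected row count first
import sqlalchemy as sa
from alembic import op

def upgrade():
    conn = op.get_bind()
    count = conn.execute(
        sa.text("SELECT COUNT(*) FROM users WHERE role = 'buyer'")
    ).scalar()
    print("removing {} imported buyer accounts".format(count))
    op.execute("DELETE FROM users WHERE users.role = 'buyer';")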
b1c8d27d2fdb44a98b33176bf7b14ef6d2d889d2
|
scripts/add-extension.py
|
scripts/add-extension.py
|
#!/usr/bin/env python
# Note, you need to pip install python-magic to use this.
import magic
import os
import sys
def fix_with_magic(filename):
base, ext = os.path.splitext(filename)
if ext:
print "skipping {}".format(filename)
return
result = magic.from_file(filename, mime=True)
if result.startswith("text/"):
new_name = filename + "." + result[5:]
if os.path.exists(new_name):
print "{} already exists!".format(new_name)
raise SystemExit(1)
os.rename(filename, new_name)
else:
print "Unsupported! {}".format(filename)
raise SystemExit(1)
if __name__ == '__main__':
fix_with_magic(sys.argv[1])
|
Add script to add extension based on mimetype
|
Add script to add extension based on mimetype
|
Python
|
bsd-3-clause
|
shaleh/useful-things,shaleh/useful-things
|
Add script to add extension based on mimetype
|
#!/usr/bin/env python
# Note, you need to pip install python-magic to use this.
import magic
import os
import sys
def fix_with_magic(filename):
base, ext = os.path.splitext(filename)
if ext:
print "skipping {}".format(filename)
return
result = magic.from_file(filename, mime=True)
if result.startswith("text/"):
new_name = filename + "." + result[5:]
if os.path.exists(new_name):
print "{} already exists!".format(new_name)
raise SystemExit(1)
os.rename(filename, new_name)
else:
print "Unsupported! {}".format(filename)
raise SystemExit(1)
if __name__ == '__main__':
fix_with_magic(sys.argv[1])
|
<commit_before><commit_msg>Add script to add extension based on mimetype<commit_after>
|
#!/usr/bin/env python
# Note, you need to pip install python-magic to use this.
import magic
import os
import sys
def fix_with_magic(filename):
base, ext = os.path.splitext(filename)
if ext:
print "skipping {}".format(filename)
return
result = magic.from_file(filename, mime=True)
if result.startswith("text/"):
new_name = filename + "." + result[5:]
if os.path.exists(new_name):
print "{} already exists!".format(new_name)
raise SystemExit(1)
os.rename(filename, new_name)
else:
print "Unsupported! {}".format(filename)
raise SystemExit(1)
if __name__ == '__main__':
fix_with_magic(sys.argv[1])
|
Add script to add extension based on mimetype
#!/usr/bin/env python
# Note, you need to pip install python-magic to use this.
import magic
import os
import sys
def fix_with_magic(filename):
base, ext = os.path.splitext(filename)
if ext:
print "skipping {}".format(filename)
return
result = magic.from_file(filename, mime=True)
if result.startswith("text/"):
new_name = filename + "." + result[5:]
if os.path.exists(new_name):
print "{} already exists!".format(new_name)
raise SystemExit(1)
os.rename(filename, new_name)
else:
print "Unsupported! {}".format(filename)
raise SystemExit(1)
if __name__ == '__main__':
fix_with_magic(sys.argv[1])
|
<commit_before><commit_msg>Add script to add extension based on mimetype<commit_after>#!/usr/bin/env python
# Note, you need to pip install python-magic to use this.
import magic
import os
import sys
def fix_with_magic(filename):
base, ext = os.path.splitext(filename)
if ext:
print "skipping {}".format(filename)
return
result = magic.from_file(filename, mime=True)
if result.startswith("text/"):
new_name = filename + "." + result[5:]
if os.path.exists(new_name):
print "{} already exists!".format(new_name)
raise SystemExit(1)
os.rename(filename, new_name)
else:
print "Unsupported! {}".format(filename)
raise SystemExit(1)
if __name__ == '__main__':
fix_with_magic(sys.argv[1])
|
|
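The committed script uses Python 2 print statements. A hedged Python 3 rendering of the same logic, assuming python-magic's from_file returns a plain str under Python 3:
#!/usr/bin/env python3
# hedged Python 3 sketch of the same idea; requires `pip install python-magic`
import os
import sys
import magic

def fix_with_magic(filename):
    _, ext = os.path.splitext(filename)
    if ext:
        print("skipping {}".format(filename))
        return
    mime = magic.from_file(filename, mime=True)   # e.g. "text/plain"
    if not mime.startswith("text/"):
        raise SystemExit("Unsupported! {}".format(filename))
    new_name = "{}.{}".format(filename, mime[len("text/"):])
    if os.path.exists(new_name):
        raise SystemExit("{} already exists!".format(new_name))
    os.rename(filename, new_name)

if __name__ == "__main__":
    fix_with_magic(sys.argv[1])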
3da85702dc841c53d73ef890cec040dcb6a51812
|
utils/event_matcher.py
|
utils/event_matcher.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
logger = logging.getLogger(__name__)
def is_matching_event(play, event):
"""
Checks whether specified play (retrieved from json data) and database event
match.
"""
if play['play_type'] == 'PENL':
return is_matching_penalty_event(event, play)
# elif event.type in ['HIT', 'BLOCK', 'FAC']:
# return is_matching_hit_block_faceoff_event(play, event)
# elif event.type in ['GIVE', 'TAKE']:
# return is_matching_giveaway_takeaway_event(play, event)
# elif event.type in ['SHOT', 'GOAL']:
# return is_matching_shot_event(play, event)
# elif event.type == 'MISS':
# return is_matching_miss_event(play, event)
return False
def is_matching_penalty_event(penalty, play):
"""
Checks whether the given play retrieved from json data matches with the
specified (penalty) event.
"""
print("\tid ", play['active'], penalty.player_id)
print("\tpim ", play['pim'], penalty.pim)
print("\tinfraction", play['infraction'], penalty.infraction.lower())
if penalty.player_id is None:
if (
play['pim'], play['infraction']
) == (
penalty.pim, penalty.infraction.lower()
):
# TODO: logger.debug
return True
else:
if play['active'] is not None and play['passive'] is not None:
if (
play['active'], play['passive'],
play['pim'], play['infraction']
) == (
penalty.player_id, penalty.drawn_player_id,
penalty.pim, penalty.infraction.lower()
):
return True
elif play['active'] is not None:
if (
play['active'], play['pim'], play['infraction']
) == (
penalty.player_id, penalty.pim, penalty.infraction.lower()
):
return True
return False
|
Add utility functions to distinguish events with same type/time
|
Add utility functions to distinguish events with same type/time
|
Python
|
mit
|
leaffan/pynhldb
|
Add utility functions to distinguish events with same type/time
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
logger = logging.getLogger(__name__)
def is_matching_event(play, event):
"""
Checks whether specified play (retrieved from json data) and database event
match.
"""
if play['play_type'] == 'PENL':
return is_matching_penalty_event(event, play)
# elif event.type in ['HIT', 'BLOCK', 'FAC']:
# return is_matching_hit_block_faceoff_event(play, event)
# elif event.type in ['GIVE', 'TAKE']:
# return is_matching_giveaway_takeaway_event(play, event)
# elif event.type in ['SHOT', 'GOAL']:
# return is_matching_shot_event(play, event)
# elif event.type == 'MISS':
# return is_matching_miss_event(play, event)
return False
def is_matching_penalty_event(penalty, play):
"""
Checks whether the given play retrieved from json data matches with the
specified (penalty) event.
"""
print("\tid ", play['active'], penalty.player_id)
print("\tpim ", play['pim'], penalty.pim)
print("\tinfraction", play['infraction'], penalty.infraction.lower())
if penalty.player_id is None:
if (
play['pim'], play['infraction']
) == (
penalty.pim, penalty.infraction.lower()
):
# TODO: logger.debug
return True
else:
if play['active'] is not None and play['passive'] is not None:
if (
play['active'], play['passive'],
play['pim'], play['infraction']
) == (
penalty.player_id, penalty.drawn_player_id,
penalty.pim, penalty.infraction.lower()
):
return True
elif play['active'] is not None:
if (
play['active'], play['pim'], play['infraction']
) == (
penalty.player_id, penalty.pim, penalty.infraction.lower()
):
return True
return False
|
<commit_before><commit_msg>Add utility functions to distinguish events with same type/time<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
logger = logging.getLogger(__name__)
def is_matching_event(play, event):
"""
Checks whether specified play (retrieved from json data) and database event
match.
"""
if play['play_type'] == 'PENL':
return is_matching_penalty_event(event, play)
# elif event.type in ['HIT', 'BLOCK', 'FAC']:
# return is_matching_hit_block_faceoff_event(play, event)
# elif event.type in ['GIVE', 'TAKE']:
# return is_matching_giveaway_takeaway_event(play, event)
# elif event.type in ['SHOT', 'GOAL']:
# return is_matching_shot_event(play, event)
# elif event.type == 'MISS':
# return is_matching_miss_event(play, event)
return False
def is_matching_penalty_event(penalty, play):
"""
Checks whether the given play retrieved from json data matches with the
specified (penalty) event.
"""
print("\tid ", play['active'], penalty.player_id)
print("\tpim ", play['pim'], penalty.pim)
print("\tinfraction", play['infraction'], penalty.infraction.lower())
if penalty.player_id is None:
if (
play['pim'], play['infraction']
) == (
penalty.pim, penalty.infraction.lower()
):
# TODO: logger.debug
return True
else:
if play['active'] is not None and play['passive'] is not None:
if (
play['active'], play['passive'],
play['pim'], play['infraction']
) == (
penalty.player_id, penalty.drawn_player_id,
penalty.pim, penalty.infraction.lower()
):
return True
elif play['active'] is not None:
if (
play['active'], play['pim'], play['infraction']
) == (
penalty.player_id, penalty.pim, penalty.infraction.lower()
):
return True
return False
|
Add utility functions to distinguish events with same type/time
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
logger = logging.getLogger(__name__)
def is_matching_event(play, event):
"""
Checks whether specified play (retrieved from json data) and database event
match.
"""
if play['play_type'] == 'PENL':
return is_matching_penalty_event(event, play)
# elif event.type in ['HIT', 'BLOCK', 'FAC']:
# return is_matching_hit_block_faceoff_event(play, event)
# elif event.type in ['GIVE', 'TAKE']:
# return is_matching_giveaway_takeaway_event(play, event)
# elif event.type in ['SHOT', 'GOAL']:
# return is_matching_shot_event(play, event)
# elif event.type == 'MISS':
# return is_matching_miss_event(play, event)
return False
def is_matching_penalty_event(penalty, play):
"""
Checks whether the given play retrieved from json data matches with the
specified (penalty) event.
"""
print("\tid ", play['active'], penalty.player_id)
print("\tpim ", play['pim'], penalty.pim)
print("\tinfraction", play['infraction'], penalty.infraction.lower())
if penalty.player_id is None:
if (
play['pim'], play['infraction']
) == (
penalty.pim, penalty.infraction.lower()
):
# TODO: logger.debug
return True
else:
if play['active'] is not None and play['passive'] is not None:
if (
play['active'], play['passive'],
play['pim'], play['infraction']
) == (
penalty.player_id, penalty.drawn_player_id,
penalty.pim, penalty.infraction.lower()
):
return True
elif play['active'] is not None:
if (
play['active'], play['pim'], play['infraction']
) == (
penalty.player_id, penalty.pim, penalty.infraction.lower()
):
return True
return False
|
<commit_before><commit_msg>Add utility functions to distinguish events with same type/time<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
logger = logging.getLogger(__name__)
def is_matching_event(play, event):
"""
Checks whether specified play (retrieved from json data) and database event
match.
"""
if play['play_type'] == 'PENL':
return is_matching_penalty_event(event, play)
# elif event.type in ['HIT', 'BLOCK', 'FAC']:
# return is_matching_hit_block_faceoff_event(play, event)
# elif event.type in ['GIVE', 'TAKE']:
# return is_matching_giveaway_takeaway_event(play, event)
# elif event.type in ['SHOT', 'GOAL']:
# return is_matching_shot_event(play, event)
# elif event.type == 'MISS':
# return is_matching_miss_event(play, event)
return False
def is_matching_penalty_event(penalty, play):
"""
Checks whether the given play retrieved from json data matches with the
specified (penalty) event.
"""
print("\tid ", play['active'], penalty.player_id)
print("\tpim ", play['pim'], penalty.pim)
print("\tinfraction", play['infraction'], penalty.infraction.lower())
if penalty.player_id is None:
if (
play['pim'], play['infraction']
) == (
penalty.pim, penalty.infraction.lower()
):
# TODO: logger.debug
return True
else:
if play['active'] is not None and play['passive'] is not None:
if (
play['active'], play['passive'],
play['pim'], play['infraction']
) == (
penalty.player_id, penalty.drawn_player_id,
penalty.pim, penalty.infraction.lower()
):
return True
elif play['active'] is not None:
if (
play['active'], play['pim'], play['infraction']
) == (
penalty.player_id, penalty.pim, penalty.infraction.lower()
):
return True
return False
|
|
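A hedged usage sketch for the matcher above: the json side is a plain dict, and the database side only needs player_id, drawn_player_id, pim and infraction attributes, so a namedtuple stands in for the ORM event here. The import path follows the file location in this repo and is an assumption, as are the player ids:
# hedged usage sketch with a stand-in for the database penalty event
from collections import namedtuple
from utils.event_matcher import is_matching_event   # assumes the module is importable

Penalty = namedtuple("Penalty", "player_id drawn_player_id pim infraction")

play = {"play_type": "PENL", "active": 8471214, "passive": 8473544,
        "pim": 2, "infraction": "tripping"}
penalty = Penalty(8471214, 8473544, 2, "Tripping")

print(is_matching_event(play, penalty))   # True: ids, pim and infraction all line up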
e46dfbd615f0d0fa8ac8d213cd053d71d7b42024
|
migrations/versions/201609141652_55d523872c43_add_abstract_fks.py
|
migrations/versions/201609141652_55d523872c43_add_abstract_fks.py
|
"""Add abstract FKs
Revision ID: 55d523872c43
Revises: 2ce1756a2f12
Create Date: 2016-09-14 16:52:29.196932
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '55d523872c43'
down_revision = '2ce1756a2f12'
def upgrade():
op.create_foreign_key(None,
'abstract_field_values', 'abstracts',
['abstract_id'], ['id'],
source_schema='event_abstracts', referent_schema='event_abstracts')
op.create_foreign_key(None,
'contributions', 'abstracts',
['abstract_id'], ['id'],
source_schema='events', referent_schema='event_abstracts')
def downgrade():
op.drop_constraint('fk_contributions_abstract_id_abstracts', 'contributions', schema='events')
op.drop_constraint('fk_abstract_field_values_abstract_id_abstracts', 'abstract_field_values',
schema='event_abstracts')
|
Add alembic revision to create FKs
|
Add alembic revision to create FKs
To be run AFTER running the zodbimporter
|
Python
|
mit
|
DirkHoffmann/indico,DirkHoffmann/indico,ThiefMaster/indico,mvidalgarcia/indico,mic4ael/indico,mic4ael/indico,pferreir/indico,indico/indico,DirkHoffmann/indico,OmeGak/indico,indico/indico,indico/indico,pferreir/indico,mvidalgarcia/indico,mic4ael/indico,DirkHoffmann/indico,ThiefMaster/indico,mic4ael/indico,pferreir/indico,mvidalgarcia/indico,indico/indico,OmeGak/indico,OmeGak/indico,ThiefMaster/indico,ThiefMaster/indico,pferreir/indico,OmeGak/indico,mvidalgarcia/indico
|
Add alembic revision to create FKs
To be run AFTER running the zodbimporter
|
"""Add abstract FKs
Revision ID: 55d523872c43
Revises: 2ce1756a2f12
Create Date: 2016-09-14 16:52:29.196932
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '55d523872c43'
down_revision = '2ce1756a2f12'
def upgrade():
op.create_foreign_key(None,
'abstract_field_values', 'abstracts',
['abstract_id'], ['id'],
source_schema='event_abstracts', referent_schema='event_abstracts')
op.create_foreign_key(None,
'contributions', 'abstracts',
['abstract_id'], ['id'],
source_schema='events', referent_schema='event_abstracts')
def downgrade():
op.drop_constraint('fk_contributions_abstract_id_abstracts', 'contributions', schema='events')
op.drop_constraint('fk_abstract_field_values_abstract_id_abstracts', 'abstract_field_values',
schema='event_abstracts')
|
<commit_before><commit_msg>Add alembic revision to create FKs
To be run AFTER running the zodbimporter<commit_after>
|
"""Add abstract FKs
Revision ID: 55d523872c43
Revises: 2ce1756a2f12
Create Date: 2016-09-14 16:52:29.196932
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '55d523872c43'
down_revision = '2ce1756a2f12'
def upgrade():
op.create_foreign_key(None,
'abstract_field_values', 'abstracts',
['abstract_id'], ['id'],
source_schema='event_abstracts', referent_schema='event_abstracts')
op.create_foreign_key(None,
'contributions', 'abstracts',
['abstract_id'], ['id'],
source_schema='events', referent_schema='event_abstracts')
def downgrade():
op.drop_constraint('fk_contributions_abstract_id_abstracts', 'contributions', schema='events')
op.drop_constraint('fk_abstract_field_values_abstract_id_abstracts', 'abstract_field_values',
schema='event_abstracts')
|
Add alembic revision to create FKs
To be run AFTER running the zodbimporter
"""Add abstract FKs
Revision ID: 55d523872c43
Revises: 2ce1756a2f12
Create Date: 2016-09-14 16:52:29.196932
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '55d523872c43'
down_revision = '2ce1756a2f12'
def upgrade():
op.create_foreign_key(None,
'abstract_field_values', 'abstracts',
['abstract_id'], ['id'],
source_schema='event_abstracts', referent_schema='event_abstracts')
op.create_foreign_key(None,
'contributions', 'abstracts',
['abstract_id'], ['id'],
source_schema='events', referent_schema='event_abstracts')
def downgrade():
op.drop_constraint('fk_contributions_abstract_id_abstracts', 'contributions', schema='events')
op.drop_constraint('fk_abstract_field_values_abstract_id_abstracts', 'abstract_field_values',
schema='event_abstracts')
|
<commit_before><commit_msg>Add alembic revision to create FKs
To be run AFTER running the zodbimporter<commit_after>"""Add abstract FKs
Revision ID: 55d523872c43
Revises: 2ce1756a2f12
Create Date: 2016-09-14 16:52:29.196932
"""
from alembic import op
# revision identifiers, used by Alembic.
revision = '55d523872c43'
down_revision = '2ce1756a2f12'
def upgrade():
op.create_foreign_key(None,
'abstract_field_values', 'abstracts',
['abstract_id'], ['id'],
source_schema='event_abstracts', referent_schema='event_abstracts')
op.create_foreign_key(None,
'contributions', 'abstracts',
['abstract_id'], ['id'],
source_schema='events', referent_schema='event_abstracts')
def downgrade():
op.drop_constraint('fk_contributions_abstract_id_abstracts', 'contributions', schema='events')
op.drop_constraint('fk_abstract_field_values_abstract_id_abstracts', 'abstract_field_values',
schema='event_abstracts')
|
|
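The downgrade above drops the two constraints by names following the fk_<table>_<column>_<referred table> pattern, presumably produced by the project's naming convention when create_foreign_key is called with None. A hedged equivalent of the same upgrade with those names spelled out, so upgrade() and downgrade() visibly refer to the same identifiers:
# hedged sketch: same FKs with the generated constraint names written out
op.create_foreign_key('fk_abstract_field_values_abstract_id_abstracts',
                      'abstract_field_values', 'abstracts',
                      ['abstract_id'], ['id'],
                      source_schema='event_abstracts',
                      referent_schema='event_abstracts')
op.create_foreign_key('fk_contributions_abstract_id_abstracts',
                      'contributions', 'abstracts',
                      ['abstract_id'], ['id'],
                      source_schema='events',
                      referent_schema='event_abstracts')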
3b48c8a14937547f7e6cc9baa9ea37b744855739
|
judge/migrations/0116_contest_curator_and_tester.py
|
judge/migrations/0116_contest_curator_and_tester.py
|
# Generated by Django 2.2.13 on 2021-04-26 02:57
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('judge', '0115_contest_scoreboard_visibility'),
]
operations = [
migrations.AlterField(
model_name='contest',
name='organizers',
field=models.ManyToManyField(help_text='These users will be able to edit the contest.', related_name='_contest_authors_+', to='judge.Profile'),
),
migrations.RenameField(
model_name='contest',
old_name='organizers',
new_name='authors',
),
migrations.AddField(
model_name='contest',
name='curators',
field=models.ManyToManyField(blank=True, help_text='These users will be able to edit the contest, but will not be listed as authors.', related_name='_contest_curators_+', to='judge.Profile'),
),
migrations.AddField(
model_name='contest',
name='testers',
field=models.ManyToManyField(blank=True, help_text='These users will be able to view the contest, but not edit it.', related_name='_contest_testers_+', to='judge.Profile'),
),
]
|
Add migration for contest curator and tester
|
Add migration for contest curator and tester
|
Python
|
agpl-3.0
|
DMOJ/site,DMOJ/site,DMOJ/site,DMOJ/site
|
Add migration for contest curator and tester
|
# Generated by Django 2.2.13 on 2021-04-26 02:57
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('judge', '0115_contest_scoreboard_visibility'),
]
operations = [
migrations.AlterField(
model_name='contest',
name='organizers',
field=models.ManyToManyField(help_text='These users will be able to edit the contest.', related_name='_contest_authors_+', to='judge.Profile'),
),
migrations.RenameField(
model_name='contest',
old_name='organizers',
new_name='authors',
),
migrations.AddField(
model_name='contest',
name='curators',
field=models.ManyToManyField(blank=True, help_text='These users will be able to edit the contest, but will not be listed as authors.', related_name='_contest_curators_+', to='judge.Profile'),
),
migrations.AddField(
model_name='contest',
name='testers',
field=models.ManyToManyField(blank=True, help_text='These users will be able to view the contest, but not edit it.', related_name='_contest_testers_+', to='judge.Profile'),
),
]
|
<commit_before><commit_msg>Add migration for contest curator and tester<commit_after>
|
# Generated by Django 2.2.13 on 2021-04-26 02:57
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('judge', '0115_contest_scoreboard_visibility'),
]
operations = [
migrations.AlterField(
model_name='contest',
name='organizers',
field=models.ManyToManyField(help_text='These users will be able to edit the contest.', related_name='_contest_authors_+', to='judge.Profile'),
),
migrations.RenameField(
model_name='contest',
old_name='organizers',
new_name='authors',
),
migrations.AddField(
model_name='contest',
name='curators',
field=models.ManyToManyField(blank=True, help_text='These users will be able to edit the contest, but will not be listed as authors.', related_name='_contest_curators_+', to='judge.Profile'),
),
migrations.AddField(
model_name='contest',
name='testers',
field=models.ManyToManyField(blank=True, help_text='These users will be able to view the contest, but not edit it.', related_name='_contest_testers_+', to='judge.Profile'),
),
]
|
Add migration for contest curator and tester
# Generated by Django 2.2.13 on 2021-04-26 02:57
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('judge', '0115_contest_scoreboard_visibility'),
]
operations = [
migrations.AlterField(
model_name='contest',
name='organizers',
field=models.ManyToManyField(help_text='These users will be able to edit the contest.', related_name='_contest_authors_+', to='judge.Profile'),
),
migrations.RenameField(
model_name='contest',
old_name='organizers',
new_name='authors',
),
migrations.AddField(
model_name='contest',
name='curators',
field=models.ManyToManyField(blank=True, help_text='These users will be able to edit the contest, but will not be listed as authors.', related_name='_contest_curators_+', to='judge.Profile'),
),
migrations.AddField(
model_name='contest',
name='testers',
field=models.ManyToManyField(blank=True, help_text='These users will be able to view the contest, but not edit it.', related_name='_contest_testers_+', to='judge.Profile'),
),
]
|
<commit_before><commit_msg>Add migration for contest curator and tester<commit_after># Generated by Django 2.2.13 on 2021-04-26 02:57
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('judge', '0115_contest_scoreboard_visibility'),
]
operations = [
migrations.AlterField(
model_name='contest',
name='organizers',
field=models.ManyToManyField(help_text='These users will be able to edit the contest.', related_name='_contest_authors_+', to='judge.Profile'),
),
migrations.RenameField(
model_name='contest',
old_name='organizers',
new_name='authors',
),
migrations.AddField(
model_name='contest',
name='curators',
field=models.ManyToManyField(blank=True, help_text='These users will be able to edit the contest, but will not be listed as authors.', related_name='_contest_curators_+', to='judge.Profile'),
),
migrations.AddField(
model_name='contest',
name='testers',
field=models.ManyToManyField(blank=True, help_text='These users will be able to view the contest, but not edit it.', related_name='_contest_testers_+', to='judge.Profile'),
),
]
|
|
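The migration above implies a model change in which the old organizers many-to-many becomes authors and two new many-to-many fields are added. A hedged sketch of the Contest fields it implies (field names, related_names and help texts are taken from the migration; the rest of the model is omitted):
# hedged sketch of the model fields implied by this migration
from django.db import models

class Contest(models.Model):
    authors = models.ManyToManyField(
        'judge.Profile', related_name='_contest_authors_+',
        help_text='These users will be able to edit the contest.')
    curators = models.ManyToManyField(
        'judge.Profile', blank=True, related_name='_contest_curators_+',
        help_text='These users will be able to edit the contest, '
                  'but will not be listed as authors.')
    testers = models.ManyToManyField(
        'judge.Profile', blank=True, related_name='_contest_testers_+',
        help_text='These users will be able to view the contest, but not edit it.')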
08293b3c679e7079e80fcb1afc1f8b26570a6f2c
|
mrp_subcontracting/models/mrp_routing_workcenter.py
|
mrp_subcontracting/models/mrp_routing_workcenter.py
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
'product.product', 'Semifinished Subcontracting',
domain=[('type', '=', 'product'),
])
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
comodel_name='product.product', string='Semifinished Subcontracting')
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
|
Allow to select any type of subcontracting product
|
[FIX] mrp_subcontracting: Allow to select any type of subcontracting product
|
Python
|
agpl-3.0
|
diagramsoftware/odoomrp-wip,agaldona/odoomrp-wip-1,raycarnes/odoomrp-wip,alhashash/odoomrp-wip,esthermm/odoomrp-wip,odoocn/odoomrp-wip,esthermm/odoomrp-wip,oihane/odoomrp-wip,factorlibre/odoomrp-wip,xpansa/odoomrp-wip,maljac/odoomrp-wip,diagramsoftware/odoomrp-wip,dvitme/odoomrp-wip,windedge/odoomrp-wip,odoomrp/odoomrp-wip,sergiocorato/odoomrp-wip,michaeljohn32/odoomrp-wip,Eficent/odoomrp-wip,factorlibre/odoomrp-wip,sergiocorato/odoomrp-wip,jobiols/odoomrp-wip,InakiZabala/odoomrp-wip,alfredoavanzosc/odoomrp-wip-1,ddico/odoomrp-wip,odoomrp/odoomrp-wip,Daniel-CA/odoomrp-wip-public,jorsea/odoomrp-wip,oihane/odoomrp-wip,Daniel-CA/odoomrp-wip-public,agaldona/odoomrp-wip-1,Eficent/odoomrp-wip,invitu/odoomrp-wip,Antiun/odoomrp-wip,jobiols/odoomrp-wip,Endika/odoomrp-wip
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
'product.product', 'Semifinished Subcontracting',
domain=[('type', '=', 'product'),
])
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
[FIX] mrp_subcontracting: Allow to select any type of subcontracting product
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
comodel_name='product.product', string='Semifinished Subcontracting')
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
|
<commit_before># -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
'product.product', 'Semifinished Subcontracting',
domain=[('type', '=', 'product'),
])
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
<commit_msg>[FIX] mrp_subcontracting: Allow to select any type of subcontracting product<commit_after>
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
comodel_name='product.product', string='Semifinished Subcontracting')
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
|
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
'product.product', 'Semifinished Subcontracting',
domain=[('type', '=', 'product'),
])
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
[FIX] mrp_subcontracting: Allow to select any type of subcontracting product
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
comodel_name='product.product', string='Semifinished Subcontracting')
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
|
<commit_before># -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
'product.product', 'Semifinished Subcontracting',
domain=[('type', '=', 'product'),
])
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
<commit_msg>[FIX] mrp_subcontracting: Allow to select any type of subcontracting product<commit_after># -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields
class MrpRoutingWorkcenter(models.Model):
_inherit = 'mrp.routing.workcenter'
external = fields.Boolean('External', help="Is Subcontract Operation")
semifinished_id = fields.Many2one(
comodel_name='product.product', string='Semifinished Subcontracting')
picking_type_id = fields.Many2one('stock.picking.type', 'Picking Type',
domain=[('code', '=', 'outgoing')])
|
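The fix above removes the ('type', '=', 'product') domain, so consumable and service products become selectable as the semifinished subcontracting product. A hedged, framework-free sketch of what that widening of the selection rule amounts to (the product type codes follow Odoo's product, consu and service values; the function name is illustrative):
# hedged sketch of the selection rule before and after the domain removal
def selectable_as_semifinished(product_type, restrict_to_stockable):
    return product_type == 'product' if restrict_to_stockable else True

for ptype in ('product', 'consu', 'service'):
    print(ptype,
          selectable_as_semifinished(ptype, restrict_to_stockable=True),   # old domain
          selectable_as_semifinished(ptype, restrict_to_stockable=False))  # after the fix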
8f18bc32e84de1ce25bfcb972c6b9f1b24e0f6af
|
runners/test/python/sparse_codegen_test_util.py
|
runners/test/python/sparse_codegen_test_util.py
|
"""Testing error handling in the common test utilities.
The module tests the error handling in the utilities we use for writing
exhaustive tests for the sparse codegen.
"""
import inspect
import sys
# Import MLIR related modules.
from mlir.dialects import sparse_tensor as st
from mlir.dialects.linalg.opdsl import lang as dsl
# Import common test tools.
import sparse_codegen_test_common as tc
# A test returns 1 when it fails to indicate the number of failing test. This is
# to help accumulating the total number of failing tests.
_PASS = 0
_FAIL = 1
def _pass_test(name: str) -> int:
print(f"{name} passed.")
return _PASS
def _fail_test(name: str) -> int:
print(f"{name} failed.")
return _FAIL
def _test_mismatching_ordering_sparsity() -> int:
"""Test for inconsistent input descriptor parameters.
The dimension ordering and the sparsities in this test don't have the same
length.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 1, 2], [st.DimLevelType.dense, st.DimLevelType.dense],
0, 0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_ordering() -> int:
"""Test for invalid dimension orderings.
The dimension ordering in this test is not a permutation of 0..n-1, where
n is the length of the dimension ordering.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 2], [st.DimLevelType.dense, st.DimLevelType.dense], 0,
0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_affine_expression() -> int:
"""Test for invalid affine expressions.
The affine expression in the first input here is not defined in the iteration
space.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.TestDesc([dsl.S.M, dsl.S.N], [8, 8], [dsl.S.M, dsl.S.X])
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def run_test():
num_failed = (
_test_mismatching_ordering_sparsity() + _test_invalid_ordering() +
_test_invalid_affine_expression())
if num_failed == 0:
print("All test passed.")
else:
print(f"{num_failed} tests failed.")
sys.exit("FAILURE")
run_test()
|
Add tests for the exhaustive test utilities.
|
[mlir][python][sparse] Add tests for the exhaustive test utilities.
Add tests for the common utilities that we use for writing exhaustive tests for
the MLIR sparse codegen.
Simplify the relevant build rules by adding a new filegroup and moving a shared
library to an existing filegroup.
PiperOrigin-RevId: 385571670
|
Python
|
apache-2.0
|
iree-org/iree-llvm-sandbox,iree-org/iree-llvm-sandbox,iree-org/iree-llvm-sandbox,iree-org/iree-llvm-sandbox
|
[mlir][python][sparse] Add tests for the exhaustive test utilities.
Add tests for the common utilities that we use for writing exhaustive tests for
the MLIR sparse codegen.
Simplify the relevant build rules by adding a new filegroup and moving a shared
library to an existing filegroup.
PiperOrigin-RevId: 385571670
|
"""Testing error handling in the common test utilities.
The module tests the error handling in the utilities we use for writing
exhaustive tests for the sparse codegen.
"""
import inspect
import sys
# Import MLIR related modules.
from mlir.dialects import sparse_tensor as st
from mlir.dialects.linalg.opdsl import lang as dsl
# Import common test tools.
import sparse_codegen_test_common as tc
# A test returns 1 when it fails and 0 when it passes. This is
# to help accumulate the total number of failing tests.
_PASS = 0
_FAIL = 1
def _pass_test(name: str) -> int:
print(f"{name} passed.")
return _PASS
def _fail_test(name: str) -> int:
print(f"{name} failed.")
return _FAIL
def _test_mismatching_ordering_sparsity() -> int:
"""Test for inconsistent input descriptor parameters.
The dimension ordering and the sparsities in this test don't have the same
length.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 1, 2], [st.DimLevelType.dense, st.DimLevelType.dense],
0, 0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_ordering() -> int:
"""Test for invalid dimension orderings.
The dimension ordering in this test is not a permutation of 0..n-1, where
n is the length of the dimension ordering.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 2], [st.DimLevelType.dense, st.DimLevelType.dense], 0,
0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_affine_expression() -> int:
"""Test for invalid affine expressions.
The affine expression in the first input here is not defined in the iteration
space.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.TestDesc([dsl.S.M, dsl.S.N], [8, 8], [dsl.S.M, dsl.S.X])
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def run_test():
num_failed = (
_test_mismatching_ordering_sparsity() + _test_invalid_ordering() +
_test_invalid_affine_expression())
if num_failed == 0:
print("All test passed.")
else:
print(f"{num_failed} tests failed.")
sys.exit("FAILURE")
run_test()
|
<commit_before><commit_msg>[mlir][python[[sparse] Add tests for the exhaustive test utilities.
Add tests for the common utilities that we use for writing exhaustive tests for
the MLIR sparse codegen.
Simplify the relevant build rules by adding a new filegroup and moving a shared
library to an existing filegroup.
PiperOrigin-RevId: 385571670<commit_after>
|
"""Testing error handling in the common test utilities.
The module tests the error handling in the utilities we use for writing
exhaustive tests for the sparse codegen.
"""
import inspect
import sys
# Import MLIR related modules.
from mlir.dialects import sparse_tensor as st
from mlir.dialects.linalg.opdsl import lang as dsl
# Import common test tools.
import sparse_codegen_test_common as tc
# A test returns 1 when it fails and 0 when it passes. This is
# to help accumulate the total number of failing tests.
_PASS = 0
_FAIL = 1
def _pass_test(name: str) -> int:
print(f"{name} passed.")
return _PASS
def _fail_test(name: str) -> int:
print(f"{name} failed.")
return _FAIL
def _test_mismatching_ordering_sparsity() -> int:
"""Test for inconsistent input descriptor parameters.
The dimension ordering and the sparsities in this test don't have the same
length.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 1, 2], [st.DimLevelType.dense, st.DimLevelType.dense],
0, 0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_ordering() -> int:
"""Test for invalid dimension orderings.
The dimension ordering in this test is not a permutation of 0..n-1, where
n is the length of the dimension ordering.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 2], [st.DimLevelType.dense, st.DimLevelType.dense], 0,
0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_affine_expression() -> int:
"""Test for invalid affine expressions.
The affine expression in the first input here is not defined in the iteration
space.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.TestDesc([dsl.S.M, dsl.S.N], [8, 8], [dsl.S.M, dsl.S.X])
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def run_test():
num_failed = (
_test_mismatching_ordering_sparsity() + _test_invalid_ordering() +
_test_invalid_affine_expression())
if num_failed == 0:
print("All test passed.")
else:
print(f"{num_failed} tests failed.")
sys.exit("FAILURE")
run_test()
|
[mlir][python[[sparse] Add tests for the exhaustive test utilities.
Add tests for the common utilities that we use for writing exhaustive tests for
the MLIR sparse codegen.
Simplify the relevant build rules by adding a new filegroup and moving a shared
library to an existing filegroup.
PiperOrigin-RevId: 385571670"""Testing error handling in the common test utilities.
The module tests the error handling in the utilities we use for writing
exhaustive tests for the sparse codegen.
"""
import inspect
import sys
# Import MLIR related modules.
from mlir.dialects import sparse_tensor as st
from mlir.dialects.linalg.opdsl import lang as dsl
# Import common test tools.
import sparse_codegen_test_common as tc
# A test returns 1 when it fails and 0 when it passes. This is
# to help accumulate the total number of failing tests.
_PASS = 0
_FAIL = 1
def _pass_test(name: str) -> int:
print(f"{name} passed.")
return _PASS
def _fail_test(name: str) -> int:
print(f"{name} failed.")
return _FAIL
def _test_mismatching_ordering_sparsity() -> int:
"""Test for inconsistent input descriptor parameters.
The dimension ordering and the sparsities in this test don't have the same
length.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 1, 2], [st.DimLevelType.dense, st.DimLevelType.dense],
0, 0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_ordering() -> int:
"""Test for invalid dimension orderings.
The dimension ordering in this test is not a permutation of 0..n-1, where
n is the length of the dimension ordering.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 2], [st.DimLevelType.dense, st.DimLevelType.dense], 0,
0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_affine_expression() -> int:
"""Test for invalid affine expressions.
The affine expression in the first input here is not defined in the iteration
space.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.TestDesc([dsl.S.M, dsl.S.N], [8, 8], [dsl.S.M, dsl.S.X])
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def run_test():
num_failed = (
_test_mismatching_ordering_sparsity() + _test_invalid_ordering() +
_test_invalid_affine_expression())
if num_failed == 0:
print("All test passed.")
else:
print(f"{num_failed} tests failed.")
sys.exit("FAILURE")
run_test()
|
<commit_before><commit_msg>[mlir][python[[sparse] Add tests for the exhaustive test utilities.
Add tests for the common utilities that we use for writing exhaustive tests for
the MLIR sparse codegen.
Simplify the relevant build rules by adding a new filegroup and moving a shared
library to an existing filegroup.
PiperOrigin-RevId: 385571670<commit_after>"""Testing error handling in the common test utilities.
The module tests the error handling in the utilities we use for writing
exhaustive tests for the sparse codegen.
"""
import inspect
import sys
# Import MLIR related modules.
from mlir.dialects import sparse_tensor as st
from mlir.dialects.linalg.opdsl import lang as dsl
# Import common test tools.
import sparse_codegen_test_common as tc
# A test returns 1 when it fails and 0 when it passes. This is
# to help accumulate the total number of failing tests.
_PASS = 0
_FAIL = 1
def _pass_test(name: str) -> int:
print(f"{name} passed.")
return _PASS
def _fail_test(name: str) -> int:
print(f"{name} failed.")
return _FAIL
def _test_mismatching_ordering_sparsity() -> int:
"""Test for inconsistent input descriptor parameters.
The dimension ordering and the sparsities in this test don't have the same
length.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 1, 2], [st.DimLevelType.dense, st.DimLevelType.dense],
0, 0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_ordering() -> int:
"""Test for invalid dimension orderings.
The dimension ordering in this test is not a permutation of 0..n-1, where
n is the length of the dimension ordering.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.InputDesc([0, 2], [st.DimLevelType.dense, st.DimLevelType.dense], 0,
0)
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def _test_invalid_affine_expression() -> int:
"""Test for invalid affine expressions.
The affine expression in the first input here is not defined in the iteration
space.
"""
name = inspect.currentframe().f_code.co_name
try:
_ = tc.TestDesc([dsl.S.M, dsl.S.N], [8, 8], [dsl.S.M, dsl.S.X])
except ValueError:
num_failed = _pass_test(name)
else:
num_failed = _fail_test(name)
return num_failed
def run_test():
num_failed = (
_test_mismatching_ordering_sparsity() + _test_invalid_ordering() +
_test_invalid_affine_expression())
if num_failed == 0:
print("All test passed.")
else:
print(f"{num_failed} tests failed.")
sys.exit("FAILURE")
run_test()
|
|
dca6f5450c449aeedca792326964ddb91992ec95
|
test/unit/sorting/test_bucket_sort.py
|
test/unit/sorting/test_bucket_sort.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import unittest
from helper.read_data_file import read_int_array
from sorting.bucket_sort import sort
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
class BucketSortTester(unittest.TestCase):
# Test sort in default order, i.e., in ascending order.
def test_sort_default(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array)
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in ascending order.
def test_sort_ascending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'asc')
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in descending order.
def test_sort_descending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'desc')
expect = [969, 864, 702, 647, 637, 589, 567, 445, 444, 417, 140, 113, 86, 76, 65]
self.assertEqual(expect, array)
if __name__ == '__main__':
unittest.main()
|
Add unit test for bucket sort implementation.
|
Add unit test for bucket sort implementation.
|
Python
|
mit
|
weichen2046/algorithm-study,weichen2046/algorithm-study
|
Add unit test for bucket sort implementation.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import unittest
from helper.read_data_file import read_int_array
from sorting.bucket_sort import sort
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
class BucketSortTester(unittest.TestCase):
# Test sort in default order, i.e., in ascending order.
def test_sort_default(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array)
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in ascending order.
def test_sort_ascending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'asc')
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in descending order.
def test_sort_descending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'desc')
expect = [969, 864, 702, 647, 637, 589, 567, 445, 444, 417, 140, 113, 86, 76, 65]
self.assertEqual(expect, array)
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add unit test for bucket sort implementation.<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import unittest
from helper.read_data_file import read_int_array
from sorting.bucket_sort import sort
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
class BucketSortTester(unittest.TestCase):
# Test sort in default order, i.e., in ascending order.
def test_sort_default(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array)
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in ascending order.
def test_sort_ascending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'asc')
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in descending order.
def test_sort_descending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'desc')
expect = [969, 864, 702, 647, 637, 589, 567, 445, 444, 417, 140, 113, 86, 76, 65]
self.assertEqual(expect, array)
if __name__ == '__main__':
unittest.main()
|
Add unit test for bucket sort implementation.#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import unittest
from helper.read_data_file import read_int_array
from sorting.bucket_sort import sort
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
class BucketSortTester(unittest.TestCase):
# Test sort in default order, i.e., in ascending order.
def test_sort_default(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array)
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in ascending order.
def test_sort_ascending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'asc')
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in descending order.
def test_sort_descending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'desc')
expect = [969, 864, 702, 647, 637, 589, 567, 445, 444, 417, 140, 113, 86, 76, 65]
self.assertEqual(expect, array)
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add unit test for bucket sort implementation.<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import unittest
from helper.read_data_file import read_int_array
from sorting.bucket_sort import sort
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
class BucketSortTester(unittest.TestCase):
# Test sort in default order, i.e., in ascending order.
def test_sort_default(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array)
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in ascending order.
def test_sort_ascending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'asc')
expect = [65, 76, 86, 113, 140, 417, 444, 445, 567, 589, 637, 647, 702, 864, 969]
self.assertEqual(expect, array)
# Test sort in descending order.
def test_sort_descending(self):
array = read_int_array(os.path.join(BASE_DIR, 'data1.data'))
array = sort(array, 'desc')
expect = [969, 864, 702, 647, 637, 589, 567, 445, 444, 417, 140, 113, 86, 76, 65]
self.assertEqual(expect, array)
if __name__ == '__main__':
unittest.main()
|
|
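The tests in the preceding record import sort from sorting.bucket_sort, but the implementation itself is not part of the commit. Purely as an illustration of the interface those tests exercise (sort(array, order) returning a new list, with 'asc'/'desc' flags), a minimal counting-style bucket sort for non-negative integers could look like the sketch below; this is an assumption added for context, not the repository's actual code.
def sort(array, order='asc'):
    """Illustrative bucket sort: one bucket per value, non-negative ints only."""
    if not array:
        return []
    # Count occurrences of each value into its own bucket.
    buckets = [0] * (max(array) + 1)
    for value in array:
        buckets[value] += 1
    # Rebuild the list in ascending order from the bucket counts.
    result = []
    for value, count in enumerate(buckets):
        result.extend([value] * count)
    # The tests above drive descending order through the 'desc' flag.
    if order == 'desc':
        result.reverse()
    return result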
ce09da7df6668692133786bfdd5e25c6ac5d2c17
|
conpaas-director/cpsdirector/get_external_idps.py
|
conpaas-director/cpsdirector/get_external_idps.py
|
#!/usr/bin/python
import ConfigParser
import pprint
def get_external_idps(director_configfile):
"""
get_external_idps(director_configfile)
Checks in the conpaas section if the support_external_idp option is present and set.
If so, checks if external_idps option is present, and for all
named idps collects all the options in the respective idp sections.
Validation of option names and values in the idp sections is left to the calling program.
Returns a dictionary with all idps and their options.
"""
dict1 = {}
Config = ConfigParser.ConfigParser()
result = Config.read(director_configfile)
if result == []:
print 'Could not read %s' % director_configfile
idp_support = False # easy init
external_idps = [] # easy init
try:
idp_support = Config.getboolean('conpaas', 'support_external_idp')
if idp_support:
external_idps = Config.get('conpaas', 'external_idps').split(',')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError) as e:
print e
if not idp_support:
return dict1
for xi in external_idps:
options = [] # easy init
try:
options = Config.options(xi)
except ConfigParser.NoSectionError as e:
print e
continue # next xi
dict2 = {}
for option in options:
dict2[option] = Config.get(xi, option)
dict1[xi] = dict2
return dict1
if __name__ == '__main__':
ei_dict = get_external_idps("/etc/cpsdirector/director.cfg")
pprint.pprint(ei_dict)
|
Add code to support OpenID
|
Add code to support OpenID
|
Python
|
bsd-3-clause
|
ConPaaS-team/conpaas,ConPaaS-team/conpaas,ConPaaS-team/conpaas,ConPaaS-team/conpaas,ConPaaS-team/conpaas,ConPaaS-team/conpaas,ConPaaS-team/conpaas
|
Add code to support OpenID
|
#!/usr/bin/python
import ConfigParser
import pprint
def get_external_idps(director_configfile):
"""
get_external_idps(director_configfile)
Checks in the conpaas section if the support_external_idp option is present and set.
If so, checks if external_idps option is present, and for all
named idps collects all the options in the respective idp sections.
Validation of option names and values in the idp sections is left to the calling program.
Returns a dictionary with all idps and their options.
"""
dict1 = {}
Config = ConfigParser.ConfigParser()
result = Config.read(director_configfile)
if result == []:
print 'Could not read %s' % director_configfile
idp_support = False # easy init
external_idps = [] # easy init
try:
idp_support = Config.getboolean('conpaas', 'support_external_idp')
if idp_support:
external_idps = Config.get('conpaas', 'external_idps').split(',')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError) as e:
print e
if not idp_support:
return dict1
for xi in external_idps:
options = [] # easy init
try:
options = Config.options(xi)
except ConfigParser.NoSectionError as e:
print e
continue # next xi
dict2 = {}
for option in options:
dict2[option] = Config.get(xi, option)
dict1[xi] = dict2
return dict1
if __name__ == '__main__':
ei_dict = get_external_idps("/etc/cpsdirector/director.cfg")
pprint.pprint(ei_dict)
|
<commit_before><commit_msg>Add code to support OpenID<commit_after>
|
#!/usr/bin/python
import ConfigParser
import pprint
def get_external_idps(director_configfile):
"""
get_external_idps(director_configfile)
Checks in the conpaas section if the support_external_idp option is present and set.
If so, checks if external_idps option is present, and for all
named idps collects all the options in the respective idp sections.
Validation of option names and values in the idp sections is left to the calling program.
Returns a dictionary with all idps and their options.
"""
dict1 = {}
Config = ConfigParser.ConfigParser()
result = Config.read(director_configfile)
if result == []:
print 'Could not read %s' % director_configfile
idp_support = False # easy init
external_idps = [] # easy init
try:
idp_support = Config.getboolean('conpaas', 'support_external_idp')
if idp_support:
external_idps = Config.get('conpaas', 'external_idps').split(',')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError) as e:
print e
if not idp_support:
return dict1
for xi in external_idps:
options = [] # easy init
try:
options = Config.options(xi)
except ConfigParser.NoSectionError as e:
print e
continue # next xi
dict2 = {}
for option in options:
dict2[option] = Config.get(xi, option)
dict1[xi] = dict2
return dict1
if __name__ == '__main__':
ei_dict = get_external_idps("/etc/cpsdirector/director.cfg")
pprint.pprint(ei_dict)
|
Add code to support OpenID#!/usr/bin/python
import ConfigParser
import pprint
def get_external_idps(director_configfile):
"""
get_external_idps(director_configfile)
Checks in the conpaas section if the support_external_idp option is present and set.
If so, checks if external_idps option is present, and for all
named idps collects all the options in the respective idp sections.
Validation of option names and values in the idp sections is left to the calling program.
Returns a dictionary with all idps and their options.
"""
dict1 = {}
Config = ConfigParser.ConfigParser()
result = Config.read(director_configfile)
if result == []:
print 'Could not read %s' % director_configfile
idp_support = False # easy init
external_idps = [] # easy init
try:
idp_support = Config.getboolean('conpaas', 'support_external_idp')
if idp_support:
external_idps = Config.get('conpaas', 'external_idps').split(',')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError) as e:
print e
if not idp_support:
return dict1
for xi in external_idps:
options = [] # easy init
try:
options = Config.options(xi)
except ConfigParser.NoSectionError as e:
print e
continue # next xi
dict2 = {}
for option in options:
dict2[option] = Config.get(xi, option)
dict1[xi] = dict2
return dict1
if __name__ == '__main__':
ei_dict = get_external_idps("/etc/cpsdirector/director.cfg")
pprint.pprint(ei_dict)
|
<commit_before><commit_msg>Add code to support OpenID<commit_after>#!/usr/bin/python
import ConfigParser
import pprint
def get_external_idps(director_configfile):
"""
get_external_idps(director_configfile)
Checks in the conpaas section if the support_external_idp option is present and set.
If so, checks if external_idps option is present, and for all
named idps collects all the options in the respective idp sections.
Validation of option names and values in the idp sections is left to the calling program.
Returns a dictionary with all idps and their options.
"""
dict1 = {}
Config = ConfigParser.ConfigParser()
result = Config.read(director_configfile)
if result == []:
print 'Could not read %s' % director_configfile
idp_support = False # easy init
external_idps = [] # easy init
try:
idp_support = Config.getboolean('conpaas', 'support_external_idp')
if idp_support:
external_idps = Config.get('conpaas', 'external_idps').split(',')
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError) as e:
print e
if not idp_support:
return dict1
for xi in external_idps:
options = [] # easy init
try:
options = Config.options(xi)
except ConfigParser.NoSectionError as e:
print e
continue # next xi
dict2 = {}
for option in options:
dict2[option] = Config.get(xi, option)
dict1[xi] = dict2
return dict1
if __name__ == '__main__':
ei_dict = get_external_idps("/etc/cpsdirector/director.cfg")
pprint.pprint(ei_dict)
|
|
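The docstring of get_external_idps above names the [conpaas] options it reads but leaves the per-IdP sections open. A hypothetical director.cfg that would satisfy that parsing logic is sketched below; the option names inside the IdP sections (metadata_url, entity_id) are invented placeholders, since the script explicitly leaves their validation to the caller.
EXAMPLE_CFG = """
[conpaas]
support_external_idp = true
external_idps = idp_one,idp_two

[idp_one]
metadata_url = https://idp-one.example.org/metadata
entity_id = https://idp-one.example.org/

[idp_two]
metadata_url = https://idp-two.example.org/metadata
entity_id = https://idp-two.example.org/
"""

# Written to disk and passed to get_external_idps(), this would yield
# {'idp_one': {'metadata_url': ..., 'entity_id': ...},
#  'idp_two': {'metadata_url': ..., 'entity_id': ...}}.
with open('/tmp/director.cfg', 'w') as cfg_file:
    cfg_file.write(EXAMPLE_CFG)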
a47860e73c68ffbba97f1a70355223a2052a5f3c
|
tests/modules/test_pulseaudio.py
|
tests/modules/test_pulseaudio.py
|
# pylint: disable=C0103,C0111
import mock
import unittest
import tests.mocks as mocks
from bumblebee.input import LEFT_MOUSE, RIGHT_MOUSE, WHEEL_UP, WHEEL_DOWN
from bumblebee.modules.pulseaudio import Module
class TestPulseAudioModule(unittest.TestCase):
def setUp(self):
mocks.setup_test(self, Module)
def tearDown(self):
mocks.teardown_test(self)
def test_leftclick(self):
mocks.mouseEvent(stdin=self.stdin, button=LEFT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-mute @DEFAULT_SOURCE@ toggle")
def test_rightclick(self):
mocks.mouseEvent(stdin=self.stdin, button=RIGHT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pavucontrol")
def test_wheelup(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_UP, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ +2%")
def test_wheeldown(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_DOWN, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ -2%")
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
|
Add unit tests for pulseaudio module
|
[tests] Add unit tests for pulseaudio module
|
Python
|
mit
|
tobi-wan-kenobi/bumblebee-status,tobi-wan-kenobi/bumblebee-status
|
[tests] Add unit tests for pulseaudio module
|
# pylint: disable=C0103,C0111
import mock
import unittest
import tests.mocks as mocks
from bumblebee.input import LEFT_MOUSE, RIGHT_MOUSE, WHEEL_UP, WHEEL_DOWN
from bumblebee.modules.pulseaudio import Module
class TestPulseAudioModule(unittest.TestCase):
def setUp(self):
mocks.setup_test(self, Module)
def tearDown(self):
mocks.teardown_test(self)
def test_leftclick(self):
mocks.mouseEvent(stdin=self.stdin, button=LEFT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-mute @DEFAULT_SOURCE@ toggle")
def test_rightclick(self):
mocks.mouseEvent(stdin=self.stdin, button=RIGHT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pavucontrol")
def test_wheelup(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_UP, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ +2%")
def test_wheeldown(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_DOWN, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ -2%")
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
|
<commit_before><commit_msg>[tests] Add unit tests for pulseaudio module<commit_after>
|
# pylint: disable=C0103,C0111
import mock
import unittest
import tests.mocks as mocks
from bumblebee.input import LEFT_MOUSE, RIGHT_MOUSE, WHEEL_UP, WHEEL_DOWN
from bumblebee.modules.pulseaudio import Module
class TestPulseAudioModule(unittest.TestCase):
def setUp(self):
mocks.setup_test(self, Module)
def tearDown(self):
mocks.teardown_test(self)
def test_leftclick(self):
mocks.mouseEvent(stdin=self.stdin, button=LEFT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-mute @DEFAULT_SOURCE@ toggle")
def test_rightclick(self):
mocks.mouseEvent(stdin=self.stdin, button=RIGHT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pavucontrol")
def test_wheelup(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_UP, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ +2%")
def test_wheeldown(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_DOWN, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ -2%")
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
|
[tests] Add unit tests for pulseaudio module# pylint: disable=C0103,C0111
import mock
import unittest
import tests.mocks as mocks
from bumblebee.input import LEFT_MOUSE, RIGHT_MOUSE, WHEEL_UP, WHEEL_DOWN
from bumblebee.modules.pulseaudio import Module
class TestPulseAudioModule(unittest.TestCase):
def setUp(self):
mocks.setup_test(self, Module)
def tearDown(self):
mocks.teardown_test(self)
def test_leftclick(self):
mocks.mouseEvent(stdin=self.stdin, button=LEFT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-mute @DEFAULT_SOURCE@ toggle")
def test_rightclick(self):
mocks.mouseEvent(stdin=self.stdin, button=RIGHT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pavucontrol")
def test_wheelup(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_UP, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ +2%")
def test_wheeldown(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_DOWN, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ -2%")
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
|
<commit_before><commit_msg>[tests] Add unit tests for pulseaudio module<commit_after># pylint: disable=C0103,C0111
import mock
import unittest
import tests.mocks as mocks
from bumblebee.input import LEFT_MOUSE, RIGHT_MOUSE, WHEEL_UP, WHEEL_DOWN
from bumblebee.modules.pulseaudio import Module
class TestPulseAudioModule(unittest.TestCase):
def setUp(self):
mocks.setup_test(self, Module)
def tearDown(self):
mocks.teardown_test(self)
def test_leftclick(self):
mocks.mouseEvent(stdin=self.stdin, button=LEFT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-mute @DEFAULT_SOURCE@ toggle")
def test_rightclick(self):
mocks.mouseEvent(stdin=self.stdin, button=RIGHT_MOUSE, inp=self.input, module=self.module)
self.popen.assert_call("pavucontrol")
def test_wheelup(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_UP, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ +2%")
def test_wheeldown(self):
mocks.mouseEvent(stdin=self.stdin, button=WHEEL_DOWN, inp=self.input, module=self.module)
self.popen.assert_call("pactl set-source-volume @DEFAULT_SOURCE@ -2%")
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
|
|
c88947ad3dd5ad0c2e113e8fb8a37aff642b3381
|
timm/data/parsers/parser_hfds.py
|
timm/data/parsers/parser_hfds.py
|
""" Dataset parser interface that wraps Hugging Face datasets
Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import math
import torch
import torch.distributed as dist
from PIL import Image
try:
import datasets
except ImportError as e:
print("Please install Hugging Face datasets package `pip install datasets`.")
exit(1)
from .parser import Parser
def get_class_labels(info):
if 'label' not in info.features:
return {}
class_label = info.features['label']
class_to_idx = {n: class_label.str2int(n) for n in class_label.names}
return class_to_idx
class ParserHfds(Parser):
def __init__(
self,
root,
name,
split='train',
class_map=None,
download=False,
):
"""
"""
super().__init__()
self.root = root
self.split = split
self.dataset = datasets.load_dataset(
name, # 'name' maps to path arg in hf datasets
split=split,
cache_dir=self.root, # timm doesn't expect hidden cache dir for datasets, specify a path
#use_auth_token=True,
)
# leave decode for caller, plus we want easy access to original path names...
self.dataset = self.dataset.cast_column('image', datasets.Image(decode=False))
self.class_to_idx = get_class_labels(self.dataset.info)
self.split_info = self.dataset.info.splits[split]
self.num_examples = self.split_info.num_examples
def __getitem__(self, index):
item = self.dataset[index]
image = item['image']
if 'bytes' in image and image['bytes']:
image = io.BytesIO(image['bytes'])
else:
assert 'path' in image and image['path']
image = open(image['path'], 'rb')
return image, item['label']
def __len__(self):
return len(self.dataset)
def _filename(self, index, basename=False, absolute=False):
item = self.dataset[index]
return item['image']['path']
|
Add initial Hugging Face Datasets parser impl.
|
Add initial Hugging Face Datasets parser impl.
|
Python
|
apache-2.0
|
rwightman/pytorch-image-models,rwightman/pytorch-image-models
|
Add initial Hugging Face Datasets parser impl.
|
""" Dataset parser interface that wraps Hugging Face datasets
Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import math
import torch
import torch.distributed as dist
from PIL import Image
try:
import datasets
except ImportError as e:
print("Please install Hugging Face datasets package `pip install datasets`.")
exit(1)
from .parser import Parser
def get_class_labels(info):
if 'label' not in info.features:
return {}
class_label = info.features['label']
class_to_idx = {n: class_label.str2int(n) for n in class_label.names}
return class_to_idx
class ParserHfds(Parser):
def __init__(
self,
root,
name,
split='train',
class_map=None,
download=False,
):
"""
"""
super().__init__()
self.root = root
self.split = split
self.dataset = datasets.load_dataset(
name, # 'name' maps to path arg in hf datasets
split=split,
cache_dir=self.root, # timm doesn't expect hidden cache dir for datasets, specify a path
#use_auth_token=True,
)
# leave decode for caller, plus we want easy access to original path names...
self.dataset = self.dataset.cast_column('image', datasets.Image(decode=False))
self.class_to_idx = get_class_labels(self.dataset.info)
self.split_info = self.dataset.info.splits[split]
self.num_examples = self.split_info.num_examples
def __getitem__(self, index):
item = self.dataset[index]
image = item['image']
if 'bytes' in image and image['bytes']:
image = io.BytesIO(image['bytes'])
else:
assert 'path' in image and image['path']
image = open(image['path'], 'rb')
return image, item['label']
def __len__(self):
return len(self.dataset)
def _filename(self, index, basename=False, absolute=False):
item = self.dataset[index]
return item['image']['path']
|
<commit_before><commit_msg>Add initial Hugging Face Datasets parser impl.<commit_after>
|
""" Dataset parser interface that wraps Hugging Face datasets
Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import math
import torch
import torch.distributed as dist
from PIL import Image
try:
import datasets
except ImportError as e:
print("Please install Hugging Face datasets package `pip install datasets`.")
exit(1)
from .parser import Parser
def get_class_labels(info):
if 'label' not in info.features:
return {}
class_label = info.features['label']
class_to_idx = {n: class_label.str2int(n) for n in class_label.names}
return class_to_idx
class ParserHfds(Parser):
def __init__(
self,
root,
name,
split='train',
class_map=None,
download=False,
):
"""
"""
super().__init__()
self.root = root
self.split = split
self.dataset = datasets.load_dataset(
name, # 'name' maps to path arg in hf datasets
split=split,
cache_dir=self.root, # timm doesn't expect hidden cache dir for datasets, specify a path
#use_auth_token=True,
)
# leave decode for caller, plus we want easy access to original path names...
self.dataset = self.dataset.cast_column('image', datasets.Image(decode=False))
self.class_to_idx = get_class_labels(self.dataset.info)
self.split_info = self.dataset.info.splits[split]
self.num_examples = self.split_info.num_examples
def __getitem__(self, index):
item = self.dataset[index]
image = item['image']
if 'bytes' in image and image['bytes']:
image = io.BytesIO(image['bytes'])
else:
assert 'path' in image and image['path']
image = open(image['path'], 'rb')
return image, item['label']
def __len__(self):
return len(self.dataset)
def _filename(self, index, basename=False, absolute=False):
item = self.dataset[index]
return item['image']['path']
|
Add initial Hugging Face Datasets parser impl.""" Dataset parser interface that wraps Hugging Face datasets
Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import math
import torch
import torch.distributed as dist
from PIL import Image
try:
import datasets
except ImportError as e:
print("Please install Hugging Face datasets package `pip install datasets`.")
exit(1)
from .parser import Parser
def get_class_labels(info):
if 'label' not in info.features:
return {}
class_label = info.features['label']
class_to_idx = {n: class_label.str2int(n) for n in class_label.names}
return class_to_idx
class ParserHfds(Parser):
def __init__(
self,
root,
name,
split='train',
class_map=None,
download=False,
):
"""
"""
super().__init__()
self.root = root
self.split = split
self.dataset = datasets.load_dataset(
name, # 'name' maps to path arg in hf datasets
split=split,
cache_dir=self.root, # timm doesn't expect hidden cache dir for datasets, specify a path
#use_auth_token=True,
)
# leave decode for caller, plus we want easy access to original path names...
self.dataset = self.dataset.cast_column('image', datasets.Image(decode=False))
self.class_to_idx = get_class_labels(self.dataset.info)
self.split_info = self.dataset.info.splits[split]
self.num_examples = self.split_info.num_examples
def __getitem__(self, index):
item = self.dataset[index]
image = item['image']
if 'bytes' in image and image['bytes']:
image = io.BytesIO(image['bytes'])
else:
assert 'path' in image and image['path']
image = open(image['path'], 'rb')
return image, item['label']
def __len__(self):
return len(self.dataset)
def _filename(self, index, basename=False, absolute=False):
item = self.dataset[index]
return item['image']['path']
|
<commit_before><commit_msg>Add initial Hugging Face Datasets parser impl.<commit_after>""" Dataset parser interface that wraps Hugging Face datasets
Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import math
import torch
import torch.distributed as dist
from PIL import Image
try:
import datasets
except ImportError as e:
print("Please install Hugging Face datasets package `pip install datasets`.")
exit(1)
from .parser import Parser
def get_class_labels(info):
if 'label' not in info.features:
return {}
class_label = info.features['label']
class_to_idx = {n: class_label.str2int(n) for n in class_label.names}
return class_to_idx
class ParserHfds(Parser):
def __init__(
self,
root,
name,
split='train',
class_map=None,
download=False,
):
"""
"""
super().__init__()
self.root = root
self.split = split
self.dataset = datasets.load_dataset(
name, # 'name' maps to path arg in hf datasets
split=split,
cache_dir=self.root, # timm doesn't expect hidden cache dir for datasets, specify a path
#use_auth_token=True,
)
# leave decode for caller, plus we want easy access to original path names...
self.dataset = self.dataset.cast_column('image', datasets.Image(decode=False))
self.class_to_idx = get_class_labels(self.dataset.info)
self.split_info = self.dataset.info.splits[split]
self.num_examples = self.split_info.num_examples
def __getitem__(self, index):
item = self.dataset[index]
image = item['image']
if 'bytes' in image and image['bytes']:
image = io.BytesIO(image['bytes'])
else:
assert 'path' in image and image['path']
image = open(image['path'], 'rb')
return image, item['label']
def __len__(self):
return len(self.dataset)
def _filename(self, index, basename=False, absolute=False):
item = self.dataset[index]
return item['image']['path']
|
|
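For context on how the parser added above is meant to be consumed, a rough usage sketch follows. It assumes the datasets package is installed and a Hub dataset whose schema exposes 'image' and 'label' columns (the name 'mnist' is used here only as a plausible example); this is not documented timm API, just an illustration of the class as written.
from timm.data.parsers.parser_hfds import ParserHfds

parser = ParserHfds(root='./hf_cache', name='mnist', split='train')
print(len(parser), 'examples across', len(parser.class_to_idx), 'classes')

# Each item is a file-like object (embedded bytes or an opened path) plus an
# integer label; the actual image decoding is left to the caller.
img_file, label = parser[0]
print(label)
img_file.close()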
626832a2e65635a0e47d0b01fbc8d49c9fcf9952
|
rpc_flush_datastack.py
|
rpc_flush_datastack.py
|
#==============================================================================
# rpc_flush_datastack.py
# Python script that flushes a dataport for a device
#
# IMPORTANT NOTE!!: This will remove all data in a dataport (data source)
# USE AT YOUR OWN RISK
#
#
#==============================================================================
## Tested with python 2.6.5
##
## Copyright (c) 2010, Exosite LLC
## All rights reserved.
##
## For License see LICENSE file
import socket
import sys
try:
if sys.version_info < (2 , 6):
json_module= 'python-simplejson'
import simplejson as json
else:
json_module= 'python-json'
import json
except ImportError:
print "The package '%s' is required." % json_module
sys.exit(1)
HOST = 'm2.exosite.com'
PORT = 80
DEVICECIK = 'YOUR CIK HERE'
ALIAS = 'YOUR ALIAS HERE'
RESOURCEID = {"alias":ALIAS}
#RESOURCEID = 'DATA STACK RID' # Use instead of ALIAS
ARGUMENTS = [RESOURCEID]
PROCEDURE = "flush"
CALLREQUEST1 = {"id" : 1, "procedure":PROCEDURE, "arguments":ARGUMENTS}
CALLS = [CALLREQUEST1]
AUTH = { "cik" : DEVICECIK }
RPC = { "auth" : AUTH, "calls":CALLS}
json_rpc = json.dumps(RPC)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('POST /api:v1/rpc/process HTTP/1.1\r\n')
s.send('Host: m2.exosite.com\r\n')
s.send('Content-Type: application/json; charset=utf-8\r\n')
body = json_rpc
s.send('Content-Length: '+ str(len(body)) +'\r\n\r\n')
s.send(body)
data = s.recv(1024)
s.close()
print 'Received: \r\n', str(data)
|
Add flush data stack example utility file
|
Add flush data stack example utility file
|
Python
|
bsd-3-clause
|
exosite-garage/utility_scripts
|
Add flush data stack example utility file
|
#==============================================================================
# rpc_flush_datastack.py
# Python script that flushes a dataport for a device
#
# IMPORTANT NOTE!!: This will remove all data in a dataport (data source)
# USE AT YOUR OWN RISK
#
#
#==============================================================================
## Tested with python 2.6.5
##
## Copyright (c) 2010, Exosite LLC
## All rights reserved.
##
## For License see LICENSE file
import socket
import sys
try:
if sys.version_info < (2 , 6):
json_module= 'python-simplejson'
import simplejson as json
else:
json_module= 'python-json'
import json
except ImportError:
print "The package '%s' is required." % json_module
sys.exit(1)
HOST = 'm2.exosite.com'
PORT = 80
DEVICECIK = 'YOUR CIK HERE'
ALIAS = 'YOUR ALIAS HERE'
RESOURCEID = {"alias":ALIAS}
#RESOURCEID = 'DATA STACK RID' # Use instead of ALIAS
ARGUMENTS = [RESOURCEID]
PROCEDURE = "flush"
CALLREQUEST1 = {"id" : 1, "procedure":PROCEDURE, "arguments":ARGUMENTS}
CALLS = [CALLREQUEST1]
AUTH = { "cik" : DEVICECIK }
RPC = { "auth" : AUTH, "calls":CALLS}
json_rpc = json.dumps(RPC)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('POST /api:v1/rpc/process HTTP/1.1\r\n')
s.send('Host: m2.exosite.com\r\n')
s.send('Content-Type: application/json; charset=utf-8\r\n')
body = json_rpc
s.send('Content-Length: '+ str(len(body)) +'\r\n\r\n')
s.send(body)
data = s.recv(1024)
s.close()
print 'Received: \r\n', str(data)
|
<commit_before><commit_msg>Add flush data stack example utility file<commit_after>
|
#==============================================================================
# rpc_flush_datastack.py
# Python script that flushes a dataport for a device
#
# IMPORTANT NOTE!!: This will remove all data in a dataport (data source)
# USE AT YOUR OWN RISK
#
#
#==============================================================================
## Tested with python 2.6.5
##
## Copyright (c) 2010, Exosite LLC
## All rights reserved.
##
## For License see LICENSE file
import socket
import sys
try:
if sys.version_info < (2 , 6):
json_module= 'python-simplejson'
import simplejson as json
else:
json_module= 'python-json'
import json
except ImportError:
print "The package '%s' is required." % json_module
sys.exit(1)
HOST = 'm2.exosite.com'
PORT = 80
DEVICECIK = 'YOUR CIK HERE'
ALIAS = 'YOUR ALIAS HERE'
RESOURCEID = {"alias":ALIAS}
#RESOURCEID = 'DATA STACK RID' # Use instead of ALIAS
ARGUMENTS = [RESOURCEID]
PROCEDURE = "flush"
CALLREQUEST1 = {"id" : 1, "procedure":PROCEDURE, "arguments":ARGUMENTS}
CALLS = [CALLREQUEST1]
AUTH = { "cik" : DEVICECIK }
RPC = { "auth" : AUTH, "calls":CALLS}
json_rpc = json.dumps(RPC)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('POST /api:v1/rpc/process HTTP/1.1\r\n')
s.send('Host: m2.exosite.com\r\n')
s.send('Content-Type: application/json; charset=utf-8\r\n')
body = json_rpc
s.send('Content-Length: '+ str(len(body)) +'\r\n\r\n')
s.send(body)
data = s.recv(1024)
s.close()
print 'Received: \r\n', str(data)
|
Add flush data stack example utility file#==============================================================================
# rpc_flush_datastack.py
# Python script that flushes a dataport for a device
#
# IMPORTANT NOTE!!: This will remove all data in a dataport (data source)
# USE AT YOUR OWN RISK
#
#
#==============================================================================
## Tested with python 2.6.5
##
## Copyright (c) 2010, Exosite LLC
## All rights reserved.
##
## For License see LICENSE file
import socket
import sys
try:
if sys.version_info < (2 , 6):
json_module= 'python-simplejson'
import simplejson as json
else:
json_module= 'python-json'
import json
except ImportError:
print "The package '%s' is required." % json_module
sys.exit(1)
HOST = 'm2.exosite.com'
PORT = 80
DEVICECIK = 'YOUR CIK HERE'
ALIAS = 'YOUR ALIAS HERE'
RESOURCEID = {"alias":ALIAS}
#RESOURCEID = 'DATA STACK RID' # Use instead of ALIAS
ARGUMENTS = [RESOURCEID]
PROCEDURE = "flush"
CALLREQUEST1 = {"id" : 1, "procedure":PROCEDURE, "arguments":ARGUMENTS}
CALLS = [CALLREQUEST1]
AUTH = { "cik" : DEVICECIK }
RPC = { "auth" : AUTH, "calls":CALLS}
json_rpc = json.dumps(RPC)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('POST /api:v1/rpc/process HTTP/1.1\r\n')
s.send('Host: m2.exosite.com\r\n')
s.send('Content-Type: application/json; charset=utf-8\r\n')
body = json_rpc
s.send('Content-Length: '+ str(len(body)) +'\r\n\r\n')
s.send(body)
data = s.recv(1024)
s.close()
print 'Received: \r\n', str(data)
|
<commit_before><commit_msg>Add flush data stack example utility file<commit_after>#==============================================================================
# rpc_flush_datastack.py
# Python script that flushes a dataport for a device
#
# IMPORTANT NOTE!!: This will remove all data in a dataport (data source)
# USE AT YOUR OWN RISK
#
#
#==============================================================================
## Tested with python 2.6.5
##
## Copyright (c) 2010, Exosite LLC
## All rights reserved.
##
## For License see LICENSE file
import socket
import sys
try:
if sys.version_info < (2 , 6):
json_module= 'python-simplejson'
import simplejson as json
else:
json_module= 'python-json'
import json
except ImportError:
print "The package '%s' is required." % json_module
sys.exit(1)
HOST = 'm2.exosite.com'
PORT = 80
DEVICECIK = 'YOUR CIK HERE'
ALIAS = 'YOUR ALIAS HERE'
RESOURCEID = {"alias":ALIAS}
#RESOURCEID = 'DATA STACK RID' # Use instead of ALIAS
ARGUMENTS = [RESOURCEID]
PROCEDURE = "flush"
CALLREQUEST1 = {"id" : 1, "procedure":PROCEDURE, "arguments":ARGUMENTS}
CALLS = [CALLREQUEST1]
AUTH = { "cik" : DEVICECIK }
RPC = { "auth" : AUTH, "calls":CALLS}
json_rpc = json.dumps(RPC)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
s.send('POST /api:v1/rpc/process HTTP/1.1\r\n')
s.send('Host: m2.exosite.com\r\n')
s.send('Content-Type: application/json; charset=utf-8\r\n')
body = json_rpc
s.send('Content-Length: '+ str(len(body)) +'\r\n\r\n')
s.send(body)
data = s.recv(1024)
s.close()
print 'Received: \r\n', str(data)
|
|
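The utility above drives the Exosite RPC endpoint with a hand-rolled HTTP request over a raw socket. For readers who prefer a higher-level client, the same flush call could plausibly be issued with the requests library as sketched below; the host, path, headers and payload structure are taken from the script itself, while the CIK and alias remain placeholders.
import json
import requests

rpc = {
    "auth": {"cik": "YOUR CIK HERE"},
    "calls": [{
        "id": 1,
        "procedure": "flush",                    # removes ALL data in the dataport
        "arguments": [{"alias": "YOUR ALIAS HERE"}],
    }],
}

response = requests.post(
    "http://m2.exosite.com/api:v1/rpc/process",
    data=json.dumps(rpc),
    headers={"Content-Type": "application/json; charset=utf-8"},
)
print(response.status_code)
print(response.text)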
bc7c8e879e6643f240f98503415a28493c19d181
|
scripts/get_services_to_deploy.py
|
scripts/get_services_to_deploy.py
|
#!/usr/bin/env python
import requests
import json
import sys
import argparse
parser = argparse.ArgumentParser('Queries Monorail to get the services and deployNotes where a list of Pull requests will be deployed')
parser.add_argument('--url', help='URL where Monorail is located')
parser.add_argument('--pr', help='List of pull requests we will ask for')
args = parser.parse_args()
curl_uri = '/deploy-info?pr='
if (args.url):
if args.url.startswith('http'):
request_url = args.url+curl_uri
else:
request_url = 'http://'+args.url+curl_uri
else:
print 'We need an URL'
exit(1)
if (args.pr):
pull_requests = args.pr.split()
pull_requests = ",".join(pull_requests)
else:
print 'A list of PRs is needed'
exit(1)
try:
r = requests.get(request_url+pull_requests)
except:
print 'Something went wrong querying Monorail'
exit(2)
json_list = r.json()
services=json_list['services']
deploynotes=json_list['deployNotes']
if deploynotes == 'True':
print 'There are deployNotes so we cannot continue'
exit(3)
if len(services) == 0:
print 'There are no services defined so we cannot continue'
exit(4)
exit (0)
|
Add script that gets the services and deployNotes where a list of Pull requests will be deployed
|
Add script that gets the services and deployNotes where a list of Pull requests will be deployed
|
Python
|
mit
|
AudienseCo/monorail,AudienseCo/monorail
|
Add script that gets the services and deployNotes where a list of Pull requests will be deployed
|
#!/usr/bin/env python
import requests
import json
import sys
import argparse
parser = argparse.ArgumentParser('Queries Monorail to get the services and deployNotes where a list of Pull requests will be deployed')
parser.add_argument('--url', help='URL where Monorail is located')
parser.add_argument('--pr', help='List of pull requests we will ask for')
args = parser.parse_args()
curl_uri = '/deploy-info?pr='
if (args.url):
if args.url.startswith('http'):
request_url = args.url+curl_uri
else:
request_url = 'http://'+args.url+curl_uri
else:
print 'We need an URL'
exit(1)
if (args.pr):
pull_requests = args.pr.split()
pull_requests = ",".join(pull_requests)
else:
print 'A list of PRs is needed'
exit(1)
try:
r = requests.get(request_url+pull_requests)
except:
print 'Something went wrong querying Monorail'
exit(2)
json_list = r.json()
services=json_list['services']
deploynotes=json_list['deployNotes']
if deploynotes == 'True':
print 'There are deployNotes so we cannot continue'
exit(3)
if len(services) == 0:
print 'There are no services defined so we cannot continue'
exit(4)
exit (0)
|
<commit_before><commit_msg>Add script that gets the services and deployNotes where a list of Pull requests will be deployed<commit_after>
|
#!/usr/bin/env python
import requests
import json
import sys
import argparse
parser = argparse.ArgumentParser('Queries Monorail to get the services and deployNotes where a list of Pull requests will be deployed')
parser.add_argument('--url', help='URL where Monorail is located')
parser.add_argument('--pr', help='List of pull requests we will ask for')
args = parser.parse_args()
curl_uri = '/deploy-info?pr='
if (args.url):
if args.url.startswith('http'):
request_url = args.url+curl_uri
else:
request_url = 'http://'+args.url+curl_uri
else:
print 'We need an URL'
exit(1)
if (args.pr):
pull_requests = args.pr.split()
pull_requests = ",".join(pull_requests)
else:
print 'A list of PRs is needed'
exit(1)
try:
r = requests.get(request_url+pull_requests)
except:
print 'Something went wrong querying Monorail'
exit(2)
json_list = r.json()
services=json_list['services']
deploynotes=json_list['deployNotes']
if deploynotes == 'True':
print 'There are deployNotes so we cannot continue'
exit(3)
if len(services) == 0:
print 'There are no services defined so we cannot continue'
exit(4)
exit (0)
|
Add script that gets the services and deployNotes where a list of Pull requests will be deployed#!/usr/bin/env python
import requests
import json
import sys
import argparse
parser = argparse.ArgumentParser('Queries Monorail to get the services and deployNotes where a list of Pull requests will be deployed')
parser.add_argument('--url', help='URL where Monorail is located')
parser.add_argument('--pr', help='List of pull requests we will ask for')
args = parser.parse_args()
curl_uri = '/deploy-info?pr='
if (args.url):
if args.url.startswith('http'):
request_url = args.url+curl_uri
else:
request_url = 'http://'+args.url+curl_uri
else:
print 'We need an URL'
exit(1)
if (args.pr):
pull_requests = args.pr.split()
pull_requests = ",".join(pull_requests)
else:
print 'A list of PRs is needed'
exit(1)
try:
r = requests.get(request_url+pull_requests)
except:
print 'Something went wrong querying Monorail'
exit(2)
json_list = r.json()
services=json_list['services']
deploynotes=json_list['deployNotes']
if deploynotes == 'True':
print 'There are deployNotes so we cannot continue'
exit(3)
if len(services) == 0:
print 'There are no services defined so we cannot continue'
exit(4)
exit (0)
|
<commit_before><commit_msg>Add script that gets the services and deployNotes where a list of Pull requests will be deployed<commit_after>#!/usr/bin/env python
import requests
import json
import sys
import argparse
parser = argparse.ArgumentParser('Queries Monorail to get the services and deployNotes where a list of Pull requests will be deployed')
parser.add_argument('--url', help='URL where Monorail is located')
parser.add_argument('--pr', help='List of pull requests we will ask for')
args = parser.parse_args()
curl_uri = '/deploy-info?pr='
if (args.url):
if args.url.startswith('http'):
request_url = args.url+curl_uri
else:
request_url = 'http://'+args.url+curl_uri
else:
print 'We need an URL'
exit(1)
if (args.pr):
pull_requests = args.pr.split()
pull_requests = ",".join(pull_requests)
else:
print 'A list of PRs is needed'
exit(1)
try:
r = requests.get(request_url+pull_requests)
except:
print 'Something went wrong querying Monorail'
exit(2)
json_list = r.json()
services=json_list['services']
deploynotes=json_list['deployNotes']
if deploynotes == 'True':
print 'There are deployNotes so we cannot continue'
exit(3)
if len(services) == 0:
print 'There are no services defined so we cannot continue'
exit(4)
exit (0)
|
|
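The script above only documents the Monorail reply implicitly, through the keys it reads. Reconstructed from that logic, a /deploy-info response is presumably shaped like the sketch below; the service names are placeholders, and note that deployNotes is compared against the string 'True' rather than a boolean.
example_reply = {
    "services": ["task-processor", "api-gateway"],   # placeholder service names
    "deployNotes": "False",
}

# Exit codes derived from the script: 3 if deployNotes == 'True',
# 4 if the services list is empty, 0 otherwise.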
4a07f1c9c36ecd47e3fce1738c87d42615d53f8c
|
odl/operator/utility.py
|
odl/operator/utility.py
|
# Copyright 2014, 2015 The ODL development group
#
# This file is part of ODL.
#
# ODL is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ODL is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ODL. If not, see <http://www.gnu.org/licenses/>.
# Imports for common Python 2/3 codebase
from __future__ import print_function, division, absolute_import
from future import standard_library
standard_library.install_aliases()
# External
import numpy as np
# Internal
from odl.space.base_ntuples import FnBase
from odl.set.pspace import ProductSpace
def matrix_representation(op):
"""Returns a matrix representation of a linear operator.
Parameters
----------
op : :class:`~odl.Operator`
The linear operator of which one wants a matrix representation.
Returns
----------
matrix : `numpy.ndarray`
The matrix representation of the operator.
Notes
----------
The algorithm works by letting the operator act on all unit vectors, and
stacking the output as a matrix.
"""
if not op.is_linear:
print('WARNING: The operator is not linear; cannot produce matrix',
'representation of it.')
return
if not isinstance(op.domain, FnBase) or isinstance(op.domain, ProductSpace):
print('WARNING: The operator domain is not a discrete or product space;',
'cannot produce matrix representation of it.')
return
if not isinstance(op.range, FnBase):
print('WARNING: The operator range is not discrete; cannot produce',
'matrix representation of it.')
return
n = op.range.size
m = op.domain.size
matrix = np.zeros([n, m])
v = op.domain.element()
tmp = op.range.element()
for i in range(m):
v.set_zero()
v[i] = 1.0
matrix[:,i] = op(v, out=tmp)
return matrix
|
Add a function for matrix representation.
|
DEV: Add a function for matrix representation.
See issue #49.
|
Python
|
mpl-2.0
|
aringh/odl,odlgroup/odl,odlgroup/odl,kohr-h/odl,kohr-h/odl,aringh/odl
|
DEV: Add a function for matrix representation.
See issue #49.
|
# Copyright 2014, 2015 The ODL development group
#
# This file is part of ODL.
#
# ODL is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ODL is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ODL. If not, see <http://www.gnu.org/licenses/>.
# Imports for common Python 2/3 codebase
from __future__ import print_function, division, absolute_import
from future import standard_library
standard_library.install_aliases()
# External
import numpy as np
# Internal
from odl.space.base_ntuples import FnBase
from odl.set.pspace import ProductSpace
def matrix_representation(op):
"""Returns a matrix representation of a linear operator.
Parameters
----------
op : :class:`~odl.Operator`
The linear operator of which one wants a matrix representation.
Returns
----------
matrix : `numpy.ndarray`
The matrix representation of the operator.
Notes
----------
The algorithm works by letting the operator act on all unit vectors, and
stacking the output as a matrix.
"""
if not op.is_linear:
print('WARNING: The operator is not linear; cannot produce matrix',
'representation of it.')
return
if not isinstance(op.domain, FnBase) or isinstance(op.domain, ProductSpace):
print('WARNING: The operator domain is not a discrete or product space;',
'cannot produce matrix representation of it.')
return
if not isinstance(op.range, FnBase):
print('WARNING: The operator range is not discrete; cannot produce',
'matrix representation of it.')
return
n = op.range.size
m = op.domain.size
matrix = np.zeros([n, m])
v = op.domain.element()
tmp = op.range.element()
for i in range(m):
v.set_zero()
v[i] = 1.0
matrix[:,i] = op(v, out=tmp)
return matrix
|
<commit_before><commit_msg>DEV: Add a function for matrix representation.
See issue #49.<commit_after>
|
# Copyright 2014, 2015 The ODL development group
#
# This file is part of ODL.
#
# ODL is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ODL is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ODL. If not, see <http://www.gnu.org/licenses/>.
# Imports for common Python 2/3 codebase
from __future__ import print_function, division, absolute_import
from future import standard_library
standard_library.install_aliases()
# External
import numpy as np
# Internal
from odl.space.base_ntuples import FnBase
from odl.set.pspace import ProductSpace
def matrix_representation(op):
"""Returns a matrix representation of a linear operator.
Parameters
----------
op : :class:`~odl.Operator`
The linear operator of which one wants a matrix representation.
Returns
----------
matrix : `numpy.ndarray`
The matrix representation of the operator.
Notes
----------
The algorithm works by letting the operator act on all unit vectors, and
stacking the output as a matrix.
"""
if not op.is_linear:
print('WARNING: The operator is not linear; cannot produce matrix',
'representation of it.')
return
if not isinstance(op.domain, FnBase) or isinstance(op.domain, ProductSpace):
print('WARNING: The operator domain is not discrete or product space;',
'cannot produce matrix representation of it.')
return
if not isinstance(op.range, FnBase):
print('WARNING: The operator range is not discrete; cannot produce',
'matrix representation of it.')
return
n = op.range.size
m = op.domain.size
matrix = np.zeros([n, m])
v = op.domain.element()
tmp = op.range.element()
for i in range(m):
v.set_zero()
v[i] = 1.0
matrix[:,i] = op(v, out=tmp)
return matrix
|
DEV: Add a function for matrix representation.
See issue #49.# Copyright 2014, 2015 The ODL development group
#
# This file is part of ODL.
#
# ODL is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ODL is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ODL. If not, see <http://www.gnu.org/licenses/>.
# Imports for common Python 2/3 codebase
from __future__ import print_function, division, absolute_import
from future import standard_library
standard_library.install_aliases()
# External
import numpy as np
# Internal
from odl.space.base_ntuples import FnBase
from odl.set.pspace import ProductSpace
def matrix_representation(op):
"""Returns a matrix representation of a linear operator.
Parameters
----------
op : :class:`~odl.Operator`
The linear operator of which one wants a matrix representation.
Returns
----------
matrix : `numpy.ndarray`
The matrix representation of the operator.
Notes
----------
The algorithm works by letting the operator act on all unit vectors, and
stacking the output as a matrix.
"""
if not op.is_linear:
print('WARNING: The operator is not linear; cannot produce matrix',
'representation of it.')
return
if not isinstance(op.domain, FnBase) or isinstance(op.domain, ProductSpace):
print('WARNING: The operator domain is not discrete or product space;',
'cannot produce matrix representation of it.')
return
if not isinstance(op.range, FnBase):
print('WARNING: The operator range is not discrete; cannot produce',
'matrix representation of it.')
return
n = op.range.size
m = op.domain.size
matrix = np.zeros([n, m])
v = op.domain.element()
tmp = op.range.element()
for i in range(m):
v.set_zero()
v[i] = 1.0
matrix[:,i] = op(v, out=tmp)
return matrix
|
<commit_before><commit_msg>DEV: Add a function for matrix representation.
See issue #49.<commit_after># Copyright 2014, 2015 The ODL development group
#
# This file is part of ODL.
#
# ODL is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ODL is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ODL. If not, see <http://www.gnu.org/licenses/>.
# Imports for common Python 2/3 codebase
from __future__ import print_function, division, absolute_import
from future import standard_library
standard_library.install_aliases()
# External
import numpy as np
# Internal
from odl.space.base_ntuples import FnBase
from odl.set.pspace import ProductSpace
def matrix_representation(op):
"""Returns a matrix representation of a linear operator.
Parameters
----------
op : :class:`~odl.Operator`
The linear operator of which one wants a matrix representation.
Returns
----------
matrix : `numpy.ndarray`
The matrix representation of the operator.
Notes
----------
The algorithm works by letting the operator act on all unit vectors, and
stacking the output as a matrix.
"""
if not op.is_linear:
print('WARNING: The operator is not linear; cannot produce matrix',
'representation of it.')
return
if not isinstance(op.domain, FnBase) or isinstance(op.domain, ProductSpace):
print('WARNING: The operator domain is not discrete or product space;',
'cannot produce matrix representation of it.')
return
if not isinstance(op.range, FnBase):
print('WARNING: The operator range is not discrete; cannot produce',
'matrix representation of it.')
return
n = op.range.size
m = op.domain.size
matrix = np.zeros([n, m])
v = op.domain.element()
tmp = op.range.element()
for i in range(m):
v.set_zero()
v[i] = 1.0
matrix[:,i] = op(v, out=tmp)
return matrix
|
|
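A minimal, self-contained NumPy sketch of the unit-vector approach described in the docstring above; it does not use ODL, and apply_op is an assumed stand-in for any linear map.
import numpy as np
def dense_matrix_of(apply_op, m, n):
    """Recover the n-by-m matrix of a linear map R^m -> R^n column by column."""
    matrix = np.zeros((n, m))
    for i in range(m):
        unit = np.zeros(m)
        unit[i] = 1.0
        matrix[:, i] = apply_op(unit)  # column i is the action on the i-th unit vector
    return matrix
# Sanity check: recover a known matrix from its action.
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
assert np.allclose(dense_matrix_of(lambda x: A.dot(x), m=2, n=3), A)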
2b51cbe6204abf12d1eca023a0784a1c903c244d
|
data/scripts/load_chuls.py
|
data/scripts/load_chuls.py
|
import os
import csv
import json
from django.conf import settings
chul_file = os.path.join(
settings.BASE_DIR, 'data/csvs/chul.csv')
table_columns = [
"CommUnitId", "Cu_code", "CommUnitName", "Date_CU_Established",
"Date_CU_Operational", "CuLocation", "Link_Facility_Code",
"CU_OfficialMobile", "CU_OfficialEmail", "Chew_Facility", "Chew_In_Charge",
"NumHouseholds", "CUStatus", "Approved", "ApprovedDate", "ApprovedBy",
"ApprovalComments", "isEdit", "UnitRecordArchived", "Date_added",
"Date_modified", "Delete_comments"]
def create_chuls_file():
chul = []
chews = []
chu_contacts = []
with open(chul_file, 'r') as csv_file:
chul_reader = csv.reader(csv_file)
for row in chul_reader:
code = row[1]
name = row[2]
date_established = row[3]
facility_code = row[7]
households_monitored = row[11]
status = row[12]
# approval = row[13]
# contacts
mobile = row[7]
mobile_dict = {
"contact": mobile,
"contact_type": {
"PHONE"
}
}
email = row[8]
email_dict = {
"contact": email,
"contact_type": {
"EMAIL"
}
}
if email_dict not in chu_contacts:
chu_contacts.append(email_dict)
if mobile_dict not in chu_contacts:
chu_contacts.append(mobile_dict)
# chew
first_name = row[10]
chu = {
"code": code,
"name": name,
"date_established": date_established,
"facility": {
"code": facility_code
},
"households_monitored": households_monitored,
"status": status,
}
if chu not in chul:
chul.append(chu)
chew = {
"first_name": first_name,
"heath_unit": {
"code": code
}
}
if chew not in chews:
chews.append(chew)
return chul, chews
def write_file(file_name, data):
if os.path.exists(file_name):
os.remove(file_name)
fac_file = open(file_name, 'w+')
del data[0]
dumped_data = ""
try:
dumped_data = json.dumps(data)
except:
print data
raise
fac_file.write(dumped_data)
def write_chuls_and_chews():
chus, chews = create_chuls_file()
write_file('chul.txt', chus)
write_file('chew.txt', chews)
|
Add script to read and load chus and chews
|
Add script to read and load chus and chews
|
Python
|
mit
|
MasterFacilityList/mfl_api,MasterFacilityList/mfl_api,MasterFacilityList/mfl_api,urandu/mfl_api,MasterFacilityList/mfl_api,urandu/mfl_api,urandu/mfl_api,MasterFacilityList/mfl_api,urandu/mfl_api
|
Add script to read and load chus and chews
|
import os
import csv
import json
from django.conf import settings
chul_file = os.path.join(
settings.BASE_DIR, 'data/csvs/chul.csv')
table_columns = [
"CommUnitId", "Cu_code", "CommUnitName", "Date_CU_Established",
"Date_CU_Operational", "CuLocation", "Link_Facility_Code",
"CU_OfficialMobile", "CU_OfficialEmail", "Chew_Facility", "Chew_In_Charge",
"NumHouseholds", "CUStatus", "Approved", "ApprovedDate", "ApprovedBy",
"ApprovalComments", "isEdit", "UnitRecordArchived", "Date_added",
"Date_modified", "Delete_comments"]
def create_chuls_file():
chul = []
chews = []
chu_contacts = []
with open(chul_file, 'r') as csv_file:
chul_reader = csv.reader(csv_file)
for row in chul_reader:
code = row[1]
name = row[2]
date_established = row[3]
facility_code = row[7]
households_monitored = row[11]
status = row[12]
# approval = row[13]
# contacts
mobile = row[7]
mobile_dict = {
"contact": mobile,
"contact_type": {
"PHONE"
}
}
email = row[8]
email_dict = {
"contact": email,
"contact_type": {
"EMAIL"
}
}
if email_dict not in chu_contacts:
chu_contacts.append(email_dict)
if mobile_dict not in chu_contacts:
chu_contacts.append(mobile_dict)
# chew
first_name = row[10]
chu = {
"code": code,
"name": name,
"date_established": date_established,
"facility": {
"code": facility_code
},
"households_monitored": households_monitored,
"status": status,
}
if chu not in chul:
chul.append(chu)
chew = {
"first_name": first_name,
"heath_unit": {
"code": code
}
}
if chew not in chews:
chews.append(chew)
return chul, chews
def write_file(file_name, data):
if os.path.exists(file_name):
os.remove(file_name)
fac_file = open(file_name, 'w+')
del data[0]
dumped_data = ""
try:
dumped_data = json.dumps(data)
except:
print data
raise
fac_file.write(dumped_data)
def write_chuls_and_chews():
chus, chews = create_chuls_file()
write_file('chul.txt', chus)
write_file('chew.txt', chews)
|
<commit_before><commit_msg>Add script to read and load chus and chews<commit_after>
|
import os
import csv
import json
from django.conf import settings
chul_file = os.path.join(
settings.BASE_DIR, 'data/csvs/chul.csv')
table_columns = [
"CommUnitId", "Cu_code", "CommUnitName", "Date_CU_Established",
"Date_CU_Operational", "CuLocation", "Link_Facility_Code",
"CU_OfficialMobile", "CU_OfficialEmail", "Chew_Facility", "Chew_In_Charge",
"NumHouseholds", "CUStatus", "Approved", "ApprovedDate", "ApprovedBy",
"ApprovalComments", "isEdit", "UnitRecordArchived", "Date_added",
"Date_modified", "Delete_comments"]
def create_chuls_file():
chul = []
chews = []
chu_contacts = []
with open(chul_file, 'r') as csv_file:
chul_reader = csv.reader(csv_file)
for row in chul_reader:
code = row[1]
name = row[2]
date_established = row[3]
facility_code = row[7]
households_monitored = row[11]
status = row[12]
# approval = row[13]
# contacts
mobile = row[7]
mobile_dict = {
"contact": mobile,
"contact_type": {
"PHONE"
}
}
email = row[8]
email_dict = {
"contact": email,
"contact_type": {
"EMAIL"
}
}
if email_dict not in chu_contacts:
chu_contacts.append(email_dict)
if mobile_dict not in chu_contacts:
chu_contacts.append(mobile_dict)
# chew
first_name = row[10]
chu = {
"code": code,
"name": name,
"date_established": date_established,
"facility": {
"code": facility_code
},
"households_monitored": households_monitored,
"status": status,
}
if chu not in chul:
chul.append(chu)
chew = {
"first_name": first_name,
"heath_unit": {
"code": code
}
}
if chew not in chews:
chews.append(chew)
return chul, chews
def write_file(file_name, data):
if os.path.exists(file_name):
os.remove(file_name)
fac_file = open(file_name, 'w+')
del data[0]
dumped_data = ""
try:
dumped_data = json.dumps(data)
except:
print data
raise
fac_file.write(dumped_data)
def write_chuls_and_chews():
chus, chews = create_chuls_file()
write_file('chul.txt', chus)
write_file('chew.txt', chews)
|
Add script to read and load chus and chewsimport os
import csv
import json
from django.conf import settings
chul_file = os.path.join(
settings.BASE_DIR, 'data/csvs/chul.csv')
table_columns = [
"CommUnitId", "Cu_code", "CommUnitName", "Date_CU_Established",
"Date_CU_Operational", "CuLocation", "Link_Facility_Code",
"CU_OfficialMobile", "CU_OfficialEmail", "Chew_Facility", "Chew_In_Charge",
"NumHouseholds", "CUStatus", "Approved", "ApprovedDate", "ApprovedBy",
"ApprovalComments", "isEdit", "UnitRecordArchived", "Date_added",
"Date_modified", "Delete_comments"]
def create_chuls_file():
chul = []
chews = []
chu_contacts = []
with open(chul_file, 'r') as csv_file:
chul_reader = csv.reader(csv_file)
for row in chul_reader:
code = row[1]
name = row[2]
date_established = row[3]
facility_code = row[7]
households_monitored = row[11]
status = row[12]
# approval = row[13]
# contacts
mobile = row[7]
mobile_dict = {
"contact": mobile,
"contact_type": {
"PHONE"
}
}
email = row[8]
email_dict = {
"contact": email,
"contact_type": {
"EMAIL"
}
}
if email_dict not in chu_contacts:
chu_contacts.append(email_dict)
if mobile_dict not in chu_contacts:
chu_contacts.append(mobile_dict)
# chew
first_name = row[10]
chu = {
"code": code,
"name": name,
"date_established": date_established,
"facility": {
"code": facility_code
},
"households_monitored": households_monitored,
"status": status,
}
if chu not in chul:
chul.append(chu)
chew = {
"first_name": first_name,
"heath_unit": {
"code": code
}
}
if chew not in chews:
chews.append(chew)
return chul, chews
def write_file(file_name, data):
if os.path.exists(file_name):
os.remove(file_name)
fac_file = open(file_name, 'w+')
del data[0]
dumped_data = ""
try:
dumped_data = json.dumps(data)
except:
print data
raise
fac_file.write(dumped_data)
def write_chuls_and_chews():
chus, chews = create_chuls_file()
write_file('chul.txt', chus)
write_file('chew.txt', chews)
|
<commit_before><commit_msg>Add script to read and load chus and chews<commit_after>import os
import csv
import json
from django.conf import settings
chul_file = os.path.join(
settings.BASE_DIR, 'data/csvs/chul.csv')
table_columns = [
"CommUnitId", "Cu_code", "CommUnitName", "Date_CU_Established",
"Date_CU_Operational", "CuLocation", "Link_Facility_Code",
"CU_OfficialMobile", "CU_OfficialEmail", "Chew_Facility", "Chew_In_Charge",
"NumHouseholds", "CUStatus", "Approved", "ApprovedDate", "ApprovedBy",
"ApprovalComments", "isEdit", "UnitRecordArchived", "Date_added",
"Date_modified", "Delete_comments"]
def create_chuls_file():
chul = []
chews = []
chu_contacts = []
with open(chul_file, 'r') as csv_file:
chul_reader = csv.reader(csv_file)
for row in chul_reader:
code = row[1]
name = row[2]
date_established = row[3]
facility_code = row[7]
households_monitored = row[11]
status = row[12]
# approval = row[13]
# contacts
mobile = row[7]
mobile_dict = {
"contact": mobile,
"contact_type": {
"PHONE"
}
}
email = row[8]
email_dict = {
"contact": email,
"contact_type": {
"EMAIL"
}
}
if email_dict not in chu_contacts:
chu_contacts.append(email_dict)
if mobile_dict not in chu_contacts:
chu_contacts.append(mobile_dict)
# chew
first_name = row[10]
chu = {
"code": code,
"name": name,
"date_established": date_established,
"facility": {
"code": facility_code
},
"households_monitored": households_monitored,
"status": status,
}
if chu not in chul:
chul.append(chu)
chew = {
"first_name": first_name,
"heath_unit": {
"code": code
}
}
if chew not in chews:
chews.append(chew)
return chul, chews
def write_file(file_name, data):
if os.path.exists(file_name):
os.remove(file_name)
fac_file = open(file_name, 'w+')
del data[0]
dumped_data = ""
try:
dumped_data = json.dumps(data)
except:
print data
raise
fac_file.write(dumped_data)
def write_chuls_and_chews():
chus, chews = create_chuls_file()
write_file('chul.txt', chus)
write_file('chew.txt', chews)
|
|
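A hedged sketch of reading the two dumps back for inspection; only the file names and the single JSON array that write_file produces are assumed.
import json
def load_dump(file_name):
    # write_file() above stores one JSON array per file.
    with open(file_name) as handle:
        return json.load(handle)
chus = load_dump('chul.txt')
chews = load_dump('chew.txt')
print('community units: %d, chews: %d' % (len(chus), len(chews)))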
6b01cfd5b580354de811fa7266811bfb04e31086
|
migrations/versions/0092_data_gov_uk.py
|
migrations/versions/0092_data_gov_uk.py
|
"""empty message
Revision ID: 0092_data_gov_uk
Revises: 0091_letter_billing
Create Date: 2017-06-05 16:15:17.744908
"""
# revision identifiers, used by Alembic.
revision = '0092_data_gov_uk'
down_revision = '0091_letter_billing'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
DATA_GOV_UK_ID = '123496d4-44cb-4324-8e0a-4187101f4bdc'
def upgrade():
op.execute("""INSERT INTO organisation VALUES (
'{}',
'',
'data_gov_uk_x2.png',
''
)""".format(DATA_GOV_UK_ID))
def downgrade():
op.execute("""
DELETE FROM organisation WHERE "id" = '{}'
""".format(DATA_GOV_UK_ID))
|
Add data.gov.uk to the list of organisations
|
Add data.gov.uk to the list of organisations
We need to send an email with data.gov.uk branding.
The image for the logo doesn’t exist yet, but doing this migration so
we’re ready when the logo does exist.
|
Python
|
mit
|
alphagov/notifications-api,alphagov/notifications-api
|
Add data.gov.uk to the list of organisations
We need to send an email with data.gov.uk branding.
The image for the logo doesn’t exist yet, but doing this migration so
we’re ready when the logo does exist.
|
"""empty message
Revision ID: 0092_data_gov_uk
Revises: 0091_letter_billing
Create Date: 2017-06-05 16:15:17.744908
"""
# revision identifiers, used by Alembic.
revision = '0092_data_gov_uk'
down_revision = '0091_letter_billing'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
DATA_GOV_UK_ID = '123496d4-44cb-4324-8e0a-4187101f4bdc'
def upgrade():
op.execute("""INSERT INTO organisation VALUES (
'{}',
'',
'data_gov_uk_x2.png',
''
)""".format(DATA_GOV_UK_ID))
def downgrade():
op.execute("""
DELETE FROM organisation WHERE "id" = '{}'
""".format(DATA_GOV_UK_ID))
|
<commit_before><commit_msg>Add data.gov.uk to the list of organisations
We need to send an email with data.gov.uk branding.
The image for the logo doesn’t exist yet, but doing this migration so
we’re ready when the logo does exist.<commit_after>
|
"""empty message
Revision ID: 0092_data_gov_uk
Revises: 0091_letter_billing
Create Date: 2017-06-05 16:15:17.744908
"""
# revision identifiers, used by Alembic.
revision = '0092_data_gov_uk'
down_revision = '0091_letter_billing'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
DATA_GOV_UK_ID = '123496d4-44cb-4324-8e0a-4187101f4bdc'
def upgrade():
op.execute("""INSERT INTO organisation VALUES (
'{}',
'',
'data_gov_uk_x2.png',
''
)""".format(DATA_GOV_UK_ID))
def downgrade():
op.execute("""
DELETE FROM organisation WHERE "id" = '{}'
""".format(DATA_GOV_UK_ID))
|
Add data.gov.uk to the list of organisations
We need to send an email with data.gov.uk branding.
The image for the logo doesn’t exist yet, but doing this migration so
we’re ready when the logo does exist."""empty message
Revision ID: 0092_data_gov_uk
Revises: 0091_letter_billing
Create Date: 2017-06-05 16:15:17.744908
"""
# revision identifiers, used by Alembic.
revision = '0092_data_gov_uk'
down_revision = '0091_letter_billing'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
DATA_GOV_UK_ID = '123496d4-44cb-4324-8e0a-4187101f4bdc'
def upgrade():
op.execute("""INSERT INTO organisation VALUES (
'{}',
'',
'data_gov_uk_x2.png',
''
)""".format(DATA_GOV_UK_ID))
def downgrade():
op.execute("""
DELETE FROM organisation WHERE "id" = '{}'
""".format(DATA_GOV_UK_ID))
|
<commit_before><commit_msg>Add data.gov.uk to the list of organisations
We need to send an email with data.gov.uk branding.
The image for the logo doesn’t exist yet, but doing this migration so
we’re ready when the logo does exist.<commit_after>"""empty message
Revision ID: 0092_data_gov_uk
Revises: 0091_letter_billing
Create Date: 2017-06-05 16:15:17.744908
"""
# revision identifiers, used by Alembic.
revision = '0092_data_gov_uk'
down_revision = '0091_letter_billing'
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
DATA_GOV_UK_ID = '123496d4-44cb-4324-8e0a-4187101f4bdc'
def upgrade():
op.execute("""INSERT INTO organisation VALUES (
'{}',
'',
'data_gov_uk_x2.png',
''
)""".format(DATA_GOV_UK_ID))
def downgrade():
op.execute("""
DELETE FROM organisation WHERE "id" = '{}'
""".format(DATA_GOV_UK_ID))
|
|
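As a sketch only, the same upgrade step could pass values as bound parameters rather than formatting them into the SQL string; the meaning of the four positional columns is an assumption carried over from the migration above.
import sqlalchemy as sa
from alembic import op
DATA_GOV_UK_ID = '123496d4-44cb-4324-8e0a-4187101f4bdc'
def upgrade():
    # Same four-column row as above, supplied via bind parameters.
    op.execute(
        sa.text("INSERT INTO organisation VALUES (:id, :colour, :logo, :name)")
        .bindparams(id=DATA_GOV_UK_ID, colour='', logo='data_gov_uk_x2.png', name='')
    )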
8414ff6fa396117cc02dfea08098a9213889baad
|
proselint/checks/inprogress/capitalizationErrors.py
|
proselint/checks/inprogress/capitalizationErrors.py
|
# -*- coding: utf-8 -*-
"""MSC: Password in plain text.
---
layout: post
error_code: MSC
source: ???
source_url: ???
title: Capitalization of abbreviations
date: 2014-06-10 12:31:19
categories: writing
---
In Hybrid Zones, p 255 in a citation Hughes & Huges Systems Experts and Computers: The Systems Approach in Management and Engineering: World War Ii and After
World War Ii should have correct capitalization.
"""
from proselint.tools import blacklist, memoize
@memoize
def check(text):
err = "MSC104"
msg = u"Don't fail to capitalize roman numeral abbreviations."
# pwd_regex = "[:]? [\S]{6,30}"
password = [
"World War{}".format(pwd_regex),
"my password is{}".format(pwd_regex),
"the password's{}".format(pwd_regex),
"my password's{}".format(pwd_regex),
]
return blacklist(text, password, err, msg)
|
Add a capitalization check for world wars inspired by error in a bibliography
|
Add a capitalization check for world wars inspired by error in a bibliography
|
Python
|
bsd-3-clause
|
amperser/proselint,amperser/proselint,jstewmon/proselint,amperser/proselint,jstewmon/proselint,jstewmon/proselint,amperser/proselint,amperser/proselint
|
Add a capitalization check for world wars inspired by error in a bibliography
|
# -*- coding: utf-8 -*-
"""MSC: Password in plain text.
---
layout: post
error_code: MSC
source: ???
source_url: ???
title: Capitalization of abbreviations
date: 2014-06-10 12:31:19
categories: writing
---
In Hybrid Zones, p 255 in a citation Hughes & Huges Systems Experts and Computers: The Systems Approach in Management and Engineering: World War Ii and After
World War Ii should have correct capitalization.
"""
from proselint.tools import blacklist, memoize
@memoize
def check(text):
err = "MSC104"
msg = u"Don't fail to capitalize roman numeral abbreviations."
# pwd_regex = "[:]? [\S]{6,30}"
password = [
"World War{}".format(pwd_regex),
"my password is{}".format(pwd_regex),
"the password's{}".format(pwd_regex),
"my password's{}".format(pwd_regex),
]
return blacklist(text, password, err, msg)
|
<commit_before><commit_msg>Add a capitalization check for world wars inspired by error in a bibliography<commit_after>
|
# -*- coding: utf-8 -*-
"""MSC: Password in plain text.
---
layout: post
error_code: MSC
source: ???
source_url: ???
title: Capitalization of abbreviations
date: 2014-06-10 12:31:19
categories: writing
---
In Hybrid Zones, p 255 in a citation Hughes & Huges Systems Experts and Computers: The Systems Approach in Management and Engineering: World War Ii and After
World War Ii should have correct capitalization.
"""
from proselint.tools import blacklist, memoize
@memoize
def check(text):
err = "MSC104"
msg = u"Don't fail to capitalize roman numeral abbreviations."
# pwd_regex = "[:]? [\S]{6,30}"
password = [
"World War{}".format(pwd_regex),
"my password is{}".format(pwd_regex),
"the password's{}".format(pwd_regex),
"my password's{}".format(pwd_regex),
]
return blacklist(text, password, err, msg)
|
Add a capitalization check for world wars inspired by error in a bibliography# -*- coding: utf-8 -*-
"""MSC: Password in plain text.
---
layout: post
error_code: MSC
source: ???
source_url: ???
title: Capitalization of abbreviations
date: 2014-06-10 12:31:19
categories: writing
---
In Hybrid Zones, p 255 in a citation Hughes & Huges Systems Experts and Computers: The Systems Approach in Management and Engineering: World War Ii and After
World War Ii should have correct capitalization.
"""
from proselint.tools import blacklist, memoize
@memoize
def check(text):
err = "MSC104"
msg = u"Don't fail to capitalize roman numeral abbreviations."
# pwd_regex = "[:]? [\S]{6,30}"
password = [
"World War{}".format(pwd_regex),
"my password is{}".format(pwd_regex),
"the password's{}".format(pwd_regex),
"my password's{}".format(pwd_regex),
]
return blacklist(text, password, err, msg)
|
<commit_before><commit_msg>Add a capitalization check for world wars inspired by error in a bibliography<commit_after># -*- coding: utf-8 -*-
"""MSC: Password in plain text.
---
layout: post
error_code: MSC
source: ???
source_url: ???
title: Capitalization of abbreviations
date: 2014-06-10 12:31:19
categories: writing
---
In Hybrid Zones, p 255 in a citation Hughes & Huges Systems Experts and Computers: The Systems Approach in Management and Engineering: World War Ii and After
World War Ii should have correct capitalization.
"""
from proselint.tools import blacklist, memoize
@memoize
def check(text):
err = "MSC104"
msg = u"Don't fail to capitalize roman numeral abbreviations."
# pwd_regex = "[:]? [\S]{6,30}"
password = [
"World War{}".format(pwd_regex),
"my password is{}".format(pwd_regex),
"the password's{}".format(pwd_regex),
"my password's{}".format(pwd_regex),
]
return blacklist(text, password, err, msg)
|
|
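The check above is still a work in progress: the docstring and blacklist entries are carried over from a password check, and pwd_regex only exists in a commented-out line. As a standalone sketch of the underlying idea, independent of proselint's helpers, a plain regex can flag title-cased roman numerals such as 'World War Ii'.
import re
# Illustrative only; this is not proselint's check API.
BAD_NUMERAL = re.compile(r"\b(?:World War|Act|Part) (?:Ii|Iii|Iv|Vi|Vii|Viii|Ix)\b")
def find_bad_roman_numerals(text):
    return [(m.start(), m.group(0)) for m in BAD_NUMERAL.finditer(text)]
print(find_bad_roman_numerals("Systems Experts and Computers: World War Ii and After"))
# [(31, 'World War Ii')]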
9d3557234419fac0376e07d732f0f6788bf13a55
|
demos/hybrid-mixed/mixed-helmholtz.py
|
demos/hybrid-mixed/mixed-helmholtz.py
|
"""This demonstration generates the relevant code from the Slate
expressions. Note that this is for code only; it's not solving any
particular PDE with given data.
See the main hybrid-mixed folder for an actual solution to a
mixed system using hybridization and static condensation.
"""
from firedrake import *
mesh = UnitSquareMesh(8, 8, quadrilateral=False)
RT = FiniteElement("RT", triangle, 1)
DG = FiniteElement("DG", triangle, 0)
T = FiniteElement("HDiv Trace", triangle, 0)
U = FunctionSpace(mesh, RT)
V = FunctionSpace(mesh, DG)
M = FunctionSpace(mesh, T)
W = U * V
n = FacetNormal(mesh)
p0 = Function(V)
g = Function(V)
f = Function(V)
mu = Function(V)
c = Function(V)
u, p = TrialFunctions(W)
w, phi = TestFunctions(W)
gamma = TestFunction(M)
A = Tensor(dot(w, mu*u)*dx - div(w)*p*dx
+ phi*div(u)*dx + phi*c*p*dx)
C = Tensor(gamma*dot(u, n)*dS + gamma*dot(u, n)*ds(2))
F = Tensor(-dot(w, n)*p0*ds(1) + phi*f*dx)
G = Tensor(gamma*g*ds(2))
S = C * A.inv * C.T
E = C * A.inv * F - G
Smat = assemble(S).force_evaluation()
Evec = assemble(E).dat.data
|
Add simple code generation script
|
Add simple code generation script
|
Python
|
mit
|
thomasgibson/tabula-rasa
|
Add simple code generation script
|
"""This demonstration generates the relevant code from the Slate
expressions. Note that this is for code only; it's not solving any
particular PDE with given data.
See the main hybrid-mixed folder for an actual solution to a
mixed system using hybridization and static condensation.
"""
from firedrake import *
mesh = UnitSquareMesh(8, 8, quadrilateral=False)
RT = FiniteElement("RT", triangle, 1)
DG = FiniteElement("DG", triangle, 0)
T = FiniteElement("HDiv Trace", triangle, 0)
U = FunctionSpace(mesh, RT)
V = FunctionSpace(mesh, DG)
M = FunctionSpace(mesh, T)
W = U * V
n = FacetNormal(mesh)
p0 = Function(V)
g = Function(V)
f = Function(V)
mu = Function(V)
c = Function(V)
u, p = TrialFunctions(W)
w, phi = TestFunctions(W)
gamma = TestFunction(M)
A = Tensor(dot(w, mu*u)*dx - div(w)*p*dx
+ phi*div(u)*dx + phi*c*p*dx)
C = Tensor(gamma*dot(u, n)*dS + gamma*dot(u, n)*ds(2))
F = Tensor(-dot(w, n)*p0*ds(1) + phi*f*dx)
G = Tensor(gamma*g*ds(2))
S = C * A.inv * C.T
E = C * A.inv * F - G
Smat = assemble(S).force_evaluation()
Evec = assemble(E).dat.data
|
<commit_before><commit_msg>Add simple code generation script<commit_after>
|
"""This demonstration generates the relevant code from the Slate
expressions. Note that this is for code only; it's not solving any
particular PDE with given data.
See the main hybrid-mixed folder for an actual solution to a
mixed system using hybridization and static condensation.
"""
from firedrake import *
mesh = UnitSquareMesh(8, 8, quadrilateral=False)
RT = FiniteElement("RT", triangle, 1)
DG = FiniteElement("DG", triangle, 0)
T = FiniteElement("HDiv Trace", triangle, 0)
U = FunctionSpace(mesh, RT)
V = FunctionSpace(mesh, DG)
M = FunctionSpace(mesh, T)
W = U * V
n = FacetNormal(mesh)
p0 = Function(V)
g = Function(V)
f = Function(V)
mu = Function(V)
c = Function(V)
u, p = TrialFunctions(W)
w, phi = TestFunctions(W)
gamma = TestFunction(M)
A = Tensor(dot(w, mu*u)*dx - div(w)*p*dx
+ phi*div(u)*dx + phi*c*p*dx)
C = Tensor(gamma*dot(u, n)*dS + gamma*dot(u, n)*ds(2))
F = Tensor(-dot(w, n)*p0*ds(1) + phi*f*dx)
G = Tensor(gamma*g*ds(2))
S = C * A.inv * C.T
E = C * A.inv * F - G
Smat = assemble(S).force_evaluation()
Evec = assemble(E).dat.data
|
Add simple code generation script"""This demonstration generates the relevant code from the Slate
expressions. Note that this is for code only; it's not solving any
particular PDE with given data.
See the main hybrid-mixed folder for an actual solution to a
mixed system using hybridization and static condensation.
"""
from firedrake import *
mesh = UnitSquareMesh(8, 8, quadrilateral=False)
RT = FiniteElement("RT", triangle, 1)
DG = FiniteElement("DG", triangle, 0)
T = FiniteElement("HDiv Trace", triangle, 0)
U = FunctionSpace(mesh, RT)
V = FunctionSpace(mesh, DG)
M = FunctionSpace(mesh, T)
W = U * V
n = FacetNormal(mesh)
p0 = Function(V)
g = Function(V)
f = Function(V)
mu = Function(V)
c = Function(V)
u, p = TrialFunctions(W)
w, phi = TestFunctions(W)
gamma = TestFunction(M)
A = Tensor(dot(w, mu*u)*dx - div(w)*p*dx
+ phi*div(u)*dx + phi*c*p*dx)
C = Tensor(gamma*dot(u, n)*dS + gamma*dot(u, n)*ds(2))
F = Tensor(-dot(w, n)*p0*ds(1) + phi*f*dx)
G = Tensor(gamma*g*ds(2))
S = C * A.inv * C.T
E = C * A.inv * F - G
Smat = assemble(S).force_evaluation()
Evec = assemble(E).dat.data
|
<commit_before><commit_msg>Add simple code generation script<commit_after>"""This demonstration generates the relevant code from the Slate
expressions. Note that this is for code only; it's not solving any
particular PDE with given data.
See the main hybrid-mixed folder for an actual solution to a
mixed system using hybridization and static condensation.
"""
from firedrake import *
mesh = UnitSquareMesh(8, 8, quadrilateral=False)
RT = FiniteElement("RT", triangle, 1)
DG = FiniteElement("DG", triangle, 0)
T = FiniteElement("HDiv Trace", triangle, 0)
U = FunctionSpace(mesh, RT)
V = FunctionSpace(mesh, DG)
M = FunctionSpace(mesh, T)
W = U * V
n = FacetNormal(mesh)
p0 = Function(V)
g = Function(V)
f = Function(V)
mu = Function(V)
c = Function(V)
u, p = TrialFunctions(W)
w, phi = TestFunctions(W)
gamma = TestFunction(M)
A = Tensor(dot(w, mu*u)*dx - div(w)*p*dx
+ phi*div(u)*dx + phi*c*p*dx)
C = Tensor(gamma*dot(u, n)*dS + gamma*dot(u, n)*ds(2))
F = Tensor(-dot(w, n)*p0*ds(1) + phi*f*dx)
G = Tensor(gamma*g*ds(2))
S = C * A.inv * C.T
E = C * A.inv * F - G
Smat = assemble(S).force_evaluation()
Evec = assemble(E).dat.data
|
|
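The Slate expressions above assemble the condensed system S = C A^-1 C^T and E = C A^-1 F - G. A dense NumPy sketch of that same algebra, using stand-in arrays rather than Firedrake objects:
import numpy as np
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)  # local operator, kept well conditioned
C = rng.normal(size=(3, 6))                    # trace coupling
F = rng.normal(size=6)                         # local right-hand side
G = rng.normal(size=3)                         # trace right-hand side
S = C @ np.linalg.solve(A, C.T)                # S = C A^-1 C^T without forming A^-1
E = C @ np.linalg.solve(A, F) - G              # E = C A^-1 F - G
lam = np.linalg.solve(S, E)                    # condensed trace unknowns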
dc15f8eb74592e0bc26aad2a2850e6a04de1492c
|
scripts/newActivity.py
|
scripts/newActivity.py
|
#!/usr/bin/env python
from datetime import datetime
from pymongo import MongoClient
import re
from subprocess import call
import sys
# minutes
window = 30
if len(sys.argv) != 2:
print 'Usage: %s <logfile>' % sys.argv[0]
sys.exit(1)
now = datetime.now()
logformat = re.compile('(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d).*Created new projection.*?ColumnDescriptor\(/(\d{10})/.*')
active = set()
with open(sys.argv[1]) as input:
for line in input:
hit = logformat.match(line)
if hit:
timestamp = datetime.strptime(hit.group(1), '%Y-%m-%d %H:%M:%S')
if abs((now - timestamp).total_seconds()) < 60*window: # 30 minute window for now
active.add(hit.group(2))
#print 'Activity at %s for %s: %s' % (hit.group(1), hit.group(2), hit.group(0))
if len(active) > 0:
print 'Active accounts with new columns in the last %d minutes:' % window
conn = MongoClient('localhost', 27017)
accounts = {}
for acct in conn['accounts_v1_2_2']['accounts'].find():
accounts[acct['accountId']] = acct['email']
for acct in sorted(list(active)):
print ' %s (%s)' % (acct, accounts[acct])
conn.close()
|
Add in script to track new account activity
|
Add in script to track new account activity
|
Python
|
apache-2.0
|
drostron/quasar,slamdata/slamengine,djspiewak/quasar,slamdata/slamengine,drostron/quasar,slamdata/slamengine,drostron/quasar,quasar-analytics/quasar,quasar-analytics/quasar,slamdata/quasar,jedesah/Quasar,jedesah/Quasar,quasar-analytics/quasar,jedesah/Quasar,jedesah/Quasar,quasar-analytics/quasar,drostron/quasar
|
Add in script to track new account activity
|
#!/usr/bin/env python
from datetime import datetime
from pymongo import MongoClient
import re
from subprocess import call
import sys
# minutes
window = 30
if len(sys.argv) != 2:
print 'Usage: %s <logfile>' % sys.argv[0]
sys.exit(1)
now = datetime.now()
logformat = re.compile('(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d).*Created new projection.*?ColumnDescriptor\(/(\d{10})/.*')
active = set()
with open(sys.argv[1]) as input:
for line in input:
hit = logformat.match(line)
if hit:
timestamp = datetime.strptime(hit.group(1), '%Y-%m-%d %H:%M:%S')
if abs((now - timestamp).total_seconds()) < 60*window: # 30 minute window for now
active.add(hit.group(2))
#print 'Activity at %s for %s: %s' % (hit.group(1), hit.group(2), hit.group(0))
if len(active) > 0:
print 'Active accounts with new columns in the last %d minutes:' % window
conn = MongoClient('localhost', 27017)
accounts = {}
for acct in conn['accounts_v1_2_2']['accounts'].find():
accounts[acct['accountId']] = acct['email']
for acct in sorted(list(active)):
print ' %s (%s)' % (acct, accounts[acct])
conn.close()
|
<commit_before><commit_msg>Add in script to track new account activity<commit_after>
|
#!/usr/bin/env python
from datetime import datetime
from pymongo import MongoClient
import re
from subprocess import call
import sys
# minutes
window = 30
if len(sys.argv) != 2:
print 'Usage: %s <logfile>' % sys.argv[0]
sys.exit(1)
now = datetime.now()
logformat = re.compile('(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d).*Created new projection.*?ColumnDescriptor\(/(\d{10})/.*')
active = set()
with open(sys.argv[1]) as input:
for line in input:
hit = logformat.match(line)
if hit:
timestamp = datetime.strptime(hit.group(1), '%Y-%m-%d %H:%M:%S')
if abs((now - timestamp).total_seconds()) < 60*window: # 30 minute window for now
active.add(hit.group(2))
#print 'Activity at %s for %s: %s' % (hit.group(1), hit.group(2), hit.group(0))
if len(active) > 0:
print 'Active accounts with new columns in the last %d minutes:' % window
conn = MongoClient('localhost', 27017)
accounts = {}
for acct in conn['accounts_v1_2_2']['accounts'].find():
accounts[acct['accountId']] = acct['email']
for acct in sorted(list(active)):
print ' %s (%s)' % (acct, accounts[acct])
conn.close()
|
Add in script to track new account activity#!/usr/bin/env python
from datetime import datetime
from pymongo import MongoClient
import re
from subprocess import call
import sys
# minutes
window = 30
if len(sys.argv) != 2:
print 'Usage: %s <logfile>' % sys.argv[0]
sys.exit(1)
now = datetime.now()
logformat = re.compile('(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d).*Created new projection.*?ColumnDescriptor\(/(\d{10})/.*')
active = set()
with open(sys.argv[1]) as input:
for line in input:
hit = logformat.match(line)
if hit:
timestamp = datetime.strptime(hit.group(1), '%Y-%m-%d %H:%M:%S')
if abs((now - timestamp).total_seconds()) < 60*window: # 30 minute window for now
active.add(hit.group(2))
#print 'Activity at %s for %s: %s' % (hit.group(1), hit.group(2), hit.group(0))
if len(active) > 0:
print 'Active accounts with new columns in the last %d minutes:' % window
conn = MongoClient('localhost', 27017)
accounts = {}
for acct in conn['accounts_v1_2_2']['accounts'].find():
accounts[acct['accountId']] = acct['email']
for acct in sorted(list(active)):
print ' %s (%s)' % (acct, accounts[acct])
conn.close()
|
<commit_before><commit_msg>Add in script to track new account activity<commit_after>#!/usr/bin/env python
from datetime import datetime
from pymongo import MongoClient
import re
from subprocess import call
import sys
# minutes
window = 30
if len(sys.argv) != 2:
print 'Usage: %s <logfile>' % sys.argv[0]
sys.exit(1)
now = datetime.now()
logformat = re.compile('(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d).*Created new projection.*?ColumnDescriptor\(/(\d{10})/.*')
active = set()
with open(sys.argv[1]) as input:
for line in input:
hit = logformat.match(line)
if hit:
timestamp = datetime.strptime(hit.group(1), '%Y-%m-%d %H:%M:%S')
if abs((now - timestamp).total_seconds()) < 60*window: # 30 minute window for now
active.add(hit.group(2))
#print 'Activity at %s for %s: %s' % (hit.group(1), hit.group(2), hit.group(0))
if len(active) > 0:
print 'Active accounts with new columns in the last %d minutes:' % window
conn = MongoClient('localhost', 27017)
accounts = {}
for acct in conn['accounts_v1_2_2']['accounts'].find():
accounts[acct['accountId']] = acct['email']
for acct in sorted(list(active)):
print ' %s (%s)' % (acct, accounts[acct])
conn.close()
|
|
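The recency test above can also be written directly against a timedelta; a small self-contained sketch using the same timestamp layout as the log regex (the sample timestamps are invented):
from datetime import datetime, timedelta
WINDOW = timedelta(minutes=30)
def is_recent(stamp_text, now):
    stamp = datetime.strptime(stamp_text, '%Y-%m-%d %H:%M:%S')
    return now - stamp < WINDOW
print(is_recent('2013-01-01 12:00:00', datetime(2013, 1, 1, 12, 10)))  # True
print(is_recent('2013-01-01 12:00:00', datetime(2013, 1, 2, 12, 10)))  # False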
ee54f0ca3317b6f2119f15d42a7dd8d42d4f8059
|
standup/test_settings.py
|
standup/test_settings.py
|
from standup.settings import *
DATABASE_URL = 'sqlite://'
|
from standup.settings import *
# This looks wrong, but actually, it's an in-memory db uri
# and it causes our tests to run super fast!
DATABASE_URL = 'sqlite://'
|
Add comment of vital importance
|
Add comment of vital importance
This bumps me up another shade of green! Yay!
|
Python
|
bsd-3-clause
|
safwanrahman/standup,rehandalal/standup,willkg/standup,rlr/standup,rehandalal/standup,rlr/standup,mozilla/standup,rlr/standup,safwanrahman/standup,mozilla/standup,willkg/standup,willkg/standup,safwanrahman/standup,willkg/standup,safwanrahman/standup,mozilla/standup,rehandalal/standup,mozilla/standup
|
from standup.settings import *
DATABASE_URL = 'sqlite://'
Add comment of vital importance
This bumps me up another shade of green! Yay!
|
from standup.settings import *
# This looks wrong, but actually, it's an in-memory db uri
# and it causes our tests to run super fast!
DATABASE_URL = 'sqlite://'
|
<commit_before>from standup.settings import *
DATABASE_URL = 'sqlite://'
<commit_msg>Add comment of vital importance
This bumps me up another shade of green! Yay!<commit_after>
|
from standup.settings import *
# This looks wrong, but actually, it's an in-memory db uri
# and it causes our tests to run super fast!
DATABASE_URL = 'sqlite://'
|
from standup.settings import *
DATABASE_URL = 'sqlite://'
Add comment of vital importance
This bumps me up another shade of green! Yay!from standup.settings import *
# This looks wrong, but actually, it's an in-memory db uri
# and it causes our tests to run super fast!
DATABASE_URL = 'sqlite://'
|
<commit_before>from standup.settings import *
DATABASE_URL = 'sqlite://'
<commit_msg>Add comment of vital importance
This bumps me up another shade of green! Yay!<commit_after>from standup.settings import *
# This looks wrong, but actually, it's an in-memory db uri
# and it causes our tests to run super fast!
DATABASE_URL = 'sqlite://'
|
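For reference, 'sqlite://' with nothing after the scheme is SQLAlchemy's in-memory SQLite URL; a tiny sketch, independent of the settings module above:
from sqlalchemy import create_engine, text
engine = create_engine('sqlite://')  # no path -> private in-memory database
with engine.connect() as conn:
    print(conn.execute(text('select 42')).scalar())  # 42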
23289be44808baf78c01acb93761f661c0908022
|
scripts/retranslate_models.py
|
scripts/retranslate_models.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-06-04 09:55
from __future__ import unicode_literals
from django.db import connection
from django.utils.translation import activate, _trans, ugettext as _
from tenant_extras.middleware import tenant_translation
from bluebottle.clients.utils import LocalTenant
from bluebottle.clients.models import Client
from bluebottle.geo.models import Country
from bluebottle.bb_projects.models import ProjectTheme
from bluebottle.bb_projects.models import ProjectPhase
from bluebottle.tasks.models import Skill
def run():
for tenant in Client.objects.all():
with LocalTenant(tenant):
activate('en')
for model in [Country, ProjectTheme, ProjectPhase, Skill]:
for obj in model.objects.all():
for translation in obj.translations.all():
activate(translation.language_code)
_trans._active.value = tenant_translation(
translation.language_code, connection.tenant.client_name
)
translation.name = _(translation.name)
print translation.name, translation.language_code
translation.save()
|
Add script that re-translates models
|
Add script that re-translates models
|
Python
|
bsd-3-clause
|
onepercentclub/bluebottle,onepercentclub/bluebottle,onepercentclub/bluebottle,onepercentclub/bluebottle,onepercentclub/bluebottle
|
Add script that re-translates models
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-06-04 09:55
from __future__ import unicode_literals
from django.db import connection
from django.utils.translation import activate, _trans, ugettext as _
from tenant_extras.middleware import tenant_translation
from bluebottle.clients.utils import LocalTenant
from bluebottle.clients.models import Client
from bluebottle.geo.models import Country
from bluebottle.bb_projects.models import ProjectTheme
from bluebottle.bb_projects.models import ProjectPhase
from bluebottle.tasks.models import Skill
def run():
for tenant in Client.objects.all():
with LocalTenant(tenant):
activate('en')
for model in [Country, ProjectTheme, ProjectPhase, Skill]:
for obj in model.objects.all():
for translation in obj.translations.all():
activate(translation.language_code)
_trans._active.value = tenant_translation(
translation.language_code, connection.tenant.client_name
)
translation.name = _(translation.name)
print translation.name, translation.language_code
translation.save()
|
<commit_before><commit_msg>Add script that re-translates models<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-06-04 09:55
from __future__ import unicode_literals
from django.db import connection
from django.utils.translation import activate, _trans, ugettext as _
from tenant_extras.middleware import tenant_translation
from bluebottle.clients.utils import LocalTenant
from bluebottle.clients.models import Client
from bluebottle.geo.models import Country
from bluebottle.bb_projects.models import ProjectTheme
from bluebottle.bb_projects.models import ProjectPhase
from bluebottle.tasks.models import Skill
def run():
for tenant in Client.objects.all():
with LocalTenant(tenant):
activate('en')
for model in [Country, ProjectTheme, ProjectPhase, Skill]:
for obj in model.objects.all():
for translation in obj.translations.all():
activate(translation.language_code)
_trans._active.value = tenant_translation(
translation.language_code, connection.tenant.client_name
)
translation.name = _(translation.name)
print translation.name, translation.language_code
translation.save()
|
Add script that re-translates models# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-06-04 09:55
from __future__ import unicode_literals
from django.db import connection
from django.utils.translation import activate, _trans, ugettext as _
from tenant_extras.middleware import tenant_translation
from bluebottle.clients.utils import LocalTenant
from bluebottle.clients.models import Client
from bluebottle.geo.models import Country
from bluebottle.bb_projects.models import ProjectTheme
from bluebottle.bb_projects.models import ProjectPhase
from bluebottle.tasks.models import Skill
def run():
for tenant in Client.objects.all():
with LocalTenant(tenant):
activate('en')
for model in [Country, ProjectTheme, ProjectPhase, Skill]:
for obj in model.objects.all():
for translation in obj.translations.all():
activate(translation.language_code)
_trans._active.value = tenant_translation(
translation.language_code, connection.tenant.client_name
)
translation.name = _(translation.name)
print translation.name, translation.language_code
translation.save()
|
<commit_before><commit_msg>Add script that re-translates models<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-06-04 09:55
from __future__ import unicode_literals
from django.db import connection
from django.utils.translation import activate, _trans, ugettext as _
from tenant_extras.middleware import tenant_translation
from bluebottle.clients.utils import LocalTenant
from bluebottle.clients.models import Client
from bluebottle.geo.models import Country
from bluebottle.bb_projects.models import ProjectTheme
from bluebottle.bb_projects.models import ProjectPhase
from bluebottle.tasks.models import Skill
def run():
for tenant in Client.objects.all():
with LocalTenant(tenant):
activate('en')
for model in [Country, ProjectTheme, ProjectPhase, Skill]:
for obj in model.objects.all():
for translation in obj.translations.all():
activate(translation.language_code)
_trans._active.value = tenant_translation(
translation.language_code, connection.tenant.client_name
)
translation.name = _(translation.name)
print translation.name, translation.language_code
translation.save()
|
|
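The core step of the script above is re-running a stored string through a language's gettext catalog. A reduced sketch of just that step, assuming a configured Django project and ignoring the tenant handling:
from django.utils import translation
from django.utils.translation import ugettext as _
def retranslate(name, language_code):
    # Look the source string up in the catalog for language_code;
    # the original string comes back unchanged when no translation exists.
    with translation.override(language_code):
        return _(name)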
1734198f0471c55dd872e14b31ce59b98baf576f
|
violations/tests/test_py_unittest.py
|
violations/tests/test_py_unittest.py
|
from django.test import TestCase
from tasks.const import STATUS_SUCCESS, STATUS_FAILED
from ..py_unittest import py_unittest_violation
from .base import get_content
class PyUnittestViolationCase(TestCase):
"""Python unittest violation case"""
def test_success(self):
"""Test success result"""
data = {
'raw': get_content('py_unittest_success.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_SUCCESS)
self.assertEqual(result['plot']['failures'], 0)
self.assertEqual(result['plot']['errors'], 0)
self.assertEqual(result['plot']['test_count'], 50)
def test_fail(self):
"""Test fail result"""
data = {
'raw': get_content('py_unittest_fail.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_FAILED)
self.assertEqual(result['plot']['failures'], 2)
self.assertEqual(result['plot']['errors'], 1)
self.assertEqual(result['plot']['test_count'], 50)
|
Add py unittest violation case
|
Add py unittest violation case
|
Python
|
mit
|
nvbn/coviolations_web,nvbn/coviolations_web
|
Add py unittest violation case
|
from django.test import TestCase
from tasks.const import STATUS_SUCCESS, STATUS_FAILED
from ..py_unittest import py_unittest_violation
from .base import get_content
class PyUnittestViolationCase(TestCase):
"""Python unittest violation case"""
def test_success(self):
"""Test success result"""
data = {
'raw': get_content('py_unittest_success.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_SUCCESS)
self.assertEqual(result['plot']['failures'], 0)
self.assertEqual(result['plot']['errors'], 0)
self.assertEqual(result['plot']['test_count'], 50)
def test_fail(self):
"""Test fail result"""
data = {
'raw': get_content('py_unittest_fail.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_FAILED)
self.assertEqual(result['plot']['failures'], 2)
self.assertEqual(result['plot']['errors'], 1)
self.assertEqual(result['plot']['test_count'], 50)
|
<commit_before><commit_msg>Add py unittest violation case<commit_after>
|
from django.test import TestCase
from tasks.const import STATUS_SUCCESS, STATUS_FAILED
from ..py_unittest import py_unittest_violation
from .base import get_content
class PyUnittestViolationCase(TestCase):
"""Python unittest violation case"""
def test_success(self):
"""Test success result"""
data = {
'raw': get_content('py_unittest_success.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_SUCCESS)
self.assertEqual(result['plot']['failures'], 0)
self.assertEqual(result['plot']['errors'], 0)
self.assertEqual(result['plot']['test_count'], 50)
def test_fail(self):
"""Test fail result"""
data = {
'raw': get_content('py_unittest_fail.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_FAILED)
self.assertEqual(result['plot']['failures'], 2)
self.assertEqual(result['plot']['errors'], 1)
self.assertEqual(result['plot']['test_count'], 50)
|
Add py unittest violation casefrom django.test import TestCase
from tasks.const import STATUS_SUCCESS, STATUS_FAILED
from ..py_unittest import py_unittest_violation
from .base import get_content
class PyUnittestViolationCase(TestCase):
"""Python unittest violation case"""
def test_success(self):
"""Test success result"""
data = {
'raw': get_content('py_unittest_success.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_SUCCESS)
self.assertEqual(result['plot']['failures'], 0)
self.assertEqual(result['plot']['errors'], 0)
self.assertEqual(result['plot']['test_count'], 50)
def test_fail(self):
"""Test fail result"""
data = {
'raw': get_content('py_unittest_fail.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_FAILED)
self.assertEqual(result['plot']['failures'], 2)
self.assertEqual(result['plot']['errors'], 1)
self.assertEqual(result['plot']['test_count'], 50)
|
<commit_before><commit_msg>Add py unittest violation case<commit_after>from django.test import TestCase
from tasks.const import STATUS_SUCCESS, STATUS_FAILED
from ..py_unittest import py_unittest_violation
from .base import get_content
class PyUnittestViolationCase(TestCase):
"""Python unittest violation case"""
def test_success(self):
"""Test success result"""
data = {
'raw': get_content('py_unittest_success.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_SUCCESS)
self.assertEqual(result['plot']['failures'], 0)
self.assertEqual(result['plot']['errors'], 0)
self.assertEqual(result['plot']['test_count'], 50)
def test_fail(self):
"""Test fail result"""
data = {
'raw': get_content('py_unittest_fail.out'),
}
result = py_unittest_violation(data)
self.assertEqual(result['status'], STATUS_FAILED)
self.assertEqual(result['plot']['failures'], 2)
self.assertEqual(result['plot']['errors'], 1)
self.assertEqual(result['plot']['test_count'], 50)
|
|
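The tests above pin down the shape py_unittest_violation returns but not its implementation. A hypothetical sketch of the kind of parsing they imply; the regexes and structure here are assumptions, not the project's code.
import re
def parse_unittest_output(raw):
    test_count = int(re.search(r'Ran (\d+) tests?', raw).group(1))
    failures = errors = 0
    tail = re.search(r'FAILED \((.*)\)', raw)
    if tail:
        for part in tail.group(1).split(','):
            key, _, value = part.strip().partition('=')
            if key == 'failures':
                failures = int(value)
            elif key == 'errors':
                errors = int(value)
    return {'test_count': test_count, 'failures': failures, 'errors': errors}
print(parse_unittest_output("Ran 50 tests in 0.2s\n\nFAILED (failures=2, errors=1)"))
# {'test_count': 50, 'failures': 2, 'errors': 1}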
758e2777310935083760dfb50e97062a93214720
|
tests/changes/api/test_auth_index.py
|
tests/changes/api/test_auth_index.py
|
from changes.testutils import APITestCase
class AuthIndexTest(APITestCase):
path = '/api/0/auth/'
def test_anonymous(self):
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is False
def test_authenticated(self):
self.login_default()
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is True
assert data['user'] == {
'id': self.default_user.id.hex,
'email': self.default_user.email,
}
|
Add tests for auth index
|
Add tests for auth index
|
Python
|
apache-2.0
|
dropbox/changes,bowlofstew/changes,bowlofstew/changes,dropbox/changes,dropbox/changes,dropbox/changes,wfxiang08/changes,wfxiang08/changes,wfxiang08/changes,bowlofstew/changes,bowlofstew/changes,wfxiang08/changes
|
Add tests for auth index
|
from changes.testutils import APITestCase
class AuthIndexTest(APITestCase):
path = '/api/0/auth/'
def test_anonymous(self):
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is False
def test_authenticated(self):
self.login_default()
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is True
assert data['user'] == {
'id': self.default_user.id.hex,
'email': self.default_user.email,
}
|
<commit_before><commit_msg>Add tests for auth index<commit_after>
|
from changes.testutils import APITestCase
class AuthIndexTest(APITestCase):
path = '/api/0/auth/'
def test_anonymous(self):
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is False
def test_authenticated(self):
self.login_default()
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is True
assert data['user'] == {
'id': self.default_user.id.hex,
'email': self.default_user.email,
}
|
Add tests for auth indexfrom changes.testutils import APITestCase
class AuthIndexTest(APITestCase):
path = '/api/0/auth/'
def test_anonymous(self):
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is False
def test_authenticated(self):
self.login_default()
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is True
assert data['user'] == {
'id': self.default_user.id.hex,
'email': self.default_user.email,
}
|
<commit_before><commit_msg>Add tests for auth index<commit_after>from changes.testutils import APITestCase
class AuthIndexTest(APITestCase):
path = '/api/0/auth/'
def test_anonymous(self):
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is False
def test_authenticated(self):
self.login_default()
resp = self.client.get(self.path)
assert resp.status_code == 200
data = self.unserialize(resp)
assert data['authenticated'] is True
assert data['user'] == {
'id': self.default_user.id.hex,
'email': self.default_user.email,
}
|
|
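The payload shape asserted above reduces to a small dictionary; a framework-free sketch of it, illustrative only and not the project's view code:
def auth_payload(current_user=None):
    if current_user is None:
        return {'authenticated': False}
    return {
        'authenticated': True,
        'user': {'id': current_user.id.hex, 'email': current_user.email},
    }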
1f2cee72bfec767e0dcdc37b86c8ab745d2b4544
|
project/api/migrations/0017_auto_20180331_1322.py
|
project/api/migrations/0017_auto_20180331_1322.py
|
# Generated by Django 2.0.3 on 2018-03-31 20:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0016_auto_20180328_0601'),
]
operations = [
migrations.AlterField(
model_name='office',
name='code',
field=models.IntegerField(blank=True, choices=[('International', [(100, 'SCJC Chair'), (110, 'SCJC Chair Past'), (120, 'SCJC CA'), (130, 'SCJC MUS'), (140, 'SCJC PER'), (150, 'SCJC SNG'), (160, 'SCJC Chart'), (170, 'SCJC Admin')]), ('District', [(210, 'DRCJ'), (220, 'DRCJ Assistant'), (230, 'JUDGE CA'), (240, 'JUDGE MUS'), (250, 'JUDGE PER'), (260, 'JUDGE SNG'), (270, 'CANDIDATE CA'), (280, 'CANDIDATE MUS'), (290, 'CANDIDATE PER'), (295, 'CANDIDATE SNG')]), ('Group', [(310, 'CPRES'), (320, 'CSEC'), (320, 'CDIR'), (340, 'CASS'), (350, 'CMAN'), (410, 'QADM')])], help_text='\n The short-form office code.', null=True),
),
]
|
Add candidate status to DB field
|
Add candidate status to DB field
|
Python
|
bsd-2-clause
|
dbinetti/barberscore-django,barberscore/barberscore-api,barberscore/barberscore-api,dbinetti/barberscore,dbinetti/barberscore-django,barberscore/barberscore-api,barberscore/barberscore-api,dbinetti/barberscore
|
Add candidate status to DB field
|
# Generated by Django 2.0.3 on 2018-03-31 20:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0016_auto_20180328_0601'),
]
operations = [
migrations.AlterField(
model_name='office',
name='code',
field=models.IntegerField(blank=True, choices=[('International', [(100, 'SCJC Chair'), (110, 'SCJC Chair Past'), (120, 'SCJC CA'), (130, 'SCJC MUS'), (140, 'SCJC PER'), (150, 'SCJC SNG'), (160, 'SCJC Chart'), (170, 'SCJC Admin')]), ('District', [(210, 'DRCJ'), (220, 'DRCJ Assistant'), (230, 'JUDGE CA'), (240, 'JUDGE MUS'), (250, 'JUDGE PER'), (260, 'JUDGE SNG'), (270, 'CANDIDATE CA'), (280, 'CANDIDATE MUS'), (290, 'CANDIDATE PER'), (295, 'CANDIDATE SNG')]), ('Group', [(310, 'CPRES'), (320, 'CSEC'), (320, 'CDIR'), (340, 'CASS'), (350, 'CMAN'), (410, 'QADM')])], help_text='\n The short-form office code.', null=True),
),
]
|
<commit_before><commit_msg>Add candidate status to DB field<commit_after>
|
# Generated by Django 2.0.3 on 2018-03-31 20:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0016_auto_20180328_0601'),
]
operations = [
migrations.AlterField(
model_name='office',
name='code',
field=models.IntegerField(blank=True, choices=[('International', [(100, 'SCJC Chair'), (110, 'SCJC Chair Past'), (120, 'SCJC CA'), (130, 'SCJC MUS'), (140, 'SCJC PER'), (150, 'SCJC SNG'), (160, 'SCJC Chart'), (170, 'SCJC Admin')]), ('District', [(210, 'DRCJ'), (220, 'DRCJ Assistant'), (230, 'JUDGE CA'), (240, 'JUDGE MUS'), (250, 'JUDGE PER'), (260, 'JUDGE SNG'), (270, 'CANDIDATE CA'), (280, 'CANDIDATE MUS'), (290, 'CANDIDATE PER'), (295, 'CANDIDATE SNG')]), ('Group', [(310, 'CPRES'), (320, 'CSEC'), (320, 'CDIR'), (340, 'CASS'), (350, 'CMAN'), (410, 'QADM')])], help_text='\n The short-form office code.', null=True),
),
]
|
Add candidate status to DB field# Generated by Django 2.0.3 on 2018-03-31 20:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0016_auto_20180328_0601'),
]
operations = [
migrations.AlterField(
model_name='office',
name='code',
field=models.IntegerField(blank=True, choices=[('International', [(100, 'SCJC Chair'), (110, 'SCJC Chair Past'), (120, 'SCJC CA'), (130, 'SCJC MUS'), (140, 'SCJC PER'), (150, 'SCJC SNG'), (160, 'SCJC Chart'), (170, 'SCJC Admin')]), ('District', [(210, 'DRCJ'), (220, 'DRCJ Assistant'), (230, 'JUDGE CA'), (240, 'JUDGE MUS'), (250, 'JUDGE PER'), (260, 'JUDGE SNG'), (270, 'CANDIDATE CA'), (280, 'CANDIDATE MUS'), (290, 'CANDIDATE PER'), (295, 'CANDIDATE SNG')]), ('Group', [(310, 'CPRES'), (320, 'CSEC'), (320, 'CDIR'), (340, 'CASS'), (350, 'CMAN'), (410, 'QADM')])], help_text='\n The short-form office code.', null=True),
),
]
|
<commit_before><commit_msg>Add candidate status to DB field<commit_after># Generated by Django 2.0.3 on 2018-03-31 20:22
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('api', '0016_auto_20180328_0601'),
]
operations = [
migrations.AlterField(
model_name='office',
name='code',
field=models.IntegerField(blank=True, choices=[('International', [(100, 'SCJC Chair'), (110, 'SCJC Chair Past'), (120, 'SCJC CA'), (130, 'SCJC MUS'), (140, 'SCJC PER'), (150, 'SCJC SNG'), (160, 'SCJC Chart'), (170, 'SCJC Admin')]), ('District', [(210, 'DRCJ'), (220, 'DRCJ Assistant'), (230, 'JUDGE CA'), (240, 'JUDGE MUS'), (250, 'JUDGE PER'), (260, 'JUDGE SNG'), (270, 'CANDIDATE CA'), (280, 'CANDIDATE MUS'), (290, 'CANDIDATE PER'), (295, 'CANDIDATE SNG')]), ('Group', [(310, 'CPRES'), (320, 'CSEC'), (320, 'CDIR'), (340, 'CASS'), (350, 'CMAN'), (410, 'QADM')])], help_text='\n The short-form office code.', null=True),
),
]
|
|
ad3dbee189ac900baf3a84f3b0e843d202f369f8
|
tests/unit/test_test_module_names.py
|
tests/unit/test_test_module_names.py
|
# -*- coding: utf-8 -*-
'''
tests.unit.doc_test
~~~~~~~~~~~~~~~~~~~~
'''
# Import Python libs
from __future__ import absolute_import
import os
# Import Salt Testing libs
from salttesting import TestCase
# Import Salt libs
import integration
EXCLUDED_DIRS = [
'tests/pkg',
'tests/perf',
'tests/unit/utils/cache_mods',
'tests/unit/modules/inspectlib',
'tests/unit/modules/zypp/',
'tests/unit/templates/files',
'tests/integration/files/',
'tests/integration/cloud/helpers',
]
EXCLUDED_FILES = [
'tests/eventlisten.py',
'tests/buildpackage.py',
'tests/saltsh.py',
'tests/minionswarm.py',
'tests/wheeltest.py',
'tests/runtests.py',
'tests/jenkins.py',
'tests/salt-tcpdump.py',
'tests/conftest.py',
'tests/packdump.py',
'tests/consist.py',
'tests/modparser.py',
'tests/committer_parser.py',
'tests/integration/utils/testprogram.py',
'tests/utils/cptestcase.py'
]
class BadTestModuleNamesTestCase(TestCase):
'''
Unit test case for testing bad names for test modules
'''
maxDiff = None
def test_module_name(self):
'''
Make sure all test modules conform to the test_*.py naming scheme
'''
excluded_dirs = tuple(EXCLUDED_DIRS)
code_dir = integration.CODE_DIR
tests_dir = os.path.join(code_dir, 'tests')
bad_names = []
for root, dirs, files in os.walk(tests_dir):
reldir = os.path.relpath(root, code_dir)
if reldir.startswith(excluded_dirs) or reldir.endswith('__pycache__'):
continue
for fname in files:
if fname == '__init__.py' or not fname.endswith('.py'):
continue
relpath = os.path.join(reldir, fname)
if relpath in EXCLUDED_FILES:
continue
if not fname.startswith('test_'):
bad_names.append(relpath)
error_msg = '\n\nPlease rename the following files:\n'
for path in bad_names:
directory, filename = path.rsplit(os.sep, 1)
filename, ext = os.path.splitext(filename)
error_msg += ' {} -> {}/test_{}.py\n'.format(path, directory, filename.split('_test')[0])
error_msg += '\nIf you believe one of the entries above should be ignored, please add it to either\n'
error_msg += '\'EXCLUDED_DIRS\' or \'EXCLUDED_FILES\' in \'tests/unit/test_module_names.py\'.\n'
error_msg += 'If it is a tests module, then please rename as suggested.'
self.assertEqual([], bad_names, error_msg)
|
Add test case to make sure we always proper test module names from now on
|
Add test case to make sure we always proper test module names from now on
|
Python
|
apache-2.0
|
saltstack/salt,saltstack/salt,saltstack/salt,saltstack/salt,saltstack/salt
|
Add test case to make sure we always proper test module names from now on
|
# -*- coding: utf-8 -*-
'''
tests.unit.doc_test
~~~~~~~~~~~~~~~~~~~~
'''
# Import Python libs
from __future__ import absolute_import
import os
# Import Salt Testing libs
from salttesting import TestCase
# Import Salt libs
import integration
EXCLUDED_DIRS = [
'tests/pkg',
'tests/perf',
'tests/unit/utils/cache_mods',
'tests/unit/modules/inspectlib',
'tests/unit/modules/zypp/',
'tests/unit/templates/files',
'tests/integration/files/',
'tests/integration/cloud/helpers',
]
EXCLUDED_FILES = [
'tests/eventlisten.py',
'tests/buildpackage.py',
'tests/saltsh.py',
'tests/minionswarm.py',
'tests/wheeltest.py',
'tests/runtests.py',
'tests/jenkins.py',
'tests/salt-tcpdump.py',
'tests/conftest.py',
'tests/packdump.py',
'tests/consist.py',
'tests/modparser.py',
'tests/committer_parser.py',
'tests/integration/utils/testprogram.py',
'tests/utils/cptestcase.py'
]
class BadTestModuleNamesTestCase(TestCase):
'''
Unit test case for testing bad names for test modules
'''
maxDiff = None
def test_module_name(self):
'''
Make sure all test modules conform to the test_*.py naming scheme
'''
excluded_dirs = tuple(EXCLUDED_DIRS)
code_dir = integration.CODE_DIR
tests_dir = os.path.join(code_dir, 'tests')
bad_names = []
for root, dirs, files in os.walk(tests_dir):
reldir = os.path.relpath(root, code_dir)
if reldir.startswith(excluded_dirs) or reldir.endswith('__pycache__'):
continue
for fname in files:
if fname == '__init__.py' or not fname.endswith('.py'):
continue
relpath = os.path.join(reldir, fname)
if relpath in EXCLUDED_FILES:
continue
if not fname.startswith('test_'):
bad_names.append(relpath)
error_msg = '\n\nPlease rename the following files:\n'
for path in bad_names:
directory, filename = path.rsplit(os.sep, 1)
filename, ext = os.path.splitext(filename)
error_msg += ' {} -> {}/test_{}.py\n'.format(path, directory, filename.split('_test')[0])
error_msg += '\nIf you believe one of the entries above should be ignored, please add it to either\n'
error_msg += '\'EXCLUDED_DIRS\' or \'EXCLUDED_FILES\' in \'tests/unit/test_module_names.py\'.\n'
error_msg += 'If it is a tests module, then please rename as suggested.'
self.assertEqual([], bad_names, error_msg)
|
<commit_before><commit_msg>Add test case to make sure we always proper test module names from now on<commit_after>
|
# -*- coding: utf-8 -*-
'''
tests.unit.doc_test
~~~~~~~~~~~~~~~~~~~~
'''
# Import Python libs
from __future__ import absolute_import
import os
# Import Salt Testing libs
from salttesting import TestCase
# Import Salt libs
import integration
EXCLUDED_DIRS = [
'tests/pkg',
'tests/perf',
'tests/unit/utils/cache_mods',
'tests/unit/modules/inspectlib',
'tests/unit/modules/zypp/',
'tests/unit/templates/files',
'tests/integration/files/',
'tests/integration/cloud/helpers',
]
EXCLUDED_FILES = [
'tests/eventlisten.py',
'tests/buildpackage.py',
'tests/saltsh.py',
'tests/minionswarm.py',
'tests/wheeltest.py',
'tests/runtests.py',
'tests/jenkins.py',
'tests/salt-tcpdump.py',
'tests/conftest.py',
'tests/packdump.py',
'tests/consist.py',
'tests/modparser.py',
'tests/committer_parser.py',
'tests/integration/utils/testprogram.py',
'tests/utils/cptestcase.py'
]
class BadTestModuleNamesTestCase(TestCase):
'''
Unit test case for testing bad names for test modules
'''
maxDiff = None
def test_module_name(self):
'''
Make sure all test modules conform to the test_*.py naming scheme
'''
excluded_dirs = tuple(EXCLUDED_DIRS)
code_dir = integration.CODE_DIR
tests_dir = os.path.join(code_dir, 'tests')
bad_names = []
for root, dirs, files in os.walk(tests_dir):
reldir = os.path.relpath(root, code_dir)
if reldir.startswith(excluded_dirs) or reldir.endswith('__pycache__'):
continue
for fname in files:
if fname == '__init__.py' or not fname.endswith('.py'):
continue
relpath = os.path.join(reldir, fname)
if relpath in EXCLUDED_FILES:
continue
if not fname.startswith('test_'):
bad_names.append(relpath)
error_msg = '\n\nPlease rename the following files:\n'
for path in bad_names:
directory, filename = path.rsplit(os.sep, 1)
filename, ext = os.path.splitext(filename)
error_msg += ' {} -> {}/test_{}.py\n'.format(path, directory, filename.split('_test')[0])
error_msg += '\nIf you believe one of the entries above should be ignored, please add it to either\n'
error_msg += '\'EXCLUDED_DIRS\' or \'EXCLUDED_FILES\' in \'tests/unit/test_module_names.py\'.\n'
error_msg += 'If it is a tests module, then please rename as suggested.'
self.assertEqual([], bad_names, error_msg)
|
Add test case to make sure we always proper test module names from now on# -*- coding: utf-8 -*-
'''
tests.unit.doc_test
~~~~~~~~~~~~~~~~~~~~
'''
# Import Python libs
from __future__ import absolute_import
import os
# Import Salt Testing libs
from salttesting import TestCase
# Import Salt libs
import integration
EXCLUDED_DIRS = [
'tests/pkg',
'tests/perf',
'tests/unit/utils/cache_mods',
'tests/unit/modules/inspectlib',
'tests/unit/modules/zypp/',
'tests/unit/templates/files',
'tests/integration/files/',
'tests/integration/cloud/helpers',
]
EXCLUDED_FILES = [
'tests/eventlisten.py',
'tests/buildpackage.py',
'tests/saltsh.py',
'tests/minionswarm.py',
'tests/wheeltest.py',
'tests/runtests.py',
'tests/jenkins.py',
'tests/salt-tcpdump.py',
'tests/conftest.py',
'tests/packdump.py',
'tests/consist.py',
'tests/modparser.py',
'tests/committer_parser.py',
'tests/integration/utils/testprogram.py',
'tests/utils/cptestcase.py'
]
class BadTestModuleNamesTestCase(TestCase):
'''
Unit test case for testing bad names for test modules
'''
maxDiff = None
def test_module_name(self):
'''
Make sure all test modules conform to the test_*.py naming scheme
'''
excluded_dirs = tuple(EXCLUDED_DIRS)
code_dir = integration.CODE_DIR
tests_dir = os.path.join(code_dir, 'tests')
bad_names = []
for root, dirs, files in os.walk(tests_dir):
reldir = os.path.relpath(root, code_dir)
if reldir.startswith(excluded_dirs) or reldir.endswith('__pycache__'):
continue
for fname in files:
if fname == '__init__.py' or not fname.endswith('.py'):
continue
relpath = os.path.join(reldir, fname)
if relpath in EXCLUDED_FILES:
continue
if not fname.startswith('test_'):
bad_names.append(relpath)
error_msg = '\n\nPlease rename the following files:\n'
for path in bad_names:
directory, filename = path.rsplit(os.sep, 1)
filename, ext = os.path.splitext(filename)
error_msg += ' {} -> {}/test_{}.py\n'.format(path, directory, filename.split('_test')[0])
error_msg += '\nIf you believe one of the entries above should be ignored, please add it to either\n'
error_msg += '\'EXCLUDED_DIRS\' or \'EXCLUDED_FILES\' in \'tests/unit/test_module_names.py\'.\n'
error_msg += 'If it is a tests module, then please rename as suggested.'
self.assertEqual([], bad_names, error_msg)
|
<commit_before><commit_msg>Add test case to make sure we always proper test module names from now on<commit_after># -*- coding: utf-8 -*-
'''
tests.unit.doc_test
~~~~~~~~~~~~~~~~~~~~
'''
# Import Python libs
from __future__ import absolute_import
import os
# Import Salt Testing libs
from salttesting import TestCase
# Import Salt libs
import integration
EXCLUDED_DIRS = [
'tests/pkg',
'tests/perf',
'tests/unit/utils/cache_mods',
'tests/unit/modules/inspectlib',
'tests/unit/modules/zypp/',
'tests/unit/templates/files',
'tests/integration/files/',
'tests/integration/cloud/helpers',
]
EXCLUDED_FILES = [
'tests/eventlisten.py',
'tests/buildpackage.py',
'tests/saltsh.py',
'tests/minionswarm.py',
'tests/wheeltest.py',
'tests/runtests.py',
'tests/jenkins.py',
'tests/salt-tcpdump.py',
'tests/conftest.py',
'tests/packdump.py',
'tests/consist.py',
'tests/modparser.py',
'tests/committer_parser.py',
'tests/integration/utils/testprogram.py',
'tests/utils/cptestcase.py'
]
class BadTestModuleNamesTestCase(TestCase):
'''
Unit test case for testing bad names for test modules
'''
maxDiff = None
def test_module_name(self):
'''
Make sure all test modules conform to the test_*.py naming scheme
'''
excluded_dirs = tuple(EXCLUDED_DIRS)
code_dir = integration.CODE_DIR
tests_dir = os.path.join(code_dir, 'tests')
bad_names = []
for root, dirs, files in os.walk(tests_dir):
reldir = os.path.relpath(root, code_dir)
if reldir.startswith(excluded_dirs) or reldir.endswith('__pycache__'):
continue
for fname in files:
if fname == '__init__.py' or not fname.endswith('.py'):
continue
relpath = os.path.join(reldir, fname)
if relpath in EXCLUDED_FILES:
continue
if not fname.startswith('test_'):
bad_names.append(relpath)
error_msg = '\n\nPlease rename the following files:\n'
for path in bad_names:
directory, filename = path.rsplit(os.sep, 1)
filename, ext = os.path.splitext(filename)
error_msg += ' {} -> {}/test_{}.py\n'.format(path, directory, filename.split('_test')[0])
error_msg += '\nIf you believe one of the entries above should be ignored, please add it to either\n'
error_msg += '\'EXCLUDED_DIRS\' or \'EXCLUDED_FILES\' in \'tests/unit/test_module_names.py\'.\n'
error_msg += 'If it is a tests module, then please rename as suggested.'
self.assertEqual([], bad_names, error_msg)
|
|
75942935696a7aee2ee16013fc7341728fb8f18d
|
__init__.py
|
__init__.py
|
"""
This file is part of library PyRandLib.
It is provided under MIT License.
Please see files README.md and LICENSE.
Copyright (c) 2017 Philippe Schmouker
"""
from .baselcg import BaseLCG
from .baselfib64 import BaseLFib64
from .basemrg import BaseMRG
from .baserandom import BaseRandom
from .fastrand32 import FastRand32
from .fastrand63 import FastRand63
from .lfib78 import LFib78
from .lfib116 import LFib116
from .lfib668 import LFib668
from .lfib1340 import LFib1340
from .mrgrand287 import MRGRand287
from .mrgrand1457 import MRGRand1457
from .mrgrand49507 import MRGRand49507
|
Package init module now added.
|
Package init module now added.
|
Python
|
mit
|
schmouk/PyRandLib
|
Package init module now added.
|
"""
This file is part of library PyRandLib.
It is provided under MIT License.
Please see files README.md and LICENSE.
Copyright (c) 2017 Philippe Schmouker
"""
from .baselcg import BaseLCG
from .baselfib64 import BaseLFib64
from .basemrg import BaseMRG
from .baserandom import BaseRandom
from .fastrand32 import FastRand32
from .fastrand63 import FastRand63
from .lfib78 import LFib78
from .lfib116 import LFib116
from .lfib668 import LFib668
from .lfib1340 import LFib1340
from .mrgrand287 import MRGRand287
from .mrgrand1457 import MRGRand1457
from .mrgrand49507 import MRGRand49507
|
<commit_before><commit_msg>Package init module now added.<commit_after>
|
"""
This file is part of library PyRandLib.
It is provided under MIT License.
Please see files README.md and LICENSE.
Copyright (c) 2017 Philippe Schmouker
"""
from .baselcg import BaseLCG
from .baselfib64 import BaseLFib64
from .basemrg import BaseMRG
from .baserandom import BaseRandom
from .fastrand32 import FastRand32
from .fastrand63 import FastRand63
from .lfib78 import LFib78
from .lfib116 import LFib116
from .lfib668 import LFib668
from .lfib1340 import LFib1340
from .mrgrand287 import MRGRand287
from .mrgrand1457 import MRGRand1457
from .mrgrand49507 import MRGRand49507
|
Package init module now added."""
This file is part of library PyRandLib.
It is provided under MIT License.
Please see files README.md and LICENSE.
Copyright (c) 2017 Philippe Schmouker
"""
from .baselcg import BaseLCG
from .baselfib64 import BaseLFib64
from .basemrg import BaseMRG
from .baserandom import BaseRandom
from .fastrand32 import FastRand32
from .fastrand63 import FastRand63
from .lfib78 import LFib78
from .lfib116 import LFib116
from .lfib668 import LFib668
from .lfib1340 import LFib1340
from .mrgrand287 import MRGRand287
from .mrgrand1457 import MRGRand1457
from .mrgrand49507 import MRGRand49507
|
<commit_before><commit_msg>Package init module now added.<commit_after>"""
This file is part of library PyRandLib.
It is provided under MIT License.
Please see files README.md and LICENSE.
Copyright (c) 2017 Philippe Schmouker
"""
from .baselcg import BaseLCG
from .baselfib64 import BaseLFib64
from .basemrg import BaseMRG
from .baserandom import BaseRandom
from .fastrand32 import FastRand32
from .fastrand63 import FastRand63
from .lfib78 import LFib78
from .lfib116 import LFib116
from .lfib668 import LFib668
from .lfib1340 import LFib1340
from .mrgrand287 import MRGRand287
from .mrgrand1457 import MRGRand1457
from .mrgrand49507 import MRGRand49507
|
|
5ddbb8dd77c4be14cc459640324d449850984d3b
|
zou/app/utils/api.py
|
zou/app/utils/api.py
|
from flask_restful import Api, output_json
def configure_api_from_blueprint(blueprint, route_tuples):
api = Api(blueprint, catch_all_404s=True)
api.representations = {
'application/json; charset=utf-8': output_json,
'application/json': output_json,
}
for route_tuple in route_tuples:
(path, resource) = route_tuple
api.add_resource(resource, path)
return api
|
Add utils to manage blueprints
|
Add utils to manage blueprints
|
Python
|
agpl-3.0
|
cgwire/zou
|
Add utils to manage blueprints
|
from flask_restful import Api, output_json
def configure_api_from_blueprint(blueprint, route_tuples):
api = Api(blueprint, catch_all_404s=True)
api.representations = {
'application/json; charset=utf-8': output_json,
'application/json': output_json,
}
for route_tuple in route_tuples:
(path, resource) = route_tuple
api.add_resource(resource, path)
return api
|
<commit_before><commit_msg>Add utils to manage blueprints<commit_after>
|
from flask_restful import Api, output_json
def configure_api_from_blueprint(blueprint, route_tuples):
api = Api(blueprint, catch_all_404s=True)
api.representations = {
'application/json; charset=utf-8': output_json,
'application/json': output_json,
}
for route_tuple in route_tuples:
(path, resource) = route_tuple
api.add_resource(resource, path)
return api
|
Add utils to manage blueprintsfrom flask_restful import Api, output_json
def configure_api_from_blueprint(blueprint, route_tuples):
api = Api(blueprint, catch_all_404s=True)
api.representations = {
'application/json; charset=utf-8': output_json,
'application/json': output_json,
}
for route_tuple in route_tuples:
(path, resource) = route_tuple
api.add_resource(resource, path)
return api
|
<commit_before><commit_msg>Add utils to manage blueprints<commit_after>from flask_restful import Api, output_json
def configure_api_from_blueprint(blueprint, route_tuples):
api = Api(blueprint, catch_all_404s=True)
api.representations = {
'application/json; charset=utf-8': output_json,
'application/json': output_json,
}
for route_tuple in route_tuples:
(path, resource) = route_tuple
api.add_resource(resource, path)
return api
|
|
cb1fe63325ec06d2c0ec94ac605e32d48ea8c8db
|
scripts/estad_semana.py
|
scripts/estad_semana.py
|
# -*- coding:utf-8 -*-
import valoratweets
import mongoengine
from tweemanager.tweetdocument import TweetDocument
import pprint
import re
import datetime as dt
from bson import json_util as json
mongoengine.connect(host="mongodb://192.168.80.221:27017/tweets")
cosas = TweetDocument._get_collection().aggregate(
[
{ '$project' : { "source": 1, "valoration": 1, '_id': 0, 'week': { '$week': "$created_at" } } },
{ '$group' : { '_id': "$week",
"total_positivos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","positivo"]},1,0]}},
"total_negativos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","negativo"]},1,0]}},
"total": {"$sum": 1},
}
},
{ '$project' : { "semana": "$_id", "positivos": "$total_positivos", "negativos":"$total_negativos", "total":"$total","por_valorar":{'$subtract':["$total",{'$add':["$total_positivos", "$total_negativos"]}]} } },
{ '$sort': {"semana": 1}}
]
)
outfile = open('semanas.csv', 'w')
outfile.write('semana\ttotal\tpositivos\tnegativos\tsin_valorar\n')
for valor in cosas:
print(json.dumps(valor))
semana = valor['semana']
positivos = valor['positivos']
negativos = valor['negativos']
sin_valorar = valor['por_valorar']
total = valor['total']
outfile.write('%s\t%s\t%s\t%s\t%s\n' % (semana,
total,
positivos,
negativos,
sin_valorar))
outfile.close()
|
Add script for counting valorated tweets per week
|
Add script for counting valorated tweets per week
|
Python
|
agpl-3.0
|
nfqsolutions/tweemanager
|
Add script for counting valorated tweets per week
|
# -*- coding:utf-8 -*-
import valoratweets
import mongoengine
from tweemanager.tweetdocument import TweetDocument
import pprint
import re
import datetime as dt
from bson import json_util as json
mongoengine.connect(host="mongodb://192.168.80.221:27017/tweets")
cosas = TweetDocument._get_collection().aggregate(
[
{ '$project' : { "source": 1, "valoration": 1, '_id': 0, 'week': { '$week': "$created_at" } } },
{ '$group' : { '_id': "$week",
"total_positivos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","positivo"]},1,0]}},
"total_negativos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","negativo"]},1,0]}},
"total": {"$sum": 1},
}
},
{ '$project' : { "semana": "$_id", "positivos": "$total_positivos", "negativos":"$total_negativos", "total":"$total","por_valorar":{'$subtract':["$total",{'$add':["$total_positivos", "$total_negativos"]}]} } },
{ '$sort': {"semana": 1}}
]
)
outfile = open('semanas.csv', 'w')
outfile.write('semana\ttotal\tpositivos\tnegativos\tsin_valorar\n')
for valor in cosas:
print(json.dumps(valor))
semana = valor['semana']
positivos = valor['positivos']
negativos = valor['negativos']
sin_valorar = valor['por_valorar']
total = valor['total']
outfile.write('%s\t%s\t%s\t%s\t%s\n' % (semana,
total,
positivos,
negativos,
sin_valorar))
outfile.close()
|
<commit_before><commit_msg>Add script for counting valorated tweets per week<commit_after>
|
# -*- coding:utf-8 -*-
import valoratweets
import mongoengine
from tweemanager.tweetdocument import TweetDocument
import pprint
import re
import datetime as dt
from bson import json_util as json
mongoengine.connect(host="mongodb://192.168.80.221:27017/tweets")
cosas = TweetDocument._get_collection().aggregate(
[
{ '$project' : { "source": 1, "valoration": 1, '_id': 0, 'week': { '$week': "$created_at" } } },
{ '$group' : { '_id': "$week",
"total_positivos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","positivo"]},1,0]}},
"total_negativos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","negativo"]},1,0]}},
"total": {"$sum": 1},
}
},
{ '$project' : { "semana": "$_id", "positivos": "$total_positivos", "negativos":"$total_negativos", "total":"$total","por_valorar":{'$subtract':["$total",{'$add':["$total_positivos", "$total_negativos"]}]} } },
{ '$sort': {"semana": 1}}
]
)
outfile = open('semanas.csv', 'w')
outfile.write('semana\ttotal\tpositivos\tnegativos\tsin_valorar\n')
for valor in cosas:
print(json.dumps(valor))
semana = valor['semana']
positivos = valor['positivos']
negativos = valor['negativos']
sin_valorar = valor['por_valorar']
total = valor['total']
outfile.write('%s\t%s\t%s\t%s\t%s\n' % (semana,
total,
positivos,
negativos,
sin_valorar))
outfile.close()
|
Add script for counting valorated tweets per week# -*- coding:utf-8 -*-
import valoratweets
import mongoengine
from tweemanager.tweetdocument import TweetDocument
import pprint
import re
import datetime as dt
from bson import json_util as json
mongoengine.connect(host="mongodb://192.168.80.221:27017/tweets")
cosas = TweetDocument._get_collection().aggregate(
[
{ '$project' : { "source": 1, "valoration": 1, '_id': 0, 'week': { '$week': "$created_at" } } },
{ '$group' : { '_id': "$week",
"total_positivos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","positivo"]},1,0]}},
"total_negativos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","negativo"]},1,0]}},
"total": {"$sum": 1},
}
},
{ '$project' : { "semana": "$_id", "positivos": "$total_positivos", "negativos":"$total_negativos", "total":"$total","por_valorar":{'$subtract':["$total",{'$add':["$total_positivos", "$total_negativos"]}]} } },
{ '$sort': {"semana": 1}}
]
)
outfile = open('semanas.csv', 'w')
outfile.write('semana\ttotal\tpositivos\tnegativos\tsin_valorar\n')
for valor in cosas:
print(json.dumps(valor))
semana = valor['semana']
positivos = valor['positivos']
negativos = valor['negativos']
sin_valorar = valor['por_valorar']
total = valor['total']
outfile.write('%s\t%s\t%s\t%s\t%s\n' % (semana,
total,
positivos,
negativos,
sin_valorar))
outfile.close()
|
<commit_before><commit_msg>Add script for counting valorated tweets per week<commit_after># -*- coding:utf-8 -*-
import valoratweets
import mongoengine
from tweemanager.tweetdocument import TweetDocument
import pprint
import re
import datetime as dt
from bson import json_util as json
mongoengine.connect(host="mongodb://192.168.80.221:27017/tweets")
cosas = TweetDocument._get_collection().aggregate(
[
{ '$project' : { "source": 1, "valoration": 1, '_id': 0, 'week': { '$week': "$created_at" } } },
{ '$group' : { '_id': "$week",
"total_positivos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","positivo"]},1,0]}},
"total_negativos": {"$sum": {"$cond":[{"$eq":["$valoration.algoritmo_1.clasificado","negativo"]},1,0]}},
"total": {"$sum": 1},
}
},
{ '$project' : { "semana": "$_id", "positivos": "$total_positivos", "negativos":"$total_negativos", "total":"$total","por_valorar":{'$subtract':["$total",{'$add':["$total_positivos", "$total_negativos"]}]} } },
{ '$sort': {"semana": 1}}
]
)
outfile = open('semanas.csv', 'w')
outfile.write('semana\ttotal\tpositivos\tnegativos\tsin_valorar\n')
for valor in cosas:
print(json.dumps(valor))
semana = valor['semana']
positivos = valor['positivos']
negativos = valor['negativos']
sin_valorar = valor['por_valorar']
total = valor['total']
outfile.write('%s\t%s\t%s\t%s\t%s\n' % (semana,
total,
positivos,
negativos,
sin_valorar))
outfile.close()
|
|
99e52824426195d1f23a72c04b900d2aed2946c9
|
home/migrations/0019_auto_20190319_1438.py
|
home/migrations/0019_auto_20190319_1438.py
|
# Generated by Django 2.0.13 on 2019-03-19 14:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('home', '0018_auto_20171005_0247'),
]
operations = [
migrations.AlterField(
model_name='formfield',
name='field_type',
field=models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type'),
),
]
|
Make migrations for Wagtail 2.3
|
Make migrations for Wagtail 2.3
This commit is the result of running
```
docker-compose exec django /bin/bash -c "./manage.py makemigrations"
```
|
Python
|
agpl-3.0
|
freedomofpress/securethenews,freedomofpress/securethenews,freedomofpress/securethenews,freedomofpress/securethenews
|
Make migrations for Wagtail 2.3
This commit is the result of running
```
docker-compose exec django /bin/bash -c "./manage.py makemigrations"
```
|
# Generated by Django 2.0.13 on 2019-03-19 14:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('home', '0018_auto_20171005_0247'),
]
operations = [
migrations.AlterField(
model_name='formfield',
name='field_type',
field=models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type'),
),
]
|
<commit_before><commit_msg>Make migrations for Wagtail 2.3
This commit is the result of running
```
docker-compose exec django /bin/bash -c "./manage.py makemigrations"
```<commit_after>
|
# Generated by Django 2.0.13 on 2019-03-19 14:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('home', '0018_auto_20171005_0247'),
]
operations = [
migrations.AlterField(
model_name='formfield',
name='field_type',
field=models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type'),
),
]
|
Make migrations for Wagtail 2.3
This commit is the result of running
```
docker-compose exec django /bin/bash -c "./manage.py makemigrations"
```# Generated by Django 2.0.13 on 2019-03-19 14:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('home', '0018_auto_20171005_0247'),
]
operations = [
migrations.AlterField(
model_name='formfield',
name='field_type',
field=models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type'),
),
]
|
<commit_before><commit_msg>Make migrations for Wagtail 2.3
This commit is the result of running
```
docker-compose exec django /bin/bash -c "./manage.py makemigrations"
```<commit_after># Generated by Django 2.0.13 on 2019-03-19 14:38
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('home', '0018_auto_20171005_0247'),
]
operations = [
migrations.AlterField(
model_name='formfield',
name='field_type',
field=models.CharField(choices=[('singleline', 'Single line text'), ('multiline', 'Multi-line text'), ('email', 'Email'), ('number', 'Number'), ('url', 'URL'), ('checkbox', 'Checkbox'), ('checkboxes', 'Checkboxes'), ('dropdown', 'Drop down'), ('multiselect', 'Multiple select'), ('radio', 'Radio buttons'), ('date', 'Date'), ('datetime', 'Date/time'), ('hidden', 'Hidden field')], max_length=16, verbose_name='field type'),
),
]
|
|
e0651992608e08204b3812b3e8a30f235157c780
|
gnome_terminal_tabs.py
|
gnome_terminal_tabs.py
|
#!/usr/bin/env python
"""
Run a command on multiple servers via SSH, each in a GNOME Terminal tab.
See http://exyr.org/2011/gnome-terminal-tabs/
"""
import subprocess
command = 'sudo aptitude update && sudo aptitude safe-upgrade'
terminal = ['gnome-terminal']
for host in ('cartonbox', 'hako'):
terminal.extend(['--tab', '-e', '''
bash -c '
echo "%(host)s$ %(command)s"
ssh -t %(host)s "%(command)s"
read
'
''' % locals()])
subprocess.call(terminal)
|
Add GNOME Terminal tabs script.
|
Add GNOME Terminal tabs script.
|
Python
|
bsd-3-clause
|
SimonSapin/snippets,SimonSapin/snippets
|
Add GNOME Terminal tabs script.
|
#!/usr/bin/env python
"""
Run a command on multiple servers via SSH, each in a GNOME Terminal tab.
See http://exyr.org/2011/gnome-terminal-tabs/
"""
import subprocess
command = 'sudo aptitude update && sudo aptitude safe-upgrade'
terminal = ['gnome-terminal']
for host in ('cartonbox', 'hako'):
terminal.extend(['--tab', '-e', '''
bash -c '
echo "%(host)s$ %(command)s"
ssh -t %(host)s "%(command)s"
read
'
''' % locals()])
subprocess.call(terminal)
|
<commit_before><commit_msg>Add GNOME Terminal tabs script.<commit_after>
|
#!/usr/bin/env python
"""
Run a command on multiple servers via SSH, each in a GNOME Terminal tab.
See http://exyr.org/2011/gnome-terminal-tabs/
"""
import subprocess
command = 'sudo aptitude update && sudo aptitude safe-upgrade'
terminal = ['gnome-terminal']
for host in ('cartonbox', 'hako'):
terminal.extend(['--tab', '-e', '''
bash -c '
echo "%(host)s$ %(command)s"
ssh -t %(host)s "%(command)s"
read
'
''' % locals()])
subprocess.call(terminal)
|
Add GNOME Terminal tabs script.#!/usr/bin/env python
"""
Run a command on multiple servers via SSH, each in a GNOME Terminal tab.
See http://exyr.org/2011/gnome-terminal-tabs/
"""
import subprocess
command = 'sudo aptitude update && sudo aptitude safe-upgrade'
terminal = ['gnome-terminal']
for host in ('cartonbox', 'hako'):
terminal.extend(['--tab', '-e', '''
bash -c '
echo "%(host)s$ %(command)s"
ssh -t %(host)s "%(command)s"
read
'
''' % locals()])
subprocess.call(terminal)
|
<commit_before><commit_msg>Add GNOME Terminal tabs script.<commit_after>#!/usr/bin/env python
"""
Run a command on multiple servers via SSH, each in a GNOME Terminal tab.
See http://exyr.org/2011/gnome-terminal-tabs/
"""
import subprocess
command = 'sudo aptitude update && sudo aptitude safe-upgrade'
terminal = ['gnome-terminal']
for host in ('cartonbox', 'hako'):
terminal.extend(['--tab', '-e', '''
bash -c '
echo "%(host)s$ %(command)s"
ssh -t %(host)s "%(command)s"
read
'
''' % locals()])
subprocess.call(terminal)
|