| commit | old_file | new_file | old_contents | new_contents | subject | message | lang | license | repos | prompt | response | prompt_tagged | response_tagged | text | text_tagged |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dd1c20c26dc959ec68180eadd324e1b2edfa4aef
|
mpinterfaces/tests/main_recipes/test_main_interface.py
|
mpinterfaces/tests/main_recipes/test_main_interface.py
|
from subprocess import call
# This test is simply calling a shell script, which calls a python main recipe
# (Originally a function used for ad-hoc manual testing) and verifies it behaved
# correctly. The reason for using a python file to call a shell script is so
# automatic python testing tools, like nose2, will automatically run it.
def test_main_interface():
assert(call("./test_interface_main.sh") == 0)
if __name__ == '__main__':
test_main_interface()
|
Use Python to call our shell script test.
|
Use Python to call our shell script test.
This allows automated tools like nose2 to run it.
|
Python
|
mit
|
joshgabriel/MPInterfaces,henniggroup/MPInterfaces,joshgabriel/MPInterfaces,henniggroup/MPInterfaces
|
Use Python to call our shell script test.
This allows automated tools like nose2 to run it.
|
from subprocess import call
# This test is simply calling a shell script, which calls a python main recipe
# (Originally a function used for ad-hoc manual testing) and verifies it behaved
# correctly. The reason for using a python file to call a shell script is so
# automatic python testing tools, like nose2, will automatically run it.
def test_main_interface():
assert(call("./test_interface_main.sh") == 0)
if __name__ == '__main__':
test_main_interface()
|
<commit_before><commit_msg>Use Python to call our shell script test.
This allows automated tools like nose2 to run it.<commit_after>
|
from subprocess import call
# This test is simply calling a shell script, which calls a python main recipe
# (Originally a function used for ad-hoc manual testing) and verifies it behaved
# correctly. The reason for using a python file to call a shell script is so
# automatic python testing tools, like nose2, will automatically run it.
def test_main_interface():
assert(call("./test_interface_main.sh") == 0)
if __name__ == '__main__':
test_main_interface()
|
Use Python to call our shell script test.
This allows automated tools like nose2 to run it.from subprocess import call
# This test is simply calling a shell script, which calls a python main recipe
# (Originally a function used for ad-hoc manual testing) and verifies it behaved
# correctly. The reason for using a python file to call a shell script is so
# automatic python testing tools, like nose2, will automatically run it.
def test_main_interface():
assert(call("./test_interface_main.sh") == 0)
if __name__ == '__main__':
test_main_interface()
|
<commit_before><commit_msg>Use Python to call our shell script test.
This allows automated tools like nose2 to run it.<commit_after>from subprocess import call
# This test is simply calling a shell script, which calls a python main recipe
# (Originally a function used for ad-hoc manual testing) and verifies it behaved
# correctly. The reason for using a python file to call a shell script is so
# automatic python testing tools, like nose2, will automatically run it.
def test_main_interface():
assert(call("./test_interface_main.sh") == 0)
if __name__ == '__main__':
test_main_interface()
|
|
bfcc3a90bbd90c6a9a3beed4c94be99f68d51be8
|
modules/bilingual_generator/bilingual-generator.py
|
modules/bilingual_generator/bilingual-generator.py
|
import os
import codecs
import json
from gensim.models import Word2Vec
import random
from modules.preprocessor import yandex_api as yandex
def create_bilingual_dictionary(clusters_file_path, sample_size, model):
bilingual_dictionary = []
with codecs.open(clusters_file_path, 'r', encoding='utf-8') as file:
clusters = json.load(file)
for cluster in clusters:
no_of_words = 0
print "Cluster"
if len(cluster) >= sample_size:
selected_words = set()
count = 0
while no_of_words < sample_size:
word = random.choice(cluster)
print "Random Word", word
if count == len(cluster):
raise ValueError('No Valid words obtained')
if word not in selected_words:
count += 1
tr = yandex.translate(word)['text'][0]
try:
model[tr]
selected_words.add(word)
bilingual_dictionary.append((word, tr))
no_of_words += 1
except KeyError:
print "Not Valid"
else:
print cluster
raise ValueError("Sample Size too small")
return bilingual_dictionary
if __name__ == '__main__':
model = Word2Vec.load(
os.path.join('data', 'models', 't-vex-hindi-model')
)
cluster_groups=os.path.join(
'data', 'vectors', 'english_clusters.json'
)
bilingual_dictionary = create_bilingual_dictionary(cluster_groups,1,model)
bilingual_dict_path=os.path.join(
'data', 'bilingual_dictionary', 'english_hindi_new.txt'
)
file = codecs.open(bilingual_dict_path, 'w', encoding='utf-8')
for tup in bilingual_dictionary:
file.write(tup[0]+ " " + tup[1] + "\n")
|
Add script to generate a bilingual dictionary from vector list.
|
BilingualGenerator: Add script to generate a bilingual dictionary from vector list.
Select a specified number of words from each cluster, translate the words, and check if they exist in language model.
Input hard coded for clusters file path, model path and sample size.
Output file path hard coded.
Pair programmed with Prateeksha.
|
Python
|
mit
|
KshitijKarthick/tvecs,KshitijKarthick/tvecs,KshitijKarthick/tvecs
|
BilingualGenerator: Add script to generate a bilingual dictionary from vector list.
Select a specified number of words from each cluster, translate the words, and check if they exist in language model.
Input hard coded for clusters file path, model path and sample size.
Output file path hard coded.
Pair programmed with Prateeksha.
|
import os
import codecs
import json
from gensim.models import Word2Vec
import random
from modules.preprocessor import yandex_api as yandex
def create_bilingual_dictionary(clusters_file_path, sample_size, model):
bilingual_dictionary = []
with codecs.open(clusters_file_path, 'r', encoding='utf-8') as file:
clusters = json.load(file)
for cluster in clusters:
no_of_words = 0
print "Cluster"
if len(cluster) >= sample_size:
selected_words = set()
count = 0
while no_of_words < sample_size:
word = random.choice(cluster)
print "Random Word", word
if count == len(cluster):
raise ValueError('No Valid words obtained')
if word not in selected_words:
count += 1
tr = yandex.translate(word)['text'][0]
try:
model[tr]
selected_words.add(word)
bilingual_dictionary.append((word, tr))
no_of_words += 1
except KeyError:
print "Not Valid"
else:
print cluster
raise ValueError("Sample Size too small")
return bilingual_dictionary
if __name__ == '__main__':
model = Word2Vec.load(
os.path.join('data', 'models', 't-vex-hindi-model')
)
cluster_groups=os.path.join(
'data', 'vectors', 'english_clusters.json'
)
bilingual_dictionary = create_bilingual_dictionary(cluster_groups,1,model)
bilingual_dict_path=os.path.join(
'data', 'bilingual_dictionary', 'english_hindi_new.txt'
)
file = codecs.open(bilingual_dict_path, 'w', encoding='utf-8')
for tup in bilingual_dictionary:
file.write(tup[0]+ " " + tup[1] + "\n")
|
<commit_before><commit_msg>BilingualGenerator: Add script to generate a bilingual dictionary from vector list.
Select a specified number of words from each cluster, translate the words, and check if they exist in language model.
Input hard coded for clusters file path, model path and sample size.
Output file path hard coded.
Pair programmed with Prateeksha.<commit_after>
|
import os
import codecs
import json
from gensim.models import Word2Vec
import random
from modules.preprocessor import yandex_api as yandex
def create_bilingual_dictionary(clusters_file_path, sample_size, model):
bilingual_dictionary = []
with codecs.open(clusters_file_path, 'r', encoding='utf-8') as file:
clusters = json.load(file)
for cluster in clusters:
no_of_words = 0
print "Cluster"
if len(cluster) >= sample_size:
selected_words = set()
count = 0
while no_of_words < sample_size:
word = random.choice(cluster)
print "Random Word", word
if count == len(cluster):
raise ValueError('No Valid words obtained')
if word not in selected_words:
count += 1
tr = yandex.translate(word)['text'][0]
try:
model[tr]
selected_words.add(word)
bilingual_dictionary.append((word, tr))
no_of_words += 1
except KeyError:
print "Not Valid"
else:
print cluster
raise ValueError("Sample Size too small")
return bilingual_dictionary
if __name__ == '__main__':
model = Word2Vec.load(
os.path.join('data', 'models', 't-vex-hindi-model')
)
cluster_groups=os.path.join(
'data', 'vectors', 'english_clusters.json'
)
bilingual_dictionary = create_bilingual_dictionary(cluster_groups,1,model)
bilingual_dict_path=os.path.join(
'data', 'bilingual_dictionary', 'english_hindi_new.txt'
)
file = codecs.open(bilingual_dict_path, 'w', encoding='utf-8')
for tup in bilingual_dictionary:
file.write(tup[0]+ " " + tup[1] + "\n")
|
BilingualGenerator: Add script to generate a bilingual dictionary from vector list.
Select a specified number of words from each cluster, translate the words, and check if they exist in language model.
Input hard coded for clusters file path, model path and sample size.
Output file path hard coded.
Pair programmed with Prateeksha.import os
import codecs
import json
from gensim.models import Word2Vec
import random
from modules.preprocessor import yandex_api as yandex
def create_bilingual_dictionary(clusters_file_path, sample_size, model):
bilingual_dictionary = []
with codecs.open(clusters_file_path, 'r', encoding='utf-8') as file:
clusters = json.load(file)
for cluster in clusters:
no_of_words = 0
print "Cluster"
if len(cluster) >= sample_size:
selected_words = set()
count = 0
while no_of_words < sample_size:
word = random.choice(cluster)
print "Random Word", word
if count == len(cluster):
raise ValueError('No Valid words obtained')
if word not in selected_words:
count += 1
tr = yandex.translate(word)['text'][0]
try:
model[tr]
selected_words.add(word)
bilingual_dictionary.append((word, tr))
no_of_words += 1
except KeyError:
print "Not Valid"
else:
print cluster
raise ValueError("Sample Size too small")
return bilingual_dictionary
if __name__ == '__main__':
model = Word2Vec.load(
os.path.join('data', 'models', 't-vex-hindi-model')
)
cluster_groups=os.path.join(
'data', 'vectors', 'english_clusters.json'
)
bilingual_dictionary = create_bilingual_dictionary(cluster_groups,1,model)
bilingual_dict_path=os.path.join(
'data', 'bilingual_dictionary', 'english_hindi_new.txt'
)
file = codecs.open(bilingual_dict_path, 'w', encoding='utf-8')
for tup in bilingual_dictionary:
file.write(tup[0]+ " " + tup[1] + "\n")
|
<commit_before><commit_msg>BilingualGenerator: Add script to generate a bilingual dictionary from vector list.
Select a specified number of words from each cluster, translate the words, and check if they exist in language model.
Input hard coded for clusters file path, model path and sample size.
Output file path hard coded.
Pair programmed with Prateeksha.<commit_after>import os
import codecs
import json
from gensim.models import Word2Vec
import random
from modules.preprocessor import yandex_api as yandex
def create_bilingual_dictionary(clusters_file_path, sample_size, model):
bilingual_dictionary = []
with codecs.open(clusters_file_path, 'r', encoding='utf-8') as file:
clusters = json.load(file)
for cluster in clusters:
no_of_words = 0
print "Cluster"
if len(cluster) >= sample_size:
selected_words = set()
count = 0
while no_of_words < sample_size:
word = random.choice(cluster)
print "Random Word", word
if count == len(cluster):
raise ValueError('No Valid words obtained')
if word not in selected_words:
count += 1
tr = yandex.translate(word)['text'][0]
try:
model[tr]
selected_words.add(word)
bilingual_dictionary.append((word, tr))
no_of_words += 1
except KeyError:
print "Not Valid"
else:
print cluster
raise ValueError("Sample Size too small")
return bilingual_dictionary
if __name__ == '__main__':
model = Word2Vec.load(
os.path.join('data', 'models', 't-vex-hindi-model')
)
cluster_groups=os.path.join(
'data', 'vectors', 'english_clusters.json'
)
bilingual_dictionary = create_bilingual_dictionary(cluster_groups,1,model)
bilingual_dict_path=os.path.join(
'data', 'bilingual_dictionary', 'english_hindi_new.txt'
)
file = codecs.open(bilingual_dict_path, 'w', encoding='utf-8')
for tup in bilingual_dictionary:
file.write(tup[0]+ " " + tup[1] + "\n")
|
|
087e72f1d44d1b4f1aca72414c45aede5a2b6d25
|
bluebottle/payouts/migrations/0014_auto_20181211_0938.py
|
bluebottle/payouts/migrations/0014_auto_20181211_0938.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-12-11 08:38
from __future__ import unicode_literals
from django.db import migrations
from bluebottle.utils.utils import update_group_permissions
def add_group_permissions(apps, schema_editor):
group_perms = {
'Staff': {
'perms': (
'add_stripepayoutaccount', 'change_stripepayoutaccount',
'delete_stripepayoutaccount',
'add_plainpayoutaccount', 'change_plainpayoutaccount',
'delete_plainpayoutaccount',
)
},
}
update_group_permissions('payouts', group_perms, apps)
class Migration(migrations.Migration):
dependencies = [
('payouts', '0013_auto_20181207_1340'),
]
operations = [
migrations.RunPython(add_group_permissions)
]
|
Add migration that correctly sets payout account permissions for staff.
|
Add migration that correctly sets payout account permissions for staff.
BB-13581 #resolve
|
Python
|
bsd-3-clause
|
onepercentclub/bluebottle,onepercentclub/bluebottle,onepercentclub/bluebottle,onepercentclub/bluebottle,onepercentclub/bluebottle
|
Add migration that correctly sets payout account permissions for staff.
BB-13581 #resolve
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-12-11 08:38
from __future__ import unicode_literals
from django.db import migrations
from bluebottle.utils.utils import update_group_permissions
def add_group_permissions(apps, schema_editor):
group_perms = {
'Staff': {
'perms': (
'add_stripepayoutaccount', 'change_stripepayoutaccount',
'delete_stripepayoutaccount',
'add_plainpayoutaccount', 'change_plainpayoutaccount',
'delete_plainpayoutaccount',
)
},
}
update_group_permissions('payouts', group_perms, apps)
class Migration(migrations.Migration):
dependencies = [
('payouts', '0013_auto_20181207_1340'),
]
operations = [
migrations.RunPython(add_group_permissions)
]
|
<commit_before><commit_msg>Add migration that correctly sets payout account permissions for staff.
BB-13581 #resolve<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-12-11 08:38
from __future__ import unicode_literals
from django.db import migrations
from bluebottle.utils.utils import update_group_permissions
def add_group_permissions(apps, schema_editor):
group_perms = {
'Staff': {
'perms': (
'add_stripepayoutaccount', 'change_stripepayoutaccount',
'delete_stripepayoutaccount',
'add_plainpayoutaccount', 'change_plainpayoutaccount',
'delete_plainpayoutaccount',
)
},
}
update_group_permissions('payouts', group_perms, apps)
class Migration(migrations.Migration):
dependencies = [
('payouts', '0013_auto_20181207_1340'),
]
operations = [
migrations.RunPython(add_group_permissions)
]
|
Add migration that correctly sets payout account permissions for staff.
BB-13581 #resolve# -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-12-11 08:38
from __future__ import unicode_literals
from django.db import migrations
from bluebottle.utils.utils import update_group_permissions
def add_group_permissions(apps, schema_editor):
group_perms = {
'Staff': {
'perms': (
'add_stripepayoutaccount', 'change_stripepayoutaccount',
'delete_stripepayoutaccount',
'add_plainpayoutaccount', 'change_plainpayoutaccount',
'delete_plainpayoutaccount',
)
},
}
update_group_permissions('payouts', group_perms, apps)
class Migration(migrations.Migration):
dependencies = [
('payouts', '0013_auto_20181207_1340'),
]
operations = [
migrations.RunPython(add_group_permissions)
]
|
<commit_before><commit_msg>Add migration that correctly sets payout account permissions for staff.
BB-13581 #resolve<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.10.8 on 2018-12-11 08:38
from __future__ import unicode_literals
from django.db import migrations
from bluebottle.utils.utils import update_group_permissions
def add_group_permissions(apps, schema_editor):
group_perms = {
'Staff': {
'perms': (
'add_stripepayoutaccount', 'change_stripepayoutaccount',
'delete_stripepayoutaccount',
'add_plainpayoutaccount', 'change_plainpayoutaccount',
'delete_plainpayoutaccount',
)
},
}
update_group_permissions('payouts', group_perms, apps)
class Migration(migrations.Migration):
dependencies = [
('payouts', '0013_auto_20181207_1340'),
]
operations = [
migrations.RunPython(add_group_permissions)
]
|
|
29bc92fbd129a071992a021752fbe9801f3847e4
|
python/pygtk/python_gtk3_pygobject/tree_view.py
|
python/pygtk/python_gtk3_pygobject/tree_view.py
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
"""
This is a simple Python GTK+3 TreeView snippet.
See: http://python-gtk-3-tutorial.readthedocs.org/en/latest/treeview.html
"""
from gi.repository import Gtk as gtk
# Countries, population (as of 2015) and continent.
DATA_LIST = [("China", 1370130000, "Asia"),
("India", 1271980000, "Asia"),
("United States", 321107000, "America"),
("Indonesia", 255461700, "Asia"),
("Brazil", 204388000, "America"),
("Pakistan", 189936000, "Asia"),
("Nigeria", 183523000, "Africa"),
("Bangladesh", 158425000, "Asia"),
("Russia", 146267288, "Eurasia"),
("Japan", 126880000, "Asia")]
def main():
window = gtk.Window()
window.set_default_size(300, 450)
window.set_border_width(18)
# Creating the ListStore model
liststore = gtk.ListStore(str, int, str)
for item in DATA_LIST:
liststore.append(list(item))
# Creating the treeview, making it use the filter as a model, and adding the columns
treeview = gtk.TreeView(liststore)
for column_index, column_title in enumerate(["Country", "Population", "Continent"]):
renderer = gtk.CellRendererText()
column = gtk.TreeViewColumn(column_title, renderer, text=column_index)
treeview.append_column(column)
# Scrolled window
scrolled_window = gtk.ScrolledWindow()
scrolled_window.set_border_width(0)
scrolled_window.set_shadow_type(gtk.ShadowType.IN) # should be gtk.ShadowType.IN, gtk.ShadowType.OUT, gtk.ShadowType.ETCHED_IN or gtk.ShadowType.ETCHED_OUT
scrolled_window.set_policy(gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS) # should be gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS or gtk.PolicyType.NEVER
scrolled_window.add(treeview)
window.add(scrolled_window)
window.connect("delete-event", gtk.main_quit) # ask to quit the application when the close button is clicked
window.show_all() # display the window
gtk.main() # GTK+ main loop
if __name__ == '__main__':
main()
|
Add a snippet (Python GTK+3).
|
Add a snippet (Python GTK+3).
|
Python
|
mit
|
jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets,jeremiedecock/snippets
|
Add a snippet (Python GTK+3).
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
"""
This is a simple Python GTK+3 TreeView snippet.
See: http://python-gtk-3-tutorial.readthedocs.org/en/latest/treeview.html
"""
from gi.repository import Gtk as gtk
# Countries, population (as of 2015) and continent.
DATA_LIST = [("China", 1370130000, "Asia"),
("India", 1271980000, "Asia"),
("United States", 321107000, "America"),
("Indonesia", 255461700, "Asia"),
("Brazil", 204388000, "America"),
("Pakistan", 189936000, "Asia"),
("Nigeria", 183523000, "Africa"),
("Bangladesh", 158425000, "Asia"),
("Russia", 146267288, "Eurasia"),
("Japan", 126880000, "Asia")]
def main():
window = gtk.Window()
window.set_default_size(300, 450)
window.set_border_width(18)
# Creating the ListStore model
liststore = gtk.ListStore(str, int, str)
for item in DATA_LIST:
liststore.append(list(item))
# Creating the treeview, making it use the filter as a model, and adding the columns
treeview = gtk.TreeView(liststore)
for column_index, column_title in enumerate(["Country", "Population", "Continent"]):
renderer = gtk.CellRendererText()
column = gtk.TreeViewColumn(column_title, renderer, text=column_index)
treeview.append_column(column)
# Scrolled window
scrolled_window = gtk.ScrolledWindow()
scrolled_window.set_border_width(0)
scrolled_window.set_shadow_type(gtk.ShadowType.IN) # should be gtk.ShadowType.IN, gtk.ShadowType.OUT, gtk.ShadowType.ETCHED_IN or gtk.ShadowType.ETCHED_OUT
scrolled_window.set_policy(gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS) # should be gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS or gtk.PolicyType.NEVER
scrolled_window.add(treeview)
window.add(scrolled_window)
window.connect("delete-event", gtk.main_quit) # ask to quit the application when the close button is clicked
window.show_all() # display the window
gtk.main() # GTK+ main loop
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add a snippet (Python GTK+3).<commit_after>
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
"""
This is a simple Python GTK+3 TreeView snippet.
See: http://python-gtk-3-tutorial.readthedocs.org/en/latest/treeview.html
"""
from gi.repository import Gtk as gtk
# Countries, population (as of 2015) and continent.
DATA_LIST = [("China", 1370130000, "Asia"),
("India", 1271980000, "Asia"),
("United States", 321107000, "America"),
("Indonesia", 255461700, "Asia"),
("Brazil", 204388000, "America"),
("Pakistan", 189936000, "Asia"),
("Nigeria", 183523000, "Africa"),
("Bangladesh", 158425000, "Asia"),
("Russia", 146267288, "Eurasia"),
("Japan", 126880000, "Asia")]
def main():
window = gtk.Window()
window.set_default_size(300, 450)
window.set_border_width(18)
# Creating the ListStore model
liststore = gtk.ListStore(str, int, str)
for item in DATA_LIST:
liststore.append(list(item))
# Creating the treeview, making it use the filter as a model, and adding the columns
treeview = gtk.TreeView(liststore)
for column_index, column_title in enumerate(["Country", "Population", "Continent"]):
renderer = gtk.CellRendererText()
column = gtk.TreeViewColumn(column_title, renderer, text=column_index)
treeview.append_column(column)
# Scrolled window
scrolled_window = gtk.ScrolledWindow()
scrolled_window.set_border_width(0)
scrolled_window.set_shadow_type(gtk.ShadowType.IN) # should be gtk.ShadowType.IN, gtk.ShadowType.OUT, gtk.ShadowType.ETCHED_IN or gtk.ShadowType.ETCHED_OUT
scrolled_window.set_policy(gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS) # should be gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS or gtk.PolicyType.NEVER
scrolled_window.add(treeview)
window.add(scrolled_window)
window.connect("delete-event", gtk.main_quit) # ask to quit the application when the close button is clicked
window.show_all() # display the window
gtk.main() # GTK+ main loop
if __name__ == '__main__':
main()
|
Add a snippet (Python GTK+3).#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
"""
This is a simple Python GTK+3 TreeView snippet.
See: http://python-gtk-3-tutorial.readthedocs.org/en/latest/treeview.html
"""
from gi.repository import Gtk as gtk
# Countries, population (as of 2015) and continent.
DATA_LIST = [("China", 1370130000, "Asia"),
("India", 1271980000, "Asia"),
("United States", 321107000, "America"),
("Indonesia", 255461700, "Asia"),
("Brazil", 204388000, "America"),
("Pakistan", 189936000, "Asia"),
("Nigeria", 183523000, "Africa"),
("Bangladesh", 158425000, "Asia"),
("Russia", 146267288, "Eurasia"),
("Japan", 126880000, "Asia")]
def main():
window = gtk.Window()
window.set_default_size(300, 450)
window.set_border_width(18)
# Creating the ListStore model
liststore = gtk.ListStore(str, int, str)
for item in DATA_LIST:
liststore.append(list(item))
# Creating the treeview, making it use the filter as a model, and adding the columns
treeview = gtk.TreeView(liststore)
for column_index, column_title in enumerate(["Country", "Population", "Continent"]):
renderer = gtk.CellRendererText()
column = gtk.TreeViewColumn(column_title, renderer, text=column_index)
treeview.append_column(column)
# Scrolled window
scrolled_window = gtk.ScrolledWindow()
scrolled_window.set_border_width(0)
scrolled_window.set_shadow_type(gtk.ShadowType.IN) # should be gtk.ShadowType.IN, gtk.ShadowType.OUT, gtk.ShadowType.ETCHED_IN or gtk.ShadowType.ETCHED_OUT
scrolled_window.set_policy(gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS) # should be gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS or gtk.PolicyType.NEVER
scrolled_window.add(treeview)
window.add(scrolled_window)
window.connect("delete-event", gtk.main_quit) # ask to quit the application when the close button is clicked
window.show_all() # display the window
gtk.main() # GTK+ main loop
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add a snippet (Python GTK+3).<commit_after>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jérémie DECOCK (http://www.jdhp.org)
"""
This is a simple Python GTK+3 TreeView snippet.
See: http://python-gtk-3-tutorial.readthedocs.org/en/latest/treeview.html
"""
from gi.repository import Gtk as gtk
# Countries, population (as of 2015) and continent.
DATA_LIST = [("China", 1370130000, "Asia"),
("India", 1271980000, "Asia"),
("United States", 321107000, "America"),
("Indonesia", 255461700, "Asia"),
("Brazil", 204388000, "America"),
("Pakistan", 189936000, "Asia"),
("Nigeria", 183523000, "Africa"),
("Bangladesh", 158425000, "Asia"),
("Russia", 146267288, "Eurasia"),
("Japan", 126880000, "Asia")]
def main():
window = gtk.Window()
window.set_default_size(300, 450)
window.set_border_width(18)
# Creating the ListStore model
liststore = gtk.ListStore(str, int, str)
for item in DATA_LIST:
liststore.append(list(item))
# Creating the treeview, making it use the filter as a model, and adding the columns
treeview = gtk.TreeView(liststore)
for column_index, column_title in enumerate(["Country", "Population", "Continent"]):
renderer = gtk.CellRendererText()
column = gtk.TreeViewColumn(column_title, renderer, text=column_index)
treeview.append_column(column)
# Scrolled window
scrolled_window = gtk.ScrolledWindow()
scrolled_window.set_border_width(0)
scrolled_window.set_shadow_type(gtk.ShadowType.IN) # should be gtk.ShadowType.IN, gtk.ShadowType.OUT, gtk.ShadowType.ETCHED_IN or gtk.ShadowType.ETCHED_OUT
scrolled_window.set_policy(gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS) # should be gtk.PolicyType.AUTOMATIC, gtk.PolicyType.ALWAYS or gtk.PolicyType.NEVER
scrolled_window.add(treeview)
window.add(scrolled_window)
window.connect("delete-event", gtk.main_quit) # ask to quit the application when the close button is clicked
window.show_all() # display the window
gtk.main() # GTK+ main loop
if __name__ == '__main__':
main()
|
|
c8eff207b551c66d5a976740ade93487d4a2e040
|
plugins/openstack/nova/nova-hypervisor-metrics.py
|
plugins/openstack/nova/nova-hypervisor-metrics.py
|
#!/usr/bin/env python
from argparse import ArgumentParser
import socket
import time
from novaclient.v3 import Client
DEFAULT_SCHEME = '{}.nova.hypervisors'.format(socket.gethostname())
METRIC_KEYS = (
'current_workload',
'disk_available_least',
'local_gb',
'local_gb_used',
'memory_mb',
'memory_mb_used',
'running_vms',
'vcpus',
'vcpus_used',
)
def output_metric(name, value):
print '{}\t{}\t{}'.format(name, value, int(time.time()))
def main():
parser = ArgumentParser()
parser.add_argument('-u', '--user', default='admin')
parser.add_argument('-p', '--password', default='admin')
parser.add_argument('-t', '--tenant', default='admin')
parser.add_argument('-a', '--auth-url', default='http://localhost:5000/v2.0')
parser.add_argument('-S', '--service-type', default='compute')
parser.add_argument('-H', '--host')
parser.add_argument('-s', '--scheme', default=DEFAULT_SCHEME)
args = parser.parse_args()
client = Client(args.user, args.password, args.tenant, args.auth_url, service_type=args.service_type)
if args.host:
hypervisors = client.hypervisors.search(args.host)
else:
hypervisors = client.hypervisors.list()
for hv in hypervisors:
for key, value in hv.to_dict().iteritems():
if key in METRIC_KEYS:
output_metric('{}.{}.{}'.format(args.scheme, hv.hypervisor_hostname, key), value)
if __name__ == '__main__':
main()
|
Add plugin for Nova hypervisor metrics
|
Add plugin for Nova hypervisor metrics
|
Python
|
mit
|
circleback/sensu-community-plugins,JonathanHuot/sensu-community-plugins,JonathanHuot/sensu-community-plugins,JonathanHuot/sensu-community-plugins,pkaeding/sensu-community-plugins,intoximeters/sensu-community-plugins,tuenti/sensu-community-plugins,Seraf/sensu-community-plugins,zerOnepal/sensu-community-plugins,ideais/sensu-community-plugins,pkaeding/sensu-community-plugins,giorgiosironi/sensu-community-plugins,Squarespace/sensu-community-plugins,circleback/sensu-community-plugins,maoe/sensu-community-plugins,giorgiosironi/sensu-community-plugins,justanshulsharma/sensu-community-plugins,madAndroid/sensu-community-plugins,plasticbrain/sensu-community-plugins,cotocisternas/sensu-community-plugins,justanshulsharma/sensu-community-plugins,madAndroid/sensu-community-plugins,maoe/sensu-community-plugins,estately/sensu-community-plugins,thehyve/sensu-community-plugins,emillion/sensu-community-plugins,gferguson-gd/sensu-community-plugins,klangrud/sensu-community-plugins,alexhjlee/sensu-community-plugins,jbehrends/sensu-community-plugins,lfdesousa/sensu-community-plugins,himyouten/sensu-community-plugins,plasticbrain/sensu-community-plugins,cread/sensu-community-plugins,alertlogic/sensu-community-plugins,PerfectMemory/sensu-community-plugins,cotocisternas/sensu-community-plugins,ideais/sensu-community-plugins,klangrud/sensu-community-plugins,cread/sensu-community-plugins,shnmorimoto/sensu-community-plugins,emillion/sensu-community-plugins,klangrud/sensu-community-plugins,Squarespace/sensu-community-plugins,rikaard-groupby/sensu-community-plugins,julienba/sensu-community-plugins,leedm777/sensu-community-plugins,jennytoo/sensu-community-plugins,royalj/sensu-community-plugins,new23d/sensu-community-plugins,lfdesousa/sensu-community-plugins,alertlogic/sensu-community-plugins,luisdalves/sensu-community-plugins,rikaard-groupby/sensu-community-plugins,PerfectMemory/sensu-community-plugins,FlorinAndrei/sensu-community-plugins,pkaeding/sensu-community-plugins,thehyve/sensu-community-plugins,royalj/sensu-community-plugins,reevoo/sensu-community-plugins,khuongdp/sensu-community-plugins,Squarespace/sensu-community-plugins,intoximeters/sensu-community-plugins,ideais/sensu-community-plugins,FlorinAndrei/sensu-community-plugins,luisdalves/sensu-community-plugins,cmattoon/sensu-community-plugins,shnmorimoto/sensu-community-plugins,royalj/sensu-community-plugins,cmattoon/sensu-community-plugins,new23d/sensu-community-plugins,shnmorimoto/sensu-community-plugins,PerfectMemory/sensu-community-plugins,cotocisternas/sensu-community-plugins,aryeguy/sensu-community-plugins,jennytoo/sensu-community-plugins,cotocisternas/sensu-community-plugins,tuenti/sensu-community-plugins,loveholidays/sensu-plugins,khuongdp/sensu-community-plugins,new23d/sensu-community-plugins,nilroy/sensu-community-plugins,zerOnepal/sensu-community-plugins,petere/sensu-community-plugins,gferguson-gd/sensu-community-plugins,jbehrends/sensu-community-plugins,mecavity/sensu-community-plugins,nilroy/sensu-community-plugins,justanshulsharma/sensu-community-plugins,emillion/sensu-community-plugins,petere/sensu-community-plugins,reevoo/sensu-community-plugins,leedm777/sensu-community-plugins,mecavity/sensu-community-plugins,reevoo/sensu-community-plugins,nagas/sensu-community-plugins,estately/sensu-community-plugins,jbehrends/sensu-community-plugins,alexhjlee/sensu-community-plugins,Seraf/sensu-community-plugins,lenfree/sensu-community-plugins,madAndroid/sensu-community-plugins,aryeguy/sensu-community-plugins,tuenti/sensu-community-plugins,reevoo/sensu-community-plugins,alexhjlee/sensu-community-plugins,himyouten/sensu-community-plugins,warmfusion/sensu-community-plugins,lenfree/sensu-community-plugins,madAndroid/sensu-community-plugins,nilroy/sensu-community-plugins,loveholidays/sensu-plugins,tuenti/sensu-community-plugins,loveholidays/sensu-plugins,luisdalves/sensu-community-plugins,khuongdp/sensu-community-plugins,cread/sensu-community-plugins,Seraf/sensu-community-plugins,thehyve/sensu-community-plugins,lfdesousa/sensu-community-plugins,luisdalves/sensu-community-plugins,giorgiosironi/sensu-community-plugins,jennytoo/sensu-community-plugins,klangrud/sensu-community-plugins,leedm777/sensu-community-plugins,lfdesousa/sensu-community-plugins,intoximeters/sensu-community-plugins,himyouten/sensu-community-plugins,leedm777/sensu-community-plugins,JonathanHuot/sensu-community-plugins,lenfree/sensu-community-plugins,Squarespace/sensu-community-plugins,PerfectMemory/sensu-community-plugins,Seraf/sensu-community-plugins,pkaeding/sensu-community-plugins,zerOnepal/sensu-community-plugins,nagas/sensu-community-plugins,warmfusion/sensu-community-plugins,nagas/sensu-community-plugins,aryeguy/sensu-community-plugins,gferguson-gd/sensu-community-plugins,rikaard-groupby/sensu-community-plugins,lenfree/sensu-community-plugins,intoximeters/sensu-community-plugins,himyouten/sensu-community-plugins,zerOnepal/sensu-community-plugins,FlorinAndrei/sensu-community-plugins,circleback/sensu-community-plugins,jbehrends/sensu-community-plugins,loveholidays/sensu-plugins,rikaard-groupby/sensu-community-plugins,plasticbrain/sensu-community-plugins,emillion/sensu-community-plugins,mecavity/sensu-community-plugins,ideais/sensu-community-plugins,warmfusion/sensu-community-plugins,alertlogic/sensu-community-plugins,maoe/sensu-community-plugins,jennytoo/sensu-community-plugins,petere/sensu-community-plugins,estately/sensu-community-plugins,cmattoon/sensu-community-plugins,new23d/sensu-community-plugins,alertlogic/sensu-community-plugins,estately/sensu-community-plugins,tuenti/sensu-community-plugins,khuongdp/sensu-community-plugins,gferguson-gd/sensu-community-plugins,julienba/sensu-community-plugins,julienba/sensu-community-plugins,nilroy/sensu-community-plugins,cread/sensu-community-plugins,thehyve/sensu-community-plugins,mecavity/sensu-community-plugins,nagas/sensu-community-plugins,shnmorimoto/sensu-community-plugins,circleback/sensu-community-plugins
|
Add plugin for Nova hypervisor metrics
|
#!/usr/bin/env python
from argparse import ArgumentParser
import socket
import time
from novaclient.v3 import Client
DEFAULT_SCHEME = '{}.nova.hypervisors'.format(socket.gethostname())
METRIC_KEYS = (
'current_workload',
'disk_available_least',
'local_gb',
'local_gb_used',
'memory_mb',
'memory_mb_used',
'running_vms',
'vcpus',
'vcpus_used',
)
def output_metric(name, value):
print '{}\t{}\t{}'.format(name, value, int(time.time()))
def main():
parser = ArgumentParser()
parser.add_argument('-u', '--user', default='admin')
parser.add_argument('-p', '--password', default='admin')
parser.add_argument('-t', '--tenant', default='admin')
parser.add_argument('-a', '--auth-url', default='http://localhost:5000/v2.0')
parser.add_argument('-S', '--service-type', default='compute')
parser.add_argument('-H', '--host')
parser.add_argument('-s', '--scheme', default=DEFAULT_SCHEME)
args = parser.parse_args()
client = Client(args.user, args.password, args.tenant, args.auth_url, service_type=args.service_type)
if args.host:
hypervisors = client.hypervisors.search(args.host)
else:
hypervisors = client.hypervisors.list()
for hv in hypervisors:
for key, value in hv.to_dict().iteritems():
if key in METRIC_KEYS:
output_metric('{}.{}.{}'.format(args.scheme, hv.hypervisor_hostname, key), value)
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add plugin for Nova hypervisor metrics<commit_after>
|
#!/usr/bin/env python
from argparse import ArgumentParser
import socket
import time
from novaclient.v3 import Client
DEFAULT_SCHEME = '{}.nova.hypervisors'.format(socket.gethostname())
METRIC_KEYS = (
'current_workload',
'disk_available_least',
'local_gb',
'local_gb_used',
'memory_mb',
'memory_mb_used',
'running_vms',
'vcpus',
'vcpus_used',
)
def output_metric(name, value):
print '{}\t{}\t{}'.format(name, value, int(time.time()))
def main():
parser = ArgumentParser()
parser.add_argument('-u', '--user', default='admin')
parser.add_argument('-p', '--password', default='admin')
parser.add_argument('-t', '--tenant', default='admin')
parser.add_argument('-a', '--auth-url', default='http://localhost:5000/v2.0')
parser.add_argument('-S', '--service-type', default='compute')
parser.add_argument('-H', '--host')
parser.add_argument('-s', '--scheme', default=DEFAULT_SCHEME)
args = parser.parse_args()
client = Client(args.user, args.password, args.tenant, args.auth_url, service_type=args.service_type)
if args.host:
hypervisors = client.hypervisors.search(args.host)
else:
hypervisors = client.hypervisors.list()
for hv in hypervisors:
for key, value in hv.to_dict().iteritems():
if key in METRIC_KEYS:
output_metric('{}.{}.{}'.format(args.scheme, hv.hypervisor_hostname, key), value)
if __name__ == '__main__':
main()
|
Add plugin for Nova hypervisor metrics#!/usr/bin/env python
from argparse import ArgumentParser
import socket
import time
from novaclient.v3 import Client
DEFAULT_SCHEME = '{}.nova.hypervisors'.format(socket.gethostname())
METRIC_KEYS = (
'current_workload',
'disk_available_least',
'local_gb',
'local_gb_used',
'memory_mb',
'memory_mb_used',
'running_vms',
'vcpus',
'vcpus_used',
)
def output_metric(name, value):
print '{}\t{}\t{}'.format(name, value, int(time.time()))
def main():
parser = ArgumentParser()
parser.add_argument('-u', '--user', default='admin')
parser.add_argument('-p', '--password', default='admin')
parser.add_argument('-t', '--tenant', default='admin')
parser.add_argument('-a', '--auth-url', default='http://localhost:5000/v2.0')
parser.add_argument('-S', '--service-type', default='compute')
parser.add_argument('-H', '--host')
parser.add_argument('-s', '--scheme', default=DEFAULT_SCHEME)
args = parser.parse_args()
client = Client(args.user, args.password, args.tenant, args.auth_url, service_type=args.service_type)
if args.host:
hypervisors = client.hypervisors.search(args.host)
else:
hypervisors = client.hypervisors.list()
for hv in hypervisors:
for key, value in hv.to_dict().iteritems():
if key in METRIC_KEYS:
output_metric('{}.{}.{}'.format(args.scheme, hv.hypervisor_hostname, key), value)
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add plugin for Nova hypervisor metrics<commit_after>#!/usr/bin/env python
from argparse import ArgumentParser
import socket
import time
from novaclient.v3 import Client
DEFAULT_SCHEME = '{}.nova.hypervisors'.format(socket.gethostname())
METRIC_KEYS = (
'current_workload',
'disk_available_least',
'local_gb',
'local_gb_used',
'memory_mb',
'memory_mb_used',
'running_vms',
'vcpus',
'vcpus_used',
)
def output_metric(name, value):
print '{}\t{}\t{}'.format(name, value, int(time.time()))
def main():
parser = ArgumentParser()
parser.add_argument('-u', '--user', default='admin')
parser.add_argument('-p', '--password', default='admin')
parser.add_argument('-t', '--tenant', default='admin')
parser.add_argument('-a', '--auth-url', default='http://localhost:5000/v2.0')
parser.add_argument('-S', '--service-type', default='compute')
parser.add_argument('-H', '--host')
parser.add_argument('-s', '--scheme', default=DEFAULT_SCHEME)
args = parser.parse_args()
client = Client(args.user, args.password, args.tenant, args.auth_url, service_type=args.service_type)
if args.host:
hypervisors = client.hypervisors.search(args.host)
else:
hypervisors = client.hypervisors.list()
for hv in hypervisors:
for key, value in hv.to_dict().iteritems():
if key in METRIC_KEYS:
output_metric('{}.{}.{}'.format(args.scheme, hv.hypervisor_hostname, key), value)
if __name__ == '__main__':
main()
|
|
ef94b41a77b37a42f63dea3d014c5cb278f800aa
|
InvenTree/InvenTree/test_views.py
|
InvenTree/InvenTree/test_views.py
|
""" Unit tests for the main web views """
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth import get_user_model
import os
class ViewTests(TestCase):
""" Tests for various top-level views """
def setUp(self):
# Create a user
User = get_user_model()
User.objects.create_user('username', 'user@email.com', 'password')
self.client.login(username='username', password='password')
def test_api_doc(self):
""" Test that the api-doc view works """
api_url = os.path.join(reverse('index'), 'api-doc') + '/'
response = self.client.get(api_url)
self.assertEqual(response.status_code, 200)
|
Add test for api-doc view
|
Add test for api-doc view
|
Python
|
mit
|
SchrodingersGat/InvenTree,inventree/InvenTree,SchrodingersGat/InvenTree,inventree/InvenTree,inventree/InvenTree,SchrodingersGat/InvenTree,SchrodingersGat/InvenTree,inventree/InvenTree
|
Add test for api-doc view
|
""" Unit tests for the main web views """
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth import get_user_model
import os
class ViewTests(TestCase):
""" Tests for various top-level views """
def setUp(self):
# Create a user
User = get_user_model()
User.objects.create_user('username', 'user@email.com', 'password')
self.client.login(username='username', password='password')
def test_api_doc(self):
""" Test that the api-doc view works """
api_url = os.path.join(reverse('index'), 'api-doc') + '/'
response = self.client.get(api_url)
self.assertEqual(response.status_code, 200)
|
<commit_before><commit_msg>Add test for api-doc view<commit_after>
|
""" Unit tests for the main web views """
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth import get_user_model
import os
class ViewTests(TestCase):
""" Tests for various top-level views """
def setUp(self):
# Create a user
User = get_user_model()
User.objects.create_user('username', 'user@email.com', 'password')
self.client.login(username='username', password='password')
def test_api_doc(self):
""" Test that the api-doc view works """
api_url = os.path.join(reverse('index'), 'api-doc') + '/'
response = self.client.get(api_url)
self.assertEqual(response.status_code, 200)
|
Add test for api-doc view""" Unit tests for the main web views """
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth import get_user_model
import os
class ViewTests(TestCase):
""" Tests for various top-level views """
def setUp(self):
# Create a user
User = get_user_model()
User.objects.create_user('username', 'user@email.com', 'password')
self.client.login(username='username', password='password')
def test_api_doc(self):
""" Test that the api-doc view works """
api_url = os.path.join(reverse('index'), 'api-doc') + '/'
response = self.client.get(api_url)
self.assertEqual(response.status_code, 200)
|
<commit_before><commit_msg>Add test for api-doc view<commit_after>""" Unit tests for the main web views """
from django.test import TestCase
from django.urls import reverse
from django.contrib.auth import get_user_model
import os
class ViewTests(TestCase):
""" Tests for various top-level views """
def setUp(self):
# Create a user
User = get_user_model()
User.objects.create_user('username', 'user@email.com', 'password')
self.client.login(username='username', password='password')
def test_api_doc(self):
""" Test that the api-doc view works """
api_url = os.path.join(reverse('index'), 'api-doc') + '/'
response = self.client.get(api_url)
self.assertEqual(response.status_code, 200)
|
|
a222b37f17ababdac028a202dccbf0a990dad055
|
src/robot_motion_control/scripts/node_gpio_output.py
|
src/robot_motion_control/scripts/node_gpio_output.py
|
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
def motion_topic_callback(data):
rospy.loginfo(rospy.get_caller_id() + "Moving %s", data.data)
def motion_topic_listener():
rospy.init_node('GpioOutput', anonymous=True)
rospy.Subscriber("MOTION_TOPIC", String, motion_topic_callback)
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
if __name__ == '__main__':
motion_topic_listener()
|
Add a node for simple gpio control
|
Add a node for simple gpio control
|
Python
|
mit
|
Aditya90/zeroborgRobot,Aditya90/zeroborgRobot
|
Add a node for simple gpio control
|
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
def motion_topic_callback(data):
rospy.loginfo(rospy.get_caller_id() + "Moving %s", data.data)
def motion_topic_listener():
rospy.init_node('GpioOutput', anonymous=True)
rospy.Subscriber("MOTION_TOPIC", String, motion_topic_callback)
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
if __name__ == '__main__':
motion_topic_listener()
|
<commit_before><commit_msg>Add a node for simple gpio control<commit_after>
|
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
def motion_topic_callback(data):
rospy.loginfo(rospy.get_caller_id() + "Moving %s", data.data)
def motion_topic_listener():
rospy.init_node('GpioOutput', anonymous=True)
rospy.Subscriber("MOTION_TOPIC", String, motion_topic_callback)
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
if __name__ == '__main__':
motion_topic_listener()
|
Add a node for simple gpio control#!/usr/bin/env python
import rospy
from std_msgs.msg import String
def motion_topic_callback(data):
rospy.loginfo(rospy.get_caller_id() + "Moving %s", data.data)
def motion_topic_listener():
rospy.init_node('GpioOutput', anonymous=True)
rospy.Subscriber("MOTION_TOPIC", String, motion_topic_callback)
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
if __name__ == '__main__':
motion_topic_listener()
|
<commit_before><commit_msg>Add a node for simple gpio control<commit_after>#!/usr/bin/env python
import rospy
from std_msgs.msg import String
def motion_topic_callback(data):
rospy.loginfo(rospy.get_caller_id() + "Moving %s", data.data)
def motion_topic_listener():
rospy.init_node('GpioOutput', anonymous=True)
rospy.Subscriber("MOTION_TOPIC", String, motion_topic_callback)
# spin() simply keeps python from exiting until this node is stopped
rospy.spin()
if __name__ == '__main__':
motion_topic_listener()
|
|
071e345543a9e11c8e8af65b782c3044d1f5a6fa
|
snippets/base/migrations/0059_auto_20181120_1137.py
|
snippets/base/migrations/0059_auto_20181120_1137.py
|
# Generated by Django 2.1.3 on 2018-11-20 11:37
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0058_snippet_creator'),
]
operations = [
migrations.AlterField(
model_name='asrsnippet',
name='publish_end',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Ends'),
),
migrations.AlterField(
model_name='asrsnippet',
name='publish_start',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Starts'),
),
]
|
Add missing publish_start and publish_end migration.
|
Add missing publish_start and publish_end migration.
|
Python
|
mpl-2.0
|
mozilla/snippets-service,mozilla/snippets-service,mozilla/snippets-service,glogiotatidis/snippets-service,glogiotatidis/snippets-service,glogiotatidis/snippets-service,glogiotatidis/snippets-service,mozmar/snippets-service,mozmar/snippets-service,mozilla/snippets-service,mozmar/snippets-service,mozmar/snippets-service
|
Add missing publish_start and publish_end migration.
|
# Generated by Django 2.1.3 on 2018-11-20 11:37
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0058_snippet_creator'),
]
operations = [
migrations.AlterField(
model_name='asrsnippet',
name='publish_end',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Ends'),
),
migrations.AlterField(
model_name='asrsnippet',
name='publish_start',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Starts'),
),
]
|
<commit_before><commit_msg>Add missing publish_start and publish_end migration.<commit_after>
|
# Generated by Django 2.1.3 on 2018-11-20 11:37
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0058_snippet_creator'),
]
operations = [
migrations.AlterField(
model_name='asrsnippet',
name='publish_end',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Ends'),
),
migrations.AlterField(
model_name='asrsnippet',
name='publish_start',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Starts'),
),
]
|
Add missing publish_start and publish_end migration.# Generated by Django 2.1.3 on 2018-11-20 11:37
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0058_snippet_creator'),
]
operations = [
migrations.AlterField(
model_name='asrsnippet',
name='publish_end',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Ends'),
),
migrations.AlterField(
model_name='asrsnippet',
name='publish_start',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Starts'),
),
]
|
<commit_before><commit_msg>Add missing publish_start and publish_end migration.<commit_after># Generated by Django 2.1.3 on 2018-11-20 11:37
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('base', '0058_snippet_creator'),
]
operations = [
migrations.AlterField(
model_name='asrsnippet',
name='publish_end',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Ends'),
),
migrations.AlterField(
model_name='asrsnippet',
name='publish_start',
field=models.DateTimeField(blank=True, help_text='See the current time in <a target="_blank" href="http://time.is/UTC">UTC</a>', null=True, verbose_name='Publish Starts'),
),
]
|
|
0866706165f9312f91577d716a7bb55551b1097f
|
scripts/SearchAndReplace.py
|
scripts/SearchAndReplace.py
|
# Copyright (c) 2017 Ruben Dulek
# The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
import re #To perform the search and replace.
from ..Script import Script
## Performs a search-and-replace on all g-code.
#
# Due to technical limitations, the search can't cross the border between
# layers.
class SearchAndReplace(Script):
def getSettingDataString(self):
return """{
"name": "Search and Replace",
"key": "SearchAndReplace",
"metadata": {},
"version": 2,
"settings":
{
"search":
{
"label": "Search",
"description": "All occurrences of this text will get replaced by the replacement text.",
"type": "str",
"default_value": ""
},
"replace":
{
"label": "Replace",
"description": "The search text will get replaced by this text.",
"type": "str",
"default_value": ""
},
"is_regex":
{
"label": "Use Regular Expressions",
"description": "When enabled, the search text will be interpreted as a regular expression.",
"type": "bool",
"default_value": false
}
}
}"""
def execute(self, data):
search_string = self.getSettingValueByKey("search")
if not self.getSettingValueByKey("is_regex"):
search_string = re.escape(search_string) #Need to search for the actual string, not as a regex.
search_regex = re.compile(search_string)
replace_string = self.getSettingValueByKey("replace")
for layer_number, layer in enumerate(data):
data[layer_number] = re.sub(search_regex, replace_string, layer) #Replace all.
return data
|
Add search and replace script
|
Add search and replace script
|
Python
|
agpl-3.0
|
nallath/PostProcessingPlugin
|
Add search and replace script
|
# Copyright (c) 2017 Ruben Dulek
# The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
import re #To perform the search and replace.
from ..Script import Script
## Performs a search-and-replace on all g-code.
#
# Due to technical limitations, the search can't cross the border between
# layers.
class SearchAndReplace(Script):
def getSettingDataString(self):
return """{
"name": "Search and Replace",
"key": "SearchAndReplace",
"metadata": {},
"version": 2,
"settings":
{
"search":
{
"label": "Search",
"description": "All occurrences of this text will get replaced by the replacement text.",
"type": "str",
"default_value": ""
},
"replace":
{
"label": "Replace",
"description": "The search text will get replaced by this text.",
"type": "str",
"default_value": ""
},
"is_regex":
{
"label": "Use Regular Expressions",
"description": "When enabled, the search text will be interpreted as a regular expression.",
"type": "bool",
"default_value": false
}
}
}"""
def execute(self, data):
search_string = self.getSettingValueByKey("search")
if not self.getSettingValueByKey("is_regex"):
search_string = re.escape(search_string) #Need to search for the actual string, not as a regex.
search_regex = re.compile(search_string)
replace_string = self.getSettingValueByKey("replace")
for layer_number, layer in enumerate(data):
data[layer_number] = re.sub(search_regex, replace_string, layer) #Replace all.
return data
|
<commit_before><commit_msg>Add search and replace script<commit_after>
|
# Copyright (c) 2017 Ruben Dulek
# The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
import re #To perform the search and replace.
from ..Script import Script
## Performs a search-and-replace on all g-code.
#
# Due to technical limitations, the search can't cross the border between
# layers.
class SearchAndReplace(Script):
def getSettingDataString(self):
return """{
"name": "Search and Replace",
"key": "SearchAndReplace",
"metadata": {},
"version": 2,
"settings":
{
"search":
{
"label": "Search",
"description": "All occurrences of this text will get replaced by the replacement text.",
"type": "str",
"default_value": ""
},
"replace":
{
"label": "Replace",
"description": "The search text will get replaced by this text.",
"type": "str",
"default_value": ""
},
"is_regex":
{
"label": "Use Regular Expressions",
"description": "When enabled, the search text will be interpreted as a regular expression.",
"type": "bool",
"default_value": false
}
}
}"""
def execute(self, data):
search_string = self.getSettingValueByKey("search")
if not self.getSettingValueByKey("is_regex"):
search_string = re.escape(search_string) #Need to search for the actual string, not as a regex.
search_regex = re.compile(search_string)
replace_string = self.getSettingValueByKey("replace")
for layer_number, layer in enumerate(data):
data[layer_number] = re.sub(search_regex, replace_string, layer) #Replace all.
return data
|
Add search and replace script# Copyright (c) 2017 Ruben Dulek
# The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
import re #To perform the search and replace.
from ..Script import Script
## Performs a search-and-replace on all g-code.
#
# Due to technical limitations, the search can't cross the border between
# layers.
class SearchAndReplace(Script):
def getSettingDataString(self):
return """{
"name": "Search and Replace",
"key": "SearchAndReplace",
"metadata": {},
"version": 2,
"settings":
{
"search":
{
"label": "Search",
"description": "All occurrences of this text will get replaced by the replacement text.",
"type": "str",
"default_value": ""
},
"replace":
{
"label": "Replace",
"description": "The search text will get replaced by this text.",
"type": "str",
"default_value": ""
},
"is_regex":
{
"label": "Use Regular Expressions",
"description": "When enabled, the search text will be interpreted as a regular expression.",
"type": "bool",
"default_value": false
}
}
}"""
def execute(self, data):
search_string = self.getSettingValueByKey("search")
if not self.getSettingValueByKey("is_regex"):
search_string = re.escape(search_string) #Need to search for the actual string, not as a regex.
search_regex = re.compile(search_string)
replace_string = self.getSettingValueByKey("replace")
for layer_number, layer in enumerate(data):
data[layer_number] = re.sub(search_regex, replace_string, layer) #Replace all.
return data
|
<commit_before><commit_msg>Add search and replace script<commit_after># Copyright (c) 2017 Ruben Dulek
# The PostProcessingPlugin is released under the terms of the AGPLv3 or higher.
import re #To perform the search and replace.
from ..Script import Script
## Performs a search-and-replace on all g-code.
#
# Due to technical limitations, the search can't cross the border between
# layers.
class SearchAndReplace(Script):
def getSettingDataString(self):
return """{
"name": "Search and Replace",
"key": "SearchAndReplace",
"metadata": {},
"version": 2,
"settings":
{
"search":
{
"label": "Search",
"description": "All occurrences of this text will get replaced by the replacement text.",
"type": "str",
"default_value": ""
},
"replace":
{
"label": "Replace",
"description": "The search text will get replaced by this text.",
"type": "str",
"default_value": ""
},
"is_regex":
{
"label": "Use Regular Expressions",
"description": "When enabled, the search text will be interpreted as a regular expression.",
"type": "bool",
"default_value": false
}
}
}"""
def execute(self, data):
search_string = self.getSettingValueByKey("search")
if not self.getSettingValueByKey("is_regex"):
search_string = re.escape(search_string) #Need to search for the actual string, not as a regex.
search_regex = re.compile(search_string)
replace_string = self.getSettingValueByKey("replace")
for layer_number, layer in enumerate(data):
data[layer_number] = re.sub(search_regex, replace_string, layer) #Replace all.
return data
|
|
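The record above demonstrates layer-scoped regex replacement: the needle is escaped when regex mode is off, then re.sub is applied to each layer string independently, which is why matches cannot cross layer boundaries. A minimal standalone sketch of the same logic (function and variable names here are illustrative, not part of the plugin):

import re

def replace_per_layer(layers, search, replace, is_regex=False):
    # Mirror the plugin: escape the needle unless regex mode is on,
    # then substitute within each layer string separately.
    if not is_regex:
        search = re.escape(search)
    pattern = re.compile(search)
    return [pattern.sub(replace, layer) for layer in layers]

# Example: remove a heater command from every layer of fake g-code.
layers = ["G1 X0 Y0\nM104 S210\n", "G1 X1 Y1\nM104 S210\n"]
assert replace_per_layer(layers, "M104 S210\n", "") == ["G1 X0 Y0\n", "G1 X1 Y1\n"]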
8e96e9c8f2cb19ad6d2febaa081d8198368431a7
|
candidates/migrations/0016_migrate_data_to_extra_fields.py
|
candidates/migrations/0016_migrate_data_to_extra_fields.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
def from_person_extra_to_generic_fields(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
PersonExtra = apps.get_model('candidates', 'PersonExtra')
if settings.ELECTION_APP == 'cr':
p_field = ExtraField.objects.create(
key='profession',
type='line',
label=u'Profession',
)
elif settings.ELECTION_APP == 'bf_elections_2015':
c_field = ExtraField.objects.create(
key='cv',
type='longer-text',
label=u'CV or Résumé',
)
p_field = ExtraField.objects.create(
key='program',
type='longer-text',
label=u'Program',
)
for pe in PersonExtra.objects.all():
person = pe.base
PersonExtraFieldValue.objects.create(
person=person,
field=c_field,
value=pe.cv
)
PersonExtraFieldValue.objects.create(
person=person,
field=p_field,
value=pe.program
)
def from_generic_fields_to_person_extra(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
if settings.ELECTION_APP == 'bf_elections_2015':
for pefv in PersonExtraFieldValue.objects.select_related('field'):
pe = pefv.person.extra
if pefv.field.key == 'cv':
pe.cv = pefv.value
pe.save()
elif pefv.field.key == 'program':
pe.program = pefv.value
pe.save()
else:
print "Ignoring field with unknown key:", pefv.field.key
PersonExtraFieldValue.objects.all().delete()
ExtraField.objects.all().delete()
class Migration(migrations.Migration):
dependencies = [
('candidates', '0015_add_configurable_extra_fields'),
]
operations = [
migrations.RunPython(
from_person_extra_to_generic_fields,
from_generic_fields_to_person_extra
)
]
|
Add a data migration for extra fields for BF and CR
|
Add a data migration for extra fields for BF and CR
|
Python
|
agpl-3.0
|
mysociety/yournextmp-popit,DemocracyClub/yournextrepresentative,mysociety/yournextmp-popit,DemocracyClub/yournextrepresentative,neavouli/yournextrepresentative,mysociety/yournextrepresentative,neavouli/yournextrepresentative,neavouli/yournextrepresentative,DemocracyClub/yournextrepresentative,mysociety/yournextrepresentative,mysociety/yournextmp-popit,mysociety/yournextmp-popit,mysociety/yournextrepresentative,neavouli/yournextrepresentative,mysociety/yournextrepresentative,mysociety/yournextmp-popit,neavouli/yournextrepresentative,mysociety/yournextrepresentative
|
Add a data migration for extra fields for BF and CR
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
def from_person_extra_to_generic_fields(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
PersonExtra = apps.get_model('candidates', 'PersonExtra')
if settings.ELECTION_APP == 'cr':
p_field = ExtraField.objects.create(
key='profession',
type='line',
label=u'Profession',
)
elif settings.ELECTION_APP == 'bf_elections_2015':
c_field = ExtraField.objects.create(
key='cv',
type='longer-text',
label=u'CV or Résumé',
)
p_field = ExtraField.objects.create(
key='program',
type='longer-text',
label=u'Program',
)
for pe in PersonExtra.objects.all():
person = pe.base
PersonExtraFieldValue.objects.create(
person=person,
field=c_field,
value=pe.cv
)
PersonExtraFieldValue.objects.create(
person=person,
field=p_field,
value=pe.program
)
def from_generic_fields_to_person_extra(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
if settings.ELECTION_APP == 'bf_elections_2015':
for pefv in PersonExtraFieldValue.objects.select_related('field'):
pe = pefv.person.extra
if pefv.field.key == 'cv':
pe.cv = pefv.value
pe.save()
elif pefv.field.key == 'program':
pe.program = pefv.value
pe.save()
else:
print "Ignoring field with unknown key:", pefv.field.key
PersonExtraFieldValue.objects.all().delete()
ExtraField.objects.all().delete()
class Migration(migrations.Migration):
dependencies = [
('candidates', '0015_add_configurable_extra_fields'),
]
operations = [
migrations.RunPython(
from_person_extra_to_generic_fields,
from_generic_fields_to_person_extra
)
]
|
<commit_before><commit_msg>Add a data migration for extra fields for BF and CR<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
def from_person_extra_to_generic_fields(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
PersonExtra = apps.get_model('candidates', 'PersonExtra')
if settings.ELECTION_APP == 'cr':
p_field = ExtraField.objects.create(
key='profession',
type='line',
label=u'Profession',
)
elif settings.ELECTION_APP == 'bf_elections_2015':
c_field = ExtraField.objects.create(
key='cv',
type='longer-text',
label=u'CV or Résumé',
)
p_field = ExtraField.objects.create(
key='program',
type='longer-text',
label=u'Program',
)
for pe in PersonExtra.objects.all():
person = pe.base
PersonExtraFieldValue.objects.create(
person=person,
field=c_field,
value=pe.cv
)
PersonExtraFieldValue.objects.create(
person=person,
field=p_field,
value=pe.program
)
def from_generic_fields_to_person_extra(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
if settings.ELECTION_APP == 'bf_elections_2015':
for pefv in PersonExtraFieldValue.objects.select_related('field'):
pe = pefv.person.extra
if pefv.field.key == 'cv':
pe.cv = pefv.value
pe.save()
elif pefv.field.key == 'program':
pe.program = pefv.value
pe.save()
else:
print "Ignoring field with unknown key:", pefv.field.key
PersonExtraFieldValue.objects.all().delete()
ExtraField.objects.all().delete()
class Migration(migrations.Migration):
dependencies = [
('candidates', '0015_add_configurable_extra_fields'),
]
operations = [
migrations.RunPython(
from_person_extra_to_generic_fields,
from_generic_fields_to_person_extra
)
]
|
Add a data migration for extra fields for BF and CR# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
def from_person_extra_to_generic_fields(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
PersonExtra = apps.get_model('candidates', 'PersonExtra')
if settings.ELECTION_APP == 'cr':
p_field = ExtraField.objects.create(
key='profession',
type='line',
label=u'Profession',
)
elif settings.ELECTION_APP == 'bf_elections_2015':
c_field = ExtraField.objects.create(
key='cv',
type='longer-text',
label=u'CV or Résumé',
)
p_field = ExtraField.objects.create(
key='program',
type='longer-text',
label=u'Program',
)
for pe in PersonExtra.objects.all():
person = pe.base
PersonExtraFieldValue.objects.create(
person=person,
field=c_field,
value=pe.cv
)
PersonExtraFieldValue.objects.create(
person=person,
field=p_field,
value=pe.program
)
def from_generic_fields_to_person_extra(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
if settings.ELECTION_APP == 'bf_elections_2015':
for pefv in PersonExtraFieldValue.objects.select_related('field'):
pe = pefv.person.extra
if pefv.field.key == 'cv':
pe.cv = pefv.value
pe.save()
elif pefv.field.key == 'program':
pe.program = pefv.value
pe.save()
else:
print "Ignoring field with unknown key:", pefv.field.key
PersonExtraFieldValue.objects.all().delete()
ExtraField.objects.all().delete()
class Migration(migrations.Migration):
dependencies = [
('candidates', '0015_add_configurable_extra_fields'),
]
operations = [
migrations.RunPython(
from_person_extra_to_generic_fields,
from_generic_fields_to_person_extra
)
]
|
<commit_before><commit_msg>Add a data migration for extra fields for BF and CR<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
from django.conf import settings
def from_person_extra_to_generic_fields(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
PersonExtra = apps.get_model('candidates', 'PersonExtra')
if settings.ELECTION_APP == 'cr':
p_field = ExtraField.objects.create(
key='profession',
type='line',
label=u'Profession',
)
elif settings.ELECTION_APP == 'bf_elections_2015':
c_field = ExtraField.objects.create(
key='cv',
type='longer-text',
label=u'CV or Résumé',
)
p_field = ExtraField.objects.create(
key='program',
type='longer-text',
label=u'Program',
)
for pe in PersonExtra.objects.all():
person = pe.base
PersonExtraFieldValue.objects.create(
person=person,
field=c_field,
value=pe.cv
)
PersonExtraFieldValue.objects.create(
person=person,
field=p_field,
value=pe.program
)
def from_generic_fields_to_person_extra(apps, schema_editor):
ExtraField = apps.get_model('candidates', 'ExtraField')
PersonExtraFieldValue = apps.get_model('candidates', 'PersonExtraFieldValue')
if settings.ELECTION_APP == 'bf_elections_2015':
for pefv in PersonExtraFieldValue.objects.select_related('field'):
pe = pefv.person.extra
if pefv.field.key == 'cv':
pe.cv = pefv.value
pe.save()
elif pefv.field.key == 'program':
pe.program = pefv.value
pe.save()
else:
print "Ignoring field with unknown key:", pefv.field.key
PersonExtraFieldValue.objects.all().delete()
ExtraField.objects.all().delete()
class Migration(migrations.Migration):
dependencies = [
('candidates', '0015_add_configurable_extra_fields'),
]
operations = [
migrations.RunPython(
from_person_extra_to_generic_fields,
from_generic_fields_to_person_extra
)
]
|
|
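The migration above follows Django's reversible RunPython pattern: a forwards and a backwards callable, historical models fetched through apps.get_model() rather than direct imports, and behavior switched on settings. A stripped-down sketch of that shape (app, model, and field names here are hypothetical, not the project's):

from django.db import migrations

def forwards(apps, schema_editor):
    # apps.get_model() returns the model as it existed at this migration,
    # so the code keeps working even after later schema changes.
    ExtraField = apps.get_model('myapp', 'ExtraField')
    ExtraField.objects.create(key='profession', label='Profession')

def backwards(apps, schema_editor):
    ExtraField = apps.get_model('myapp', 'ExtraField')
    ExtraField.objects.filter(key='profession').delete()

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]
    operations = [migrations.RunPython(forwards, backwards)]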
5b4d2832fca8e8eb6ffa299f477e00315ac233c9
|
zerver/management/commands/convert_bot_to_outgoing_webhook.py
|
zerver/management/commands/convert_bot_to_outgoing_webhook.py
|
from __future__ import absolute_import
from __future__ import print_function
from typing import Any
from argparse import ArgumentParser
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_rename_stream
from zerver.lib.str_utils import force_text
from zerver.models import Realm, Service, UserProfile, get_realm
import sys
class Command(BaseCommand):
help = """Given an existing bot, converts it into an outgoing webhook bot."""
def add_arguments(self, parser):
# type: (ArgumentParser) -> None
parser.add_argument('string_id', metavar='<string_id>', type=str,
help='subdomain or string_id of bot')
parser.add_argument('bot_email', metavar='<bot_email>', type=str,
help='email of bot')
parser.add_argument('service_name', metavar='<service_name>', type=str,
help='name of Service object to create')
parser.add_argument('base_url', metavar='<base_url>', type=str,
help='base url of outgoing webhook')
# TODO: Add token and interface as arguments once OutgoingWebhookWorker
# uses these fields on the Service object.
def handle(self, *args, **options):
# type: (*Any, **str) -> None
string_id = options['string_id']
bot_email = options['bot_email']
service_name = options['service_name']
base_url = options['base_url']
encoding = sys.getfilesystemencoding()
realm = get_realm(force_text(string_id, encoding))
if realm is None:
print('Unknown subdomain or string_id %s' % (string_id,))
exit(1)
if not bot_email:
print('Email of existing bot must be provided')
exit(1)
if not service_name:
print('Name for Service object must be provided')
exit(1)
if not base_url:
print('Base URL of outgoing webhook must be provided')
exit(1)
# TODO: Normalize email?
bot_profile = UserProfile.objects.get(email=bot_email)
if not bot_profile:
print('User %s does not exist' % (bot_email,))
exit(1)
if not bot_profile.is_bot:
print('User %s is not a bot' % (bot_email,))
exit(1)
if bot_profile.is_outgoing_webhook_bot:
print('%s is already marked as an outgoing webhook bot' % (bot_email,))
exit(1)
Service.objects.create(name=service_name,
user_profile=bot_profile,
base_url=base_url,
token='',
interface=1)
bot_profile.bot_type = UserProfile.OUTGOING_WEBHOOK_BOT
bot_profile.save()
print('Successfully converted %s into an outgoing webhook bot' % (bot_email,))
|
Add management command for making outgoing webhook bot.
|
bots: Add management command for making outgoing webhook bot.
|
Python
|
apache-2.0
|
jackrzhang/zulip,shubhamdhama/zulip,shubhamdhama/zulip,vabs22/zulip,rishig/zulip,j831/zulip,rishig/zulip,Galexrt/zulip,tommyip/zulip,vaidap/zulip,rishig/zulip,j831/zulip,Galexrt/zulip,hackerkid/zulip,hackerkid/zulip,andersk/zulip,zulip/zulip,punchagan/zulip,Galexrt/zulip,rht/zulip,shubhamdhama/zulip,verma-varsha/zulip,zulip/zulip,kou/zulip,synicalsyntax/zulip,dhcrzf/zulip,showell/zulip,eeshangarg/zulip,hackerkid/zulip,mahim97/zulip,timabbott/zulip,showell/zulip,andersk/zulip,amanharitsh123/zulip,shubhamdhama/zulip,rht/zulip,dhcrzf/zulip,brockwhittaker/zulip,brainwane/zulip,vaidap/zulip,kou/zulip,verma-varsha/zulip,verma-varsha/zulip,andersk/zulip,punchagan/zulip,dhcrzf/zulip,jackrzhang/zulip,jrowan/zulip,tommyip/zulip,mahim97/zulip,andersk/zulip,jrowan/zulip,tommyip/zulip,jrowan/zulip,brockwhittaker/zulip,brockwhittaker/zulip,punchagan/zulip,mahim97/zulip,vabs22/zulip,dhcrzf/zulip,eeshangarg/zulip,zulip/zulip,vabs22/zulip,jackrzhang/zulip,tommyip/zulip,kou/zulip,eeshangarg/zulip,jackrzhang/zulip,rht/zulip,amanharitsh123/zulip,hackerkid/zulip,verma-varsha/zulip,jackrzhang/zulip,showell/zulip,timabbott/zulip,dhcrzf/zulip,timabbott/zulip,rht/zulip,showell/zulip,rishig/zulip,timabbott/zulip,shubhamdhama/zulip,vaidap/zulip,jackrzhang/zulip,j831/zulip,brainwane/zulip,vaidap/zulip,jackrzhang/zulip,j831/zulip,andersk/zulip,zulip/zulip,showell/zulip,dhcrzf/zulip,punchagan/zulip,synicalsyntax/zulip,vaidap/zulip,brainwane/zulip,vabs22/zulip,amanharitsh123/zulip,brockwhittaker/zulip,j831/zulip,hackerkid/zulip,kou/zulip,verma-varsha/zulip,vabs22/zulip,Galexrt/zulip,jrowan/zulip,rishig/zulip,Galexrt/zulip,rishig/zulip,synicalsyntax/zulip,brainwane/zulip,jrowan/zulip,brainwane/zulip,rht/zulip,shubhamdhama/zulip,vabs22/zulip,j831/zulip,punchagan/zulip,mahim97/zulip,synicalsyntax/zulip,zulip/zulip,brainwane/zulip,tommyip/zulip,zulip/zulip,jrowan/zulip,timabbott/zulip,kou/zulip,Galexrt/zulip,amanharitsh123/zulip,zulip/zulip,hackerkid/zulip,timabbott/zulip,kou/zulip,verma-varsha/zulip,synicalsyntax/zulip,amanharitsh123/zulip,synicalsyntax/zulip,amanharitsh123/zulip,showell/zulip,eeshangarg/zulip,timabbott/zulip,punchagan/zulip,shubhamdhama/zulip,brainwane/zulip,eeshangarg/zulip,synicalsyntax/zulip,tommyip/zulip,brockwhittaker/zulip,brockwhittaker/zulip,dhcrzf/zulip,mahim97/zulip,mahim97/zulip,tommyip/zulip,vaidap/zulip,Galexrt/zulip,eeshangarg/zulip,andersk/zulip,rht/zulip,kou/zulip,hackerkid/zulip,rishig/zulip,andersk/zulip,eeshangarg/zulip,showell/zulip,punchagan/zulip,rht/zulip
|
bots: Add management command for making outgoing webhook bot.
|
from __future__ import absolute_import
from __future__ import print_function
from typing import Any
from argparse import ArgumentParser
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_rename_stream
from zerver.lib.str_utils import force_text
from zerver.models import Realm, Service, UserProfile, get_realm
import sys
class Command(BaseCommand):
help = """Given an existing bot, converts it into an outgoing webhook bot."""
def add_arguments(self, parser):
# type: (ArgumentParser) -> None
parser.add_argument('string_id', metavar='<string_id>', type=str,
help='subdomain or string_id of bot')
parser.add_argument('bot_email', metavar='<bot_email>', type=str,
help='email of bot')
parser.add_argument('service_name', metavar='<service_name>', type=str,
help='name of Service object to create')
parser.add_argument('base_url', metavar='<base_url>', type=str,
help='base url of outgoing webhook')
# TODO: Add token and interface as arguments once OutgoingWebhookWorker
# uses these fields on the Service object.
def handle(self, *args, **options):
# type: (*Any, **str) -> None
string_id = options['string_id']
bot_email = options['bot_email']
service_name = options['service_name']
base_url = options['base_url']
encoding = sys.getfilesystemencoding()
realm = get_realm(force_text(string_id, encoding))
if realm is None:
print('Unknown subdomain or string_id %s' % (string_id,))
exit(1)
if not bot_email:
print('Email of existing bot must be provided')
exit(1)
if not service_name:
print('Name for Service object must be provided')
exit(1)
if not base_url:
print('Base URL of outgoing webhook must be provided')
exit(1)
# TODO: Normalize email?
bot_profile = UserProfile.objects.get(email=bot_email)
if not bot_profile:
print('User %s does not exist' % (bot_email,))
exit(1)
if not bot_profile.is_bot:
print('User %s is not a bot' % (bot_email,))
exit(1)
if bot_profile.is_outgoing_webhook_bot:
print('%s is already marked as an outgoing webhook bot' % (bot_email,))
exit(1)
Service.objects.create(name=service_name,
user_profile=bot_profile,
base_url=base_url,
token='',
interface=1)
bot_profile.bot_type = UserProfile.OUTGOING_WEBHOOK_BOT
bot_profile.save()
print('Successfully converted %s into an outgoing webhook bot' % (bot_email,))
|
<commit_before><commit_msg>bots: Add management command for making outgoing webhook bot.<commit_after>
|
from __future__ import absolute_import
from __future__ import print_function
from typing import Any
from argparse import ArgumentParser
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_rename_stream
from zerver.lib.str_utils import force_text
from zerver.models import Realm, Service, UserProfile, get_realm
import sys
class Command(BaseCommand):
help = """Given an existing bot, converts it into an outgoing webhook bot."""
def add_arguments(self, parser):
# type: (ArgumentParser) -> None
parser.add_argument('string_id', metavar='<string_id>', type=str,
help='subdomain or string_id of bot')
parser.add_argument('bot_email', metavar='<bot_email>', type=str,
help='email of bot')
parser.add_argument('service_name', metavar='<service_name>', type=str,
help='name of Service object to create')
parser.add_argument('base_url', metavar='<base_url>', type=str,
help='base url of outgoing webhook')
# TODO: Add token and interface as arguments once OutgoingWebhookWorker
# uses these fields on the Service object.
def handle(self, *args, **options):
# type: (*Any, **str) -> None
string_id = options['string_id']
bot_email = options['bot_email']
service_name = options['service_name']
base_url = options['base_url']
encoding = sys.getfilesystemencoding()
realm = get_realm(force_text(string_id, encoding))
if realm is None:
print('Unknown subdomain or string_id %s' % (string_id,))
exit(1)
if not bot_email:
print('Email of existing bot must be provided')
exit(1)
if not service_name:
print('Name for Service object must be provided')
exit(1)
if not base_url:
print('Base URL of outgoing webhook must be provided')
exit(1)
# TODO: Normalize email?
bot_profile = UserProfile.objects.get(email=bot_email)
if not bot_profile:
print('User %s does not exist' % (bot_email,))
exit(1)
if not bot_profile.is_bot:
print('User %s is not a bot' % (bot_email,))
exit(1)
if bot_profile.is_outgoing_webhook_bot:
print('%s is already marked as an outgoing webhook bot' % (bot_email,))
exit(1)
Service.objects.create(name=service_name,
user_profile=bot_profile,
base_url=base_url,
token='',
interface=1)
bot_profile.bot_type = UserProfile.OUTGOING_WEBHOOK_BOT
bot_profile.save()
print('Successfully converted %s into an outgoing webhook bot' % (bot_email,))
|
bots: Add management command for making outgoing webhook bot.from __future__ import absolute_import
from __future__ import print_function
from typing import Any
from argparse import ArgumentParser
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_rename_stream
from zerver.lib.str_utils import force_text
from zerver.models import Realm, Service, UserProfile, get_realm
import sys
class Command(BaseCommand):
help = """Given an existing bot, converts it into an outgoing webhook bot."""
def add_arguments(self, parser):
# type: (ArgumentParser) -> None
parser.add_argument('string_id', metavar='<string_id>', type=str,
help='subdomain or string_id of bot')
parser.add_argument('bot_email', metavar='<bot_email>', type=str,
help='email of bot')
parser.add_argument('service_name', metavar='<service_name>', type=str,
help='name of Service object to create')
parser.add_argument('base_url', metavar='<base_url>', type=str,
help='base url of outgoing webhook')
# TODO: Add token and interface as arguments once OutgoingWebhookWorker
# uses these fields on the Service object.
def handle(self, *args, **options):
# type: (*Any, **str) -> None
string_id = options['string_id']
bot_email = options['bot_email']
service_name = options['service_name']
base_url = options['base_url']
encoding = sys.getfilesystemencoding()
realm = get_realm(force_text(string_id, encoding))
if realm is None:
print('Unknown subdomain or string_id %s' % (string_id,))
exit(1)
if not bot_email:
print('Email of existing bot must be provided')
exit(1)
if not service_name:
print('Name for Service object must be provided')
exit(1)
if not base_url:
print('Base URL of outgoing webhook must be provided')
exit(1)
# TODO: Normalize email?
bot_profile = UserProfile.objects.get(email=bot_email)
if not bot_profile:
print('User %s does not exist' % (bot_email,))
exit(1)
if not bot_profile.is_bot:
print('User %s is not a bot' % (bot_email,))
exit(1)
if bot_profile.is_outgoing_webhook_bot:
print('%s is already marked as an outgoing webhook bot' % (bot_email,))
exit(1)
Service.objects.create(name=service_name,
user_profile=bot_profile,
base_url=base_url,
token='',
interface=1)
bot_profile.bot_type = UserProfile.OUTGOING_WEBHOOK_BOT
bot_profile.save()
print('Successfully converted %s into an outgoing webhook bot' % (bot_email,))
|
<commit_before><commit_msg>bots: Add management command for making outgoing webhook bot.<commit_after>from __future__ import absolute_import
from __future__ import print_function
from typing import Any
from argparse import ArgumentParser
from django.core.management.base import BaseCommand
from zerver.lib.actions import do_rename_stream
from zerver.lib.str_utils import force_text
from zerver.models import Realm, Service, UserProfile, get_realm
import sys
class Command(BaseCommand):
help = """Given an existing bot, converts it into an outgoing webhook bot."""
def add_arguments(self, parser):
# type: (ArgumentParser) -> None
parser.add_argument('string_id', metavar='<string_id>', type=str,
help='subdomain or string_id of bot')
parser.add_argument('bot_email', metavar='<bot_email>', type=str,
help='email of bot')
parser.add_argument('service_name', metavar='<service_name>', type=str,
help='name of Service object to create')
parser.add_argument('base_url', metavar='<base_url>', type=str,
help='base url of outgoing webhook')
# TODO: Add token and interface as arguments once OutgoingWebhookWorker
# uses these fields on the Service object.
def handle(self, *args, **options):
# type: (*Any, **str) -> None
string_id = options['string_id']
bot_email = options['bot_email']
service_name = options['service_name']
base_url = options['base_url']
encoding = sys.getfilesystemencoding()
realm = get_realm(force_text(string_id, encoding))
if realm is None:
print('Unknown subdomain or string_id %s' % (string_id,))
exit(1)
if not bot_email:
print('Email of existing bot must be provided')
exit(1)
if not service_name:
print('Name for Service object must be provided')
exit(1)
if not base_url:
print('Base URL of outgoing webhook must be provided')
exit(1)
# TODO: Normalize email?
bot_profile = UserProfile.objects.get(email=bot_email)
if not bot_profile:
print('User %s does not exist' % (bot_email,))
exit(1)
if not bot_profile.is_bot:
print('User %s is not a bot' % (bot_email,))
exit(1)
if bot_profile.is_outgoing_webhook_bot:
print('%s is already marked as an outgoing webhook bot' % (bot_email,))
exit(1)
Service.objects.create(name=service_name,
user_profile=bot_profile,
base_url=base_url,
token='',
interface=1)
bot_profile.bot_type = UserProfile.OUTGOING_WEBHOOK_BOT
bot_profile.save()
print('Successfully converted %s into an outgoing webhook bot' % (bot_email,))
|
|
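The command above validates its inputs with print-and-exit; a common alternative in Django management commands is raising CommandError, which writes to stderr and sets a non-zero exit code for you. A minimal skeleton of the same structure under that variation (command and argument names are hypothetical, not part of Zulip):

from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = """Example command with one positional argument."""

    def add_arguments(self, parser):
        parser.add_argument('bot_email', metavar='<bot_email>', type=str,
                            help='email of the bot to operate on')

    def handle(self, *args, **options):
        bot_email = options['bot_email']
        if not bot_email:
            # Raising CommandError replaces the print()/exit(1) pair.
            raise CommandError('Email of existing bot must be provided')
        self.stdout.write('Would operate on %s' % (bot_email,))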
f9815ac660263f04463a1d15e8c0a87edca518ca
|
senlin/tests/tempest/api/profiles/test_profile_update.py
|
senlin/tests/tempest/api/profiles/test_profile_update.py
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib import decorators
from senlin.tests.tempest.api import base
from senlin.tests.tempest.common import constants
class TestProfileUpdate(base.BaseSenlinTest):
@classmethod
def resource_setup(cls):
super(TestProfileUpdate, cls).resource_setup()
# Create profile
cls.profile = cls.create_profile(constants.spec_nova_server)
@classmethod
def resource_cleanup(cls):
# Delete profile
cls.client.delete_obj('profiles', cls.profile['id'])
@decorators.idempotent_id('d7efdd92-1687-444e-afcc-b7f9c7e37478')
def test_update_profile(self):
params = {
'profile': {
'name': 'updated-profile-name',
'metadata': {'bar': 'foo'}
}
}
res = self.client.update_obj('profiles', self.profile['id'], params)
# Verify resp of profile update API
self.assertEqual(200, res['status'])
self.assertIsNotNone(res['body'])
profile = res['body']
for key in ['created_at', 'domain', 'id', 'metadata', 'name',
'project', 'spec', 'type', 'updated_at', 'user']:
self.assertIn(key, profile)
self.assertEqual('updated-profile-name', profile['name'])
self.assertEqual({'bar': 'foo'}, profile['metadata'])
|
Add API test for profile update
|
Add API test for profile update
Add API test for profile update
Change-Id: I82a15c72409013c67a1f99fadeea45c52977588d
|
Python
|
apache-2.0
|
openstack/senlin,openstack/senlin,openstack/senlin,stackforge/senlin,stackforge/senlin
|
Add API test for profile update
Add API test for profile update
Change-Id: I82a15c72409013c67a1f99fadeea45c52977588d
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib import decorators
from senlin.tests.tempest.api import base
from senlin.tests.tempest.common import constants
class TestProfileUpdate(base.BaseSenlinTest):
@classmethod
def resource_setup(cls):
super(TestProfileUpdate, cls).resource_setup()
# Create profile
cls.profile = cls.create_profile(constants.spec_nova_server)
@classmethod
def resource_cleanup(cls):
# Delete profile
cls.client.delete_obj('profiles', cls.profile['id'])
@decorators.idempotent_id('d7efdd92-1687-444e-afcc-b7f9c7e37478')
def test_update_profile(self):
params = {
'profile': {
'name': 'updated-profile-name',
'metadata': {'bar': 'foo'}
}
}
res = self.client.update_obj('profiles', self.profile['id'], params)
# Verify resp of profile update API
self.assertEqual(200, res['status'])
self.assertIsNotNone(res['body'])
profile = res['body']
for key in ['created_at', 'domain', 'id', 'metadata', 'name',
'project', 'spec', 'type', 'updated_at', 'user']:
self.assertIn(key, profile)
self.assertEqual('updated-profile-name', profile['name'])
self.assertEqual({'bar': 'foo'}, profile['metadata'])
|
<commit_before><commit_msg>Add API test for profile update
Add API test for profile update
Change-Id: I82a15c72409013c67a1f99fadeea45c52977588d<commit_after>
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib import decorators
from senlin.tests.tempest.api import base
from senlin.tests.tempest.common import constants
class TestProfileUpdate(base.BaseSenlinTest):
@classmethod
def resource_setup(cls):
super(TestProfileUpdate, cls).resource_setup()
# Create profile
cls.profile = cls.create_profile(constants.spec_nova_server)
@classmethod
def resource_cleanup(cls):
# Delete profile
cls.client.delete_obj('profiles', cls.profile['id'])
@decorators.idempotent_id('d7efdd92-1687-444e-afcc-b7f9c7e37478')
def test_update_profile(self):
params = {
'profile': {
'name': 'updated-profile-name',
'metadata': {'bar': 'foo'}
}
}
res = self.client.update_obj('profiles', self.profile['id'], params)
# Verify resp of profile update API
self.assertEqual(200, res['status'])
self.assertIsNotNone(res['body'])
profile = res['body']
for key in ['created_at', 'domain', 'id', 'metadata', 'name',
'project', 'spec', 'type', 'updated_at', 'user']:
self.assertIn(key, profile)
self.assertEqual('updated-profile-name', profile['name'])
self.assertEqual({'bar': 'foo'}, profile['metadata'])
|
Add API test for profile update
Add API test for profile update
Change-Id: I82a15c72409013c67a1f99fadeea45c52977588d# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib import decorators
from senlin.tests.tempest.api import base
from senlin.tests.tempest.common import constants
class TestProfileUpdate(base.BaseSenlinTest):
@classmethod
def resource_setup(cls):
super(TestProfileUpdate, cls).resource_setup()
# Create profile
cls.profile = cls.create_profile(constants.spec_nova_server)
@classmethod
def resource_cleanup(cls):
# Delete profile
cls.client.delete_obj('profiles', cls.profile['id'])
@decorators.idempotent_id('d7efdd92-1687-444e-afcc-b7f9c7e37478')
def test_update_profile(self):
params = {
'profile': {
'name': 'updated-profile-name',
'metadata': {'bar': 'foo'}
}
}
res = self.client.update_obj('profiles', self.profile['id'], params)
# Verify resp of profile update API
self.assertEqual(200, res['status'])
self.assertIsNotNone(res['body'])
profile = res['body']
for key in ['created_at', 'domain', 'id', 'metadata', 'name',
'project', 'spec', 'type', 'updated_at', 'user']:
self.assertIn(key, profile)
self.assertEqual('updated-profile-name', profile['name'])
self.assertEqual({'bar': 'foo'}, profile['metadata'])
|
<commit_before><commit_msg>Add API test for profile update
Add API test for profile update
Change-Id: I82a15c72409013c67a1f99fadeea45c52977588d<commit_after># Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from tempest.lib import decorators
from senlin.tests.tempest.api import base
from senlin.tests.tempest.common import constants
class TestProfileUpdate(base.BaseSenlinTest):
@classmethod
def resource_setup(cls):
super(TestProfileUpdate, cls).resource_setup()
# Create profile
cls.profile = cls.create_profile(constants.spec_nova_server)
@classmethod
def resource_cleanup(cls):
# Delete profile
cls.client.delete_obj('profiles', cls.profile['id'])
@decorators.idempotent_id('d7efdd92-1687-444e-afcc-b7f9c7e37478')
def test_update_profile(self):
params = {
'profile': {
'name': 'updated-profile-name',
'metadata': {'bar': 'foo'}
}
}
res = self.client.update_obj('profiles', self.profile['id'], params)
# Verify resp of profile update API
self.assertEqual(200, res['status'])
self.assertIsNotNone(res['body'])
profile = res['body']
for key in ['created_at', 'domain', 'id', 'metadata', 'name',
'project', 'spec', 'type', 'updated_at', 'user']:
self.assertIn(key, profile)
self.assertEqual('updated-profile-name', profile['name'])
self.assertEqual({'bar': 'foo'}, profile['metadata'])
|
|
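The test above builds its profile once per class in resource_setup and deletes it in resource_cleanup, so every test method in the class shares one fixture. The plain-unittest analogue of that pairing looks like this (a generic sketch, not tempest code):

import unittest

class TestSharedFixture(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Built once for the whole class, like resource_setup above.
        cls.profile = {'id': 'fake-id', 'name': 'original-name'}

    @classmethod
    def tearDownClass(cls):
        # Torn down once, like resource_cleanup above.
        cls.profile = None

    def test_update_name(self):
        self.profile['name'] = 'updated-profile-name'
        self.assertEqual('updated-profile-name', self.profile['name'])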
4e34bcb117366592722fc69c947ae97f188e0d28
|
tests/test_faker_schema.py
|
tests/test_faker_schema.py
|
import unittest
from faker_schema.faker_schema import FakerSchema
class MockFaker(object):
def name(self):
return 'John Doe'
def address(self):
return '4570 Jaime Plains Suite 188\nNew Johnny, DE 89711-3908'
def email(self):
return 'towen@nelson.biz'
def street_address(self):
return '869 Massey Tunnel'
def city(self):
return 'Copenhagen'
def country(self):
return 'Denmark'
def postalcode(self):
return '17204'
class TestFakerSchema(unittest.TestCase):
def setUp(self):
self.faker_schema = FakerSchema(faker=MockFaker())
def test_generate_fake_flat_schema(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
def test_generate_fake_flat_schema_4_iterations(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema, iterations=4)
self.assertIsInstance(data, list)
self.assertEquals(len(data), 4)
def test_generate_fake_nested_schema(self):
schema = {'Full Name': 'name', 'Location': {'Address': 'street_address', 'City': 'city',
'Country': 'country', 'Postal Code': 'postalcode'}}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['Location'], dict)
def test_generate_fake_schema_with_list(self):
schema = {'Employer': 'name', 'EmployeeList': [{'Employee1': 'name'},
{'Employee2': 'name'}]}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['EmployeeList'], list)
|
Add unit tests for faker_schema module
|
Add unit tests for faker_schema module
|
Python
|
mit
|
ueg1990/faker-schema
|
Add unit tests for faker_schema module
|
import unittest
from faker_schema.faker_schema import FakerSchema
class MockFaker(object):
def name(self):
return 'John Doe'
def address(self):
return '4570 Jaime Plains Suite 188\nNew Johnny, DE 89711-3908'
def email(self):
return 'towen@nelson.biz'
def street_address(self):
return '869 Massey Tunnel'
def city(self):
return 'Copenhagen'
def country(self):
return 'Denmark'
def postalcode(self):
return '17204'
class TestFakerSchema(unittest.TestCase):
def setUp(self):
self.faker_schema = FakerSchema(faker=MockFaker())
def test_generate_fake_flat_schema(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
def test_generate_fake_flat_schema_4_iterations(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema, iterations=4)
self.assertIsInstance(data, list)
self.assertEquals(len(data), 4)
def test_generate_fake_nested_schema(self):
schema = {'Full Name': 'name', 'Location': {'Address': 'street_address', 'City': 'city',
'Country': 'country', 'Postal Code': 'postalcode'}}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['Location'], dict)
def test_generate_fake_schema_with_list(self):
schema = {'Employer': 'name', 'EmployeeList': [{'Employee1': 'name'},
{'Employee2': 'name'}]}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['EmployeeList'], list)
|
<commit_before><commit_msg>Add unit tests for faker_schema module<commit_after>
|
import unittest
from faker_schema.faker_schema import FakerSchema
class MockFaker(object):
def name(self):
return 'John Doe'
def address(self):
return '4570 Jaime Plains Suite 188\nNew Johnny, DE 89711-3908'
def email(self):
return 'towen@nelson.biz'
def street_address(self):
return '869 Massey Tunnel'
def city(self):
return 'Copenhagen'
def country(self):
return 'Denmark'
def postalcode(self):
return '17204'
class TestFakerSchema(unittest.TestCase):
def setUp(self):
self.faker_schema = FakerSchema(faker=MockFaker())
def test_generate_fake_flat_schema(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
def test_generate_fake_flat_schema_4_iterations(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema, iterations=4)
self.assertIsInstance(data, list)
self.assertEquals(len(data), 4)
def test_generate_fake_nested_schema(self):
schema = {'Full Name': 'name', 'Location': {'Address': 'street_address', 'City': 'city',
'Country': 'country', 'Postal Code': 'postalcode'}}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['Location'], dict)
def test_generate_fake_schema_with_list(self):
schema = {'Employer': 'name', 'EmployeeList': [{'Employee1': 'name'},
{'Employee2': 'name'}]}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['EmployeeList'], list)
|
Add unit tests for faker_schema moduleimport unittest
from faker_schema.faker_schema import FakerSchema
class MockFaker(object):
def name(self):
return 'John Doe'
def address(self):
return '4570 Jaime Plains Suite 188\nNew Johnny, DE 89711-3908'
def email(self):
return 'towen@nelson.biz'
def street_address(self):
return '869 Massey Tunnel'
def city(self):
return 'Copenhagen'
def country(self):
return 'Denmark'
def postalcode(self):
return '17204'
class TestFakerSchema(unittest.TestCase):
def setUp(self):
self.faker_schema = FakerSchema(faker=MockFaker())
def test_generate_fake_flat_schema(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
def test_generate_fake_flat_schema_4_iterations(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema, iterations=4)
self.assertIsInstance(data, list)
self.assertEquals(len(data), 4)
def test_generate_fake_nested_schema(self):
schema = {'Full Name': 'name', 'Location': {'Address': 'street_address', 'City': 'city',
'Country': 'country', 'Postal Code': 'postalcode'}}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['Location'], dict)
def test_generate_fake_schema_with_list(self):
schema = {'Employer': 'name', 'EmployeeList': [{'Employee1': 'name'},
{'Employee2': 'name'}]}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['EmployeeList'], list)
|
<commit_before><commit_msg>Add unit tests for faker_schema module<commit_after>import unittest
from faker_schema.faker_schema import FakerSchema
class MockFaker(object):
def name(self):
return 'John Doe'
def address(self):
return '4570 Jaime Plains Suite 188\nNew Johnny, DE 89711-3908'
def email(self):
return 'towen@nelson.biz'
def street_address(self):
return '869 Massey Tunnel'
def city(self):
return 'Copenhagen'
def country(self):
return 'Denmark'
def postalcode(self):
return '17204'
class TestFakerSchema(unittest.TestCase):
def setUp(self):
self.faker_schema = FakerSchema(faker=MockFaker())
def test_generate_fake_flat_schema(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
def test_generate_fake_flat_schema_4_iterations(self):
schema = {'Full Name': 'name', 'Address': 'address', 'Email': 'email'}
data = self.faker_schema.generate_fake(schema, iterations=4)
self.assertIsInstance(data, list)
self.assertEquals(len(data), 4)
def test_generate_fake_nested_schema(self):
schema = {'Full Name': 'name', 'Location': {'Address': 'street_address', 'City': 'city',
'Country': 'country', 'Postal Code': 'postalcode'}}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['Location'], dict)
def test_generate_fake_schema_with_list(self):
schema = {'Employer': 'name', 'EmployeeList': [{'Employee1': 'name'},
{'Employee2': 'name'}]}
data = self.faker_schema.generate_fake(schema)
self.assertIsInstance(data, dict)
self.assertIsInstance(data['EmployeeList'], list)
|
|
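These tests swap the real Faker out through FakerSchema's constructor rather than patching, which keeps the fake data deterministic (note that assertEquals, used above, is a deprecated alias of assertEqual). The injection pattern in isolation (class names here are illustrative):

class ReportBuilder:
    def __init__(self, source):
        self.source = source  # any object with a name() method will do

    def build(self):
        return 'Report for %s' % self.source.name()

class FakeSource:
    def name(self):
        return 'John Doe'

assert ReportBuilder(FakeSource()).build() == 'Report for John Doe'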
d5c5364becb872e4e6508e8e8ad17a9d5fdce627
|
tests/test_slr_conflict.py
|
tests/test_slr_conflict.py
|
import pytest
from parglare import Grammar, Parser
from parglare.exceptions import ShiftReduceConflict
def test_slr_conflict():
"""
Unambiguous grammar which is not SLR(1).
From the Dragon Book.
    This grammar has an S/R conflict if SLR tables are used.
"""
grammar = """
S = L '=' R | R;
L = '*' R | 'id';
R = L;
"""
grammar = Grammar.from_string(grammar)
with pytest.raises(ShiftReduceConflict):
Parser(grammar)
|
Test for Shift/Reduce conflict in non-SLR grammar when SLR tables are used.
|
Test for Shift/Reduce conflict in non-SLR grammar when SLR tables are used.
|
Python
|
mit
|
igordejanovic/parglare,igordejanovic/parglare
|
Test for Shift/Reduce conflict in non-SLR grammar when SLR tables are used.
|
import pytest
from parglare import Grammar, Parser
from parglare.exceptions import ShiftReduceConflict
def test_slr_conflict():
"""
Unambiguous grammar which is not SLR(1).
From the Dragon Book.
    This grammar has an S/R conflict if SLR tables are used.
"""
grammar = """
S = L '=' R | R;
L = '*' R | 'id';
R = L;
"""
grammar = Grammar.from_string(grammar)
with pytest.raises(ShiftReduceConflict):
Parser(grammar)
|
<commit_before><commit_msg>Test for Shift/Reduce conflict in non-SLR grammar when SLR tables are used.<commit_after>
|
import pytest
from parglare import Grammar, Parser
from parglare.exceptions import ShiftReduceConflict
def test_slr_conflict():
"""
Unambiguous grammar which is not SLR(1).
From the Dragon Book.
    This grammar has an S/R conflict if SLR tables are used.
"""
grammar = """
S = L '=' R | R;
L = '*' R | 'id';
R = L;
"""
grammar = Grammar.from_string(grammar)
with pytest.raises(ShiftReduceConflict):
Parser(grammar)
|
Test for Shift/Reduce conflict in non-SLR grammar when SLR tables are used.import pytest
from parglare import Grammar, Parser
from parglare.exceptions import ShiftReduceConflict
def test_slr_conflict():
"""
Unambiguous grammar which is not SLR(1).
From the Dragon Book.
    This grammar has an S/R conflict if SLR tables are used.
"""
grammar = """
S = L '=' R | R;
L = '*' R | 'id';
R = L;
"""
grammar = Grammar.from_string(grammar)
with pytest.raises(ShiftReduceConflict):
Parser(grammar)
|
<commit_before><commit_msg>Test for Shift/Reduce conflict in non-SLR grammar when SLR tables are used.<commit_after>import pytest
from parglare import Grammar, Parser
from parglare.exceptions import ShiftReduceConflict
def test_slr_conflict():
"""
Unambiguous grammar which is not SLR(1).
From the Dragon Book.
    This grammar has an S/R conflict if SLR tables are used.
"""
grammar = """
S = L '=' R | R;
L = '*' R | 'id';
R = L;
"""
grammar = Grammar.from_string(grammar)
with pytest.raises(ShiftReduceConflict):
Parser(grammar)
|
|
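The grammar above is the Dragon Book example of an unambiguous grammar outside SLR(1): after an id is reduced to L, the SLR table can both shift '=' (from S = L '=' R) and reduce R -> L, because '=' lands in FOLLOW(R); canonical LR(1) contexts rule that reduction out. Outside pytest, the conflict the test asserts can be observed directly with only the APIs the test already uses:

from parglare import Grammar, Parser
from parglare.exceptions import ShiftReduceConflict

grammar = Grammar.from_string("""
S = L '=' R | R;
L = '*' R | 'id';
R = L;
""")
try:
    Parser(grammar)
except ShiftReduceConflict as exc:
    print('shift/reduce conflict detected:', exc)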
3a1442e680edf41e7952b966afef5ff8a7d5e8f5
|
fa_to_gfa.py
|
fa_to_gfa.py
|
import sys
if __name__ == "__main__":
id_ctr = 0
print "\t".join(["H", "VZ:i:2.0"])
with open(sys.argv[1],"r") as ifi:
for line in ifi:
if not line.startswith(">"):
id_ctr += 1
print "\t".join(["S", str(id_ctr), str(len(line.strip())), line.strip(), ""])
|
Add fa->gfa script for testing assembly stats
|
Add fa->gfa script for testing assembly stats
|
Python
|
mit
|
edawson/gfakluge,edawson/gfakluge,edawson/gfakluge,edawson/gfakluge
|
Add fa->gfa script for testing assembly stats
|
import sys
if __name__ == "__main__":
id_ctr = 0
print "\t".join(["H", "VZ:i:2.0"])
with open(sys.argv[1],"r") as ifi:
for line in ifi:
if not line.startswith(">"):
id_ctr += 1
print "\t".join(["S", str(id_ctr), str(len(line.strip())), line.strip(), ""])
|
<commit_before><commit_msg>Add fa->gfa script for testing assembly stats<commit_after>
|
import sys
if __name__ == "__main__":
id_ctr = 0
print "\t".join(["H", "VZ:i:2.0"])
with open(sys.argv[1],"r") as ifi:
for line in ifi:
if not line.startswith(">"):
id_ctr += 1
print "\t".join(["S", str(id_ctr), str(len(line.strip())), line.strip(), ""])
|
Add fa->gfa script for testing assembly statsimport sys
if __name__ == "__main__":
id_ctr = 0
print "\t".join(["H", "VZ:i:2.0"])
with open(sys.argv[1],"r") as ifi:
for line in ifi:
if not line.startswith(">"):
id_ctr += 1
print "\t".join(["S", str(id_ctr), str(len(line.strip())), line.strip(), ""])
|
<commit_before><commit_msg>Add fa->gfa script for testing assembly stats<commit_after>import sys
if __name__ == "__main__":
id_ctr = 0
print "\t".join(["H", "VZ:i:2.0"])
with open(sys.argv[1],"r") as ifi:
for line in ifi:
if not line.startswith(">"):
id_ctr += 1
print "\t".join(["S", str(id_ctr), str(len(line.strip())), line.strip(), ""])
|
|
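The converter above is Python 2 (print statements) and assumes each sequence sits on a single FASTA line, since it emits one segment per non-header line. A Python 3 rendition of the same logic, under that same single-line assumption:

import sys

def fa_to_gfa(path):
    seg_id = 0
    print("\t".join(["H", "VZ:i:2.0"]))  # GFA 2.0 header line
    with open(path) as ifi:
        for line in ifi:
            if not line.startswith(">"):
                seg_id += 1
                seq = line.strip()
                # GFA2 segment record: S <sid> <slen> <sequence>
                print("\t".join(["S", str(seg_id), str(len(seq)), seq, ""]))

if __name__ == "__main__":
    fa_to_gfa(sys.argv[1])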
bb46ab063cc86525946563c809a896532d87147a
|
wqflask/tests/unit/wqflask/test_markdown_routes.py
|
wqflask/tests/unit/wqflask/test_markdown_routes.py
|
"""Test functions in markdown utils"""
import unittest
from unittest import mock
from wqflask.markdown_routes import render_markdown
class MockRequests404:
@property
def status_code():
return 404
class MockRequests200:
@property
def status_code():
return 200
@property
def content():
return """
# Glossary
This is some content
## Sub-heading
This is another sub-heading
"""
class TestMarkdownRoutesFunctions(unittest.TestCase):
"""Test cases for functions in markdown_routes"""
@mock.patch('wqflask.markdown_routes.requests.get')
def test_render_markdown(self, requests_mock):
requests_mock.return_value = MockRequests404
markdown_content = render_markdown("glossary.md")
requests_mock.assert_called_with(
"https://raw.githubusercontent.com"
"/genenetwork/genenetwork2/"
"wqflask/wqflask/static/"
"glossary.md")
self.assertEqual("<h1>Content</h1>\n",
markdown_content)
|
Add basic tests for rendering_markdown
|
Add basic tests for rendering_markdown
* wqflask/tests/unit/wqflask/test_markdown_routes.py: New tests.
|
Python
|
agpl-3.0
|
zsloan/genenetwork2,pjotrp/genenetwork2,pjotrp/genenetwork2,pjotrp/genenetwork2,genenetwork/genenetwork2,zsloan/genenetwork2,zsloan/genenetwork2,zsloan/genenetwork2,pjotrp/genenetwork2,genenetwork/genenetwork2,genenetwork/genenetwork2,pjotrp/genenetwork2,genenetwork/genenetwork2
|
Add basic tests for rendering_markdown
* wqflask/tests/unit/wqflask/test_markdown_routes.py: New tests.
|
"""Test functions in markdown utils"""
import unittest
from unittest import mock
from wqflask.markdown_routes import render_markdown
class MockRequests404:
@property
def status_code():
return 404
class MockRequests200:
@property
def status_code():
return 200
@property
def content():
return """
# Glossary
This is some content
## Sub-heading
This is another sub-heading
"""
class TestMarkdownRoutesFunctions(unittest.TestCase):
"""Test cases for functions in markdown_routes"""
@mock.patch('wqflask.markdown_routes.requests.get')
def test_render_markdown(self, requests_mock):
requests_mock.return_value = MockRequests404
markdown_content = render_markdown("glossary.md")
requests_mock.assert_called_with(
"https://raw.githubusercontent.com"
"/genenetwork/genenetwork2/"
"wqflask/wqflask/static/"
"glossary.md")
self.assertEqual("<h1>Content</h1>\n",
markdown_content)
|
<commit_before><commit_msg>Add basic tests for rendering_markdown
* wqflask/tests/unit/wqflask/test_markdown_routes.py: New tests.<commit_after>
|
"""Test functions in markdown utils"""
import unittest
from unittest import mock
from wqflask.markdown_routes import render_markdown
class MockRequests404:
@property
def status_code():
return 404
class MockRequests200:
@property
def status_code():
return 200
@property
def content():
return """
# Glossary
This is some content
## Sub-heading
This is another sub-heading
"""
class TestMarkdownRoutesFunctions(unittest.TestCase):
"""Test cases for functions in markdown_routes"""
@mock.patch('wqflask.markdown_routes.requests.get')
def test_render_markdown(self, requests_mock):
requests_mock.return_value = MockRequests404
markdown_content = render_markdown("glossary.md")
requests_mock.assert_called_with(
"https://raw.githubusercontent.com"
"/genenetwork/genenetwork2/"
"wqflask/wqflask/static/"
"glossary.md")
self.assertEqual("<h1>Content</h1>\n",
markdown_content)
|
Add basic tests for rendering_markdown
* wqflask/tests/unit/wqflask/test_markdown_routes.py: New tests."""Test functions in markdown utils"""
import unittest
from unittest import mock
from wqflask.markdown_routes import render_markdown
class MockRequests404:
@property
def status_code():
return 404
class MockRequests200:
@property
def status_code():
return 200
@property
def content():
return """
# Glossary
This is some content
## Sub-heading
This is another sub-heading
"""
class TestMarkdownRoutesFunctions(unittest.TestCase):
"""Test cases for functions in markdown_routes"""
@mock.patch('wqflask.markdown_routes.requests.get')
def test_render_markdown(self, requests_mock):
requests_mock.return_value = MockRequests404
markdown_content = render_markdown("glossary.md")
requests_mock.assert_called_with(
"https://raw.githubusercontent.com"
"/genenetwork/genenetwork2/"
"wqflask/wqflask/static/"
"glossary.md")
self.assertEqual("<h1>Content</h1>\n",
markdown_content)
|
<commit_before><commit_msg>Add basic tests for rendering_markdown
* wqflask/tests/unit/wqflask/test_markdown_routes.py: New tests.<commit_after>"""Test functions in markdown utils"""
import unittest
from unittest import mock
from wqflask.markdown_routes import render_markdown
class MockRequests404:
@property
def status_code():
return 404
class MockRequests200:
@property
def status_code():
return 200
@property
def content():
return """
# Glossary
This is some content
## Sub-heading
This is another sub-heading
"""
class TestMarkdownRoutesFunctions(unittest.TestCase):
"""Test cases for functions in markdown_routes"""
@mock.patch('wqflask.markdown_routes.requests.get')
def test_render_markdown(self, requests_mock):
requests_mock.return_value = MockRequests404
markdown_content = render_markdown("glossary.md")
requests_mock.assert_called_with(
"https://raw.githubusercontent.com"
"/genenetwork/genenetwork2/"
"wqflask/wqflask/static/"
"glossary.md")
self.assertEqual("<h1>Content</h1>\n",
markdown_content)
|
|
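One subtlety in the mocks above: status_code and content are written as properties without self, and the class object itself (not an instance) is assigned to return_value, so attribute access yields the property descriptor rather than the value. Plain class attributes make a simpler stub; a sketch of that alternative (response class names are illustrative):

class MockResponse404:
    status_code = 404  # plain class attribute, no property machinery

class MockResponse200:
    status_code = 200
    content = b"# Glossary\n\nThis is some content\n"

# Usage inside the test would stay the same:
#   requests_mock.return_value = MockResponse404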
e02475a78482d7f298a6c702e0935a8174a295db
|
tests/test_tutorial/test_first_steps/test_tutorial006.py
|
tests/test_tutorial/test_first_steps/test_tutorial006.py
|
import subprocess
import typer
from typer.testing import CliRunner
from first_steps import tutorial006 as mod
runner = CliRunner()
app = typer.Typer()
app.command()(mod.main)
def test_help():
result = runner.invoke(app, ["--help"])
assert result.exit_code == 0
assert "Say hi to NAME, optionally with a --lastname." in result.output
assert "If --formal is used, say hi very formally." in result.output
def test_1():
result = runner.invoke(app, ["Camila"])
assert result.exit_code == 0
assert "Hello Camila" in result.output
def test_option_lastname():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_option_lastname_2():
result = runner.invoke(app, ["--lastname", "Gutiérrez", "Camila"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_formal_1():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez", "--formal"])
assert result.exit_code == 0
assert "Good day Ms. Camila Gutiérrez." in result.output
def test_script():
result = subprocess.run(
["coverage", "run", "--parallel-mode", mod.__file__, "--help"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
)
assert "Usage" in result.stdout
|
Add tests for tutorial006 in First Steps, to test help from docstring
|
:white_check_mark: Add tests for tutorial006 in First Steps, to test help from docstring
|
Python
|
mit
|
tiangolo/typer,tiangolo/typer
|
:white_check_mark: Add tests for tutorial006 in First Steps, to test help from docstring
|
import subprocess
import typer
from typer.testing import CliRunner
from first_steps import tutorial006 as mod
runner = CliRunner()
app = typer.Typer()
app.command()(mod.main)
def test_help():
result = runner.invoke(app, ["--help"])
assert result.exit_code == 0
assert "Say hi to NAME, optionally with a --lastname." in result.output
assert "If --formal is used, say hi very formally." in result.output
def test_1():
result = runner.invoke(app, ["Camila"])
assert result.exit_code == 0
assert "Hello Camila" in result.output
def test_option_lastname():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_option_lastname_2():
result = runner.invoke(app, ["--lastname", "Gutiérrez", "Camila"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_formal_1():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez", "--formal"])
assert result.exit_code == 0
assert "Good day Ms. Camila Gutiérrez." in result.output
def test_script():
result = subprocess.run(
["coverage", "run", "--parallel-mode", mod.__file__, "--help"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
)
assert "Usage" in result.stdout
|
<commit_before><commit_msg>:white_check_mark: Add tests for tutorial006 in First Steps, to test help from docstring<commit_after>
|
import subprocess
import typer
from typer.testing import CliRunner
from first_steps import tutorial006 as mod
runner = CliRunner()
app = typer.Typer()
app.command()(mod.main)
def test_help():
result = runner.invoke(app, ["--help"])
assert result.exit_code == 0
assert "Say hi to NAME, optionally with a --lastname." in result.output
assert "If --formal is used, say hi very formally." in result.output
def test_1():
result = runner.invoke(app, ["Camila"])
assert result.exit_code == 0
assert "Hello Camila" in result.output
def test_option_lastname():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_option_lastname_2():
result = runner.invoke(app, ["--lastname", "Gutiérrez", "Camila"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_formal_1():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez", "--formal"])
assert result.exit_code == 0
assert "Good day Ms. Camila Gutiérrez." in result.output
def test_script():
result = subprocess.run(
["coverage", "run", "--parallel-mode", mod.__file__, "--help"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
)
assert "Usage" in result.stdout
|
:white_check_mark: Add tests for tutorial006 in First Steps, to test help from docstring
import subprocess
import typer
from typer.testing import CliRunner
from first_steps import tutorial006 as mod
runner = CliRunner()
app = typer.Typer()
app.command()(mod.main)
def test_help():
result = runner.invoke(app, ["--help"])
assert result.exit_code == 0
assert "Say hi to NAME, optionally with a --lastname." in result.output
assert "If --formal is used, say hi very formally." in result.output
def test_1():
result = runner.invoke(app, ["Camila"])
assert result.exit_code == 0
assert "Hello Camila" in result.output
def test_option_lastname():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_option_lastname_2():
result = runner.invoke(app, ["--lastname", "Gutiérrez", "Camila"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_formal_1():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez", "--formal"])
assert result.exit_code == 0
assert "Good day Ms. Camila Gutiérrez." in result.output
def test_script():
result = subprocess.run(
["coverage", "run", "--parallel-mode", mod.__file__, "--help"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
)
assert "Usage" in result.stdout
|
<commit_before><commit_msg>:white_check_mark: Add tests for tutorial006 in First Steps, to test help from docstring<commit_after>import subprocess
import typer
from typer.testing import CliRunner
from first_steps import tutorial006 as mod
runner = CliRunner()
app = typer.Typer()
app.command()(mod.main)
def test_help():
result = runner.invoke(app, ["--help"])
assert result.exit_code == 0
assert "Say hi to NAME, optionally with a --lastname." in result.output
assert "If --formal is used, say hi very formally." in result.output
def test_1():
result = runner.invoke(app, ["Camila"])
assert result.exit_code == 0
assert "Hello Camila" in result.output
def test_option_lastname():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_option_lastname_2():
result = runner.invoke(app, ["--lastname", "Gutiérrez", "Camila"])
assert result.exit_code == 0
assert "Hello Camila Gutiérrez" in result.output
def test_formal_1():
result = runner.invoke(app, ["Camila", "--lastname", "Gutiérrez", "--formal"])
assert result.exit_code == 0
assert "Good day Ms. Camila Gutiérrez." in result.output
def test_script():
result = subprocess.run(
["coverage", "run", "--parallel-mode", mod.__file__, "--help"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
encoding="utf-8",
)
assert "Usage" in result.stdout
|
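For context, here is a plausible shape of first_steps/tutorial006.py, reconstructed from the assertions above rather than copied from the repository; Typer lifts the function docstring into the --help text that test_help checks:

import typer

def main(name: str, lastname: str = "", formal: bool = False):
    """
    Say hi to NAME, optionally with a --lastname.

    If --formal is used, say hi very formally.
    """
    if formal:
        typer.echo(f"Good day Ms. {name} {lastname}.")
    else:
        typer.echo(f"Hello {name} {lastname}")

if __name__ == "__main__":
    typer.run(main)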
|
02e14627539eb04f49c2679663426eefabbf6e8d
|
src/tf/test_quantization.py
|
src/tf/test_quantization.py
|
import numpy as np
def to_8(M_real, scale, offset):
result = (M_real / scale - offset).round()
result[result < 0] = 0
result[result > 255] = 255
return result.astype(np.uint8)
def gemm8_32(l, r):
assert l.dtype == np.uint8
assert r.dtype == np.uint8
return np.dot(l.astype(np.uint32), r.astype(np.uint32))
def example():
lhs_real = np.array([
[-0.1, 0.4],
[0.1, 0.2],
])
lhs_scale = 1. / 128
lhs_offset = -128
rhs_real = np.array([
[30.],
[5.],
])
rhs_scale = 1.
rhs_offset = 5
result_scale = 1.
result_offset = 0.
lhs8 = to_8(lhs_real, lhs_scale, lhs_offset)
rhs8 = to_8(rhs_real, rhs_scale, rhs_offset)
print("As 8bit: {}, {}".format(lhs8, rhs8))
P = np.ones(lhs8.shape, dtype=np.uint32)
Q = np.ones(rhs8.shape, dtype=np.uint32)
lhs_offset_16 = np.int8(lhs_offset)
rhs_offset_16 = np.int8(rhs_offset)
terms = (
gemm8_32(lhs8, rhs8),
lhs_offset_16 * np.dot(P, rhs8),
np.dot(lhs8, Q * rhs_offset_16),
lhs_offset_16 * (rhs_offset_16 * np.dot(P, Q)))
print("Terms: {}".format(" + ".join(map(str, terms))))
sum_terms = sum(terms)
print("Sum of terms: {}".format(sum_terms))
result_real = (lhs_scale * rhs_scale) * sum_terms
print("(Q result, FP result): {}\n{}".format(result_real, np.dot(lhs_real, rhs_real)))
result = result_offset + (lhs_scale * rhs_scale / result_scale) * sum_terms
print("Final result: {}".format(result))
example()
|
Add code to demonstrate quantization flow
|
Add code to demonstrate quantization flow
|
Python
|
mit
|
kkiningh/cs231n-project,kkiningh/cs231n-project
|
Add code to demonstrate quantization flow
|
import numpy as np
def to_8(M_real, scale, offset):
result = (M_real / scale - offset).round()
result[result < 0] = 0
result[result > 255] = 255
return result.astype(np.uint8)
def gemm8_32(l, r):
assert l.dtype == np.uint8
assert r.dtype == np.uint8
return np.dot(l.astype(np.uint32), r.astype(np.uint32))
def example():
lhs_real = np.array([
[-0.1, 0.4],
[0.1, 0.2],
])
lhs_scale = 1. / 128
lhs_offset = -128
rhs_real = np.array([
[30.],
[5.],
])
rhs_scale = 1.
rhs_offset = 5
result_scale = 1.
result_offset = 0.
lhs8 = to_8(lhs_real, lhs_scale, lhs_offset)
rhs8 = to_8(rhs_real, rhs_scale, rhs_offset)
print("As 8bit: {}, {}".format(lhs8, rhs8))
P = np.ones(lhs8.shape, dtype=np.uint32)
Q = np.ones(rhs8.shape, dtype=np.uint32)
lhs_offset_16 = np.int8(lhs_offset)
rhs_offset_16 = np.int8(rhs_offset)
terms = (
gemm8_32(lhs8, rhs8),
lhs_offset_16 * np.dot(P, rhs8),
np.dot(lhs8, Q * rhs_offset_16),
lhs_offset_16 * (rhs_offset_16 * np.dot(P, Q)))
print("Terms: {}".format(" + ".join(map(str, terms))))
sum_terms = sum(terms)
print("Sum of terms: {}".format(sum_terms))
result_real = (lhs_scale * rhs_scale) * sum_terms
print("(Q result, FP result): {}\n{}".format(result_real, np.dot(lhs_real, rhs_real)))
result = result_offset + (lhs_scale * rhs_scale / result_scale) * sum_terms
print("Final result: {}".format(result))
example()
|
<commit_before><commit_msg>Add code to demonstrate quantization flow<commit_after>
|
import numpy as np
def to_8(M_real, scale, offset):
result = (M_real / scale - offset).round()
result[result < 0] = 0
result[result > 255] = 255
return result.astype(np.uint8)
def gemm8_32(l, r):
assert l.dtype == np.uint8
assert r.dtype == np.uint8
return np.dot(l.astype(np.uint32), r.astype(np.uint32))
def example():
lhs_real = np.array([
[-0.1, 0.4],
[0.1, 0.2],
])
lhs_scale = 1. / 128
lhs_offset = -128
rhs_real = np.array([
[30.],
[5.],
])
rhs_scale = 1.
rhs_offset = 5
result_scale = 1.
result_offset = 0.
lhs8 = to_8(lhs_real, lhs_scale, lhs_offset)
rhs8 = to_8(rhs_real, rhs_scale, rhs_offset)
print("As 8bit: {}, {}".format(lhs8, rhs8))
P = np.ones(lhs8.shape, dtype=np.uint32)
Q = np.ones(rhs8.shape, dtype=np.uint32)
lhs_offset_16 = np.int8(lhs_offset)
rhs_offset_16 = np.int8(rhs_offset)
terms = (
gemm8_32(lhs8, rhs8),
lhs_offset_16 * np.dot(P, rhs8),
np.dot(lhs8, Q * rhs_offset_16),
lhs_offset_16 * (rhs_offset_16 * np.dot(P, Q)))
print("Terms: {}".format(" + ".join(map(str, terms))))
sum_terms = sum(terms)
print("Sum of terms: {}".format(sum_terms))
result_real = (lhs_scale * rhs_scale) * sum_terms
print("(Q result, FP result): {}\n{}".format(result_real, np.dot(lhs_real, rhs_real)))
result = result_offset + (lhs_scale * rhs_scale / result_scale) * sum_terms
print("Final result: {}".format(result))
example()
|
Add code to demonstrate quantization flow
import numpy as np
def to_8(M_real, scale, offset):
result = (M_real / scale - offset).round()
result[result < 0] = 0
result[result > 255] = 255
return result.astype(np.uint8)
def gemm8_32(l, r):
assert l.dtype == np.uint8
assert r.dtype == np.uint8
return np.dot(l.astype(np.uint32), r.astype(np.uint32))
def example():
lhs_real = np.array([
[-0.1, 0.4],
[0.1, 0.2],
])
lhs_scale = 1. / 128
lhs_offset = -128
rhs_real = np.array([
[30.],
[5.],
])
rhs_scale = 1.
rhs_offset = 5
result_scale = 1.
result_offset = 0.
lhs8 = to_8(lhs_real, lhs_scale, lhs_offset)
rhs8 = to_8(rhs_real, rhs_scale, rhs_offset)
print("As 8bit: {}, {}".format(lhs8, rhs8))
P = np.ones(lhs8.shape, dtype=np.uint32)
Q = np.ones(rhs8.shape, dtype=np.uint32)
lhs_offset_16 = np.int8(lhs_offset)
rhs_offset_16 = np.int8(rhs_offset)
terms = (
gemm8_32(lhs8, rhs8),
lhs_offset_16 * np.dot(P, rhs8),
np.dot(lhs8, Q * rhs_offset_16),
lhs_offset_16 * (rhs_offset_16 * np.dot(P, Q)))
print("Terms: {}".format(" + ".join(map(str, terms))))
sum_terms = sum(terms)
print("Sum of terms: {}".format(sum_terms))
result_real = (lhs_scale * rhs_scale) * sum_terms
print("(Q result, FP result): {}\n{}".format(result_real, np.dot(lhs_real, rhs_real)))
result = result_offset + (lhs_scale * rhs_scale / result_scale) * sum_terms
print("Final result: {}".format(result))
example()
|
<commit_before><commit_msg>Add code to demonstrate quantization flow<commit_after>import numpy as np
def to_8(M_real, scale, offset):
result = (M_real / scale - offset).round()
result[result < 0] = 0
result[result > 255] = 255
return result.astype(np.uint8)
def gemm8_32(l, r):
assert l.dtype == np.uint8
assert r.dtype == np.uint8
return np.dot(l.astype(np.uint32), r.astype(np.uint32))
def example():
lhs_real = np.array([
[-0.1, 0.4],
[0.1, 0.2],
])
lhs_scale = 1. / 128
lhs_offset = -128
rhs_real = np.array([
[30.],
[5.],
])
rhs_scale = 1.
rhs_offset = 5
result_scale = 1.
result_offset = 0.
lhs8 = to_8(lhs_real, lhs_scale, lhs_offset)
rhs8 = to_8(rhs_real, rhs_scale, rhs_offset)
print("As 8bit: {}, {}".format(lhs8, rhs8))
P = np.ones(lhs8.shape, dtype=np.uint32)
Q = np.ones(rhs8.shape, dtype=np.uint32)
lhs_offset_16 = np.int8(lhs_offset)
rhs_offset_16 = np.int8(rhs_offset)
terms = (
gemm8_32(lhs8, rhs8),
lhs_offset_16 * np.dot(P, rhs8),
np.dot(lhs8, Q * rhs_offset_16),
lhs_offset_16 * (rhs_offset_16 * np.dot(P, Q)))
print("Terms: {}".format(" + ".join(map(str, terms))))
sum_terms = sum(terms)
print("Sum of terms: {}".format(sum_terms))
result_real = (lhs_scale * rhs_scale) * sum_terms
print("(Q result, FP result): {}\n{}".format(result_real, np.dot(lhs_real, rhs_real)))
result = result_offset + (lhs_scale * rhs_scale / result_scale) * sum_terms
print("Final result: {}".format(result))
example()
|
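The four entries of terms above are the standard affine-quantization expansion of a matrix product. Since to_8 computes $L_8 = \operatorname{round}(L/s_l - o_l)$ clipped to $[0,255]$, we have $L \approx s_l (L_8 + o_l P)$ with $P$ an all-ones matrix of the same shape (and likewise $R \approx s_r (R_8 + o_r Q)$), so

$$L R \approx s_l s_r\,(L_8 + o_l P)(R_8 + o_r Q) = s_l s_r \big(L_8 R_8 + o_l P R_8 + o_r L_8 Q + o_l o_r P Q\big),$$

which is exactly one 8-bit GEMM plus three cheap offset-correction terms.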
|
8c85c095dbdbbe76639fd2dd5cfda64eb7057da4
|
AdaptivePELE/analysis/writePrecisePathToSnapshot.py
|
AdaptivePELE/analysis/writePrecisePathToSnapshot.py
|
import os
import sys
import argparse
import glob
import itertools
from AdaptivePELE.utilities import utilities
def parseArguments():
desc = "Write the information related to the conformation network to file\n"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("clusteringObject", type=str, help="Path to the clustering object")
parser.add_argument("trajectory", type=int, help="Trajectory number")
parser.add_argument("snapshot", type=int, help="Snapshot to select (in accepted steps)")
parser.add_argument("epoch", type=str, help="Path to the epoch to search the snapshot")
parser.add_argument("-o", type=str, default=None, help="Output path where to write the files")
args = parser.parse_args()
return args.clusteringObject, args.trajectory, args.snapshot, args.epoch, args.o
if __name__ == "__main__":
clusteringObject, trajectory, snapshot, epoch, outputPath = parseArguments()
if outputPath is not None:
outputPath = os.path.join(outputPath, "")
if not os.path.exists(outputPath):
os.makedirs(outputPath)
else:
outputPath = ""
sys.stderr.write("Reading clustering object...\n")
cl = utilities.readClusteringObject(clusteringObject)
pathway = []
    # Strip out trailing slash if present
pathPrefix, epoch = os.path.split(epoch.rstrip("/"))
sys.stderr.write("Creating pathway...\n")
while epoch != "0":
        filename = glob.glob(os.path.join(pathPrefix, epoch, "*traj*_%d.pdb" % trajectory))
snapshots = utilities.getSnapshots(filename[0])
snapshots = snapshots[:snapshot+1]
pathway.insert(0, snapshots)
procMapping = open(os.path.join(pathPrefix, epoch, "processorMapping.txt")).read().rstrip().split(':')
epoch, trajectory, snapshot = map(int, procMapping[trajectory-1][1:-1].split(','))
epoch = str(epoch)
sys.stderr.write("Writing pathway...\n")
with open(outputPath+"pathway.pdb", "a") as f:
f.write("ENDMDL\n".join(itertools.chain.from_iterable(pathway)))
|
Add script to create more precise pathways from simulations
|
Add script to create more precise pathways from simulations
|
Python
|
mit
|
AdaptivePELE/AdaptivePELE,AdaptivePELE/AdaptivePELE,AdaptivePELE/AdaptivePELE,AdaptivePELE/AdaptivePELE
|
Add script to create more precise pathways from simulations
|
import os
import sys
import argparse
import glob
import itertools
from AdaptivePELE.utilities import utilities
def parseArguments():
desc = "Write the information related to the conformation network to file\n"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("clusteringObject", type=str, help="Path to the clustering object")
parser.add_argument("trajectory", type=int, help="Trajectory number")
parser.add_argument("snapshot", type=int, help="Snapshot to select (in accepted steps)")
parser.add_argument("epoch", type=str, help="Path to the epoch to search the snapshot")
parser.add_argument("-o", type=str, default=None, help="Output path where to write the files")
args = parser.parse_args()
return args.clusteringObject, args.trajectory, args.snapshot, args.epoch, args.o
if __name__ == "__main__":
clusteringObject, trajectory, snapshot, epoch, outputPath = parseArguments()
if outputPath is not None:
outputPath = os.path.join(outputPath, "")
if not os.path.exists(outputPath):
os.makedirs(outputPath)
else:
outputPath = ""
sys.stderr.write("Reading clustering object...\n")
cl = utilities.readClusteringObject(clusteringObject)
pathway = []
    # Strip out trailing slash if present
pathPrefix, epoch = os.path.split(epoch.rstrip("/"))
sys.stderr.write("Creating pathway...\n")
while epoch != "0":
        filename = glob.glob(os.path.join(pathPrefix, epoch, "*traj*_%d.pdb" % trajectory))
snapshots = utilities.getSnapshots(filename[0])
snapshots = snapshots[:snapshot+1]
pathway.insert(0, snapshots)
procMapping = open(os.path.join(pathPrefix, epoch, "processorMapping.txt")).read().rstrip().split(':')
epoch, trajectory, snapshot = map(int, procMapping[trajectory-1][1:-1].split(','))
epoch = str(epoch)
sys.stderr.write("Writing pathway...\n")
with open(outputPath+"pathway.pdb", "a") as f:
f.write("ENDMDL\n".join(itertools.chain.from_iterable(pathway)))
|
<commit_before><commit_msg>Add script to create more precise pathways from simulations<commit_after>
|
import os
import sys
import argparse
import glob
import itertools
from AdaptivePELE.utilities import utilities
def parseArguments():
desc = "Write the information related to the conformation network to file\n"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("clusteringObject", type=str, help="Path to the clustering object")
parser.add_argument("trajectory", type=int, help="Trajectory number")
parser.add_argument("snapshot", type=int, help="Snapshot to select (in accepted steps)")
parser.add_argument("epoch", type=str, help="Path to the epoch to search the snapshot")
parser.add_argument("-o", type=str, default=None, help="Output path where to write the files")
args = parser.parse_args()
return args.clusteringObject, args.trajectory, args.snapshot, args.epoch, args.o
if __name__ == "__main__":
clusteringObject, trajectory, snapshot, epoch, outputPath = parseArguments()
if outputPath is not None:
outputPath = os.path.join(outputPath, "")
if not os.path.exists(outputPath):
os.makedirs(outputPath)
else:
outputPath = ""
sys.stderr.write("Reading clustering object...\n")
cl = utilities.readClusteringObject(clusteringObject)
pathway = []
    # Strip out trailing slash if present
pathPrefix, epoch = os.path.split(epoch.rstrip("/"))
sys.stderr.write("Creating pathway...\n")
while epoch != "0":
        filename = glob.glob(os.path.join(pathPrefix, epoch, "*traj*_%d.pdb" % trajectory))
snapshots = utilities.getSnapshots(filename[0])
snapshots = snapshots[:snapshot+1]
pathway.insert(0, snapshots)
procMapping = open(os.path.join(pathPrefix, epoch, "processorMapping.txt")).read().rstrip().split(':')
epoch, trajectory, snapshot = map(int, procMapping[trajectory-1][1:-1].split(','))
epoch = str(epoch)
sys.stderr.write("Writing pathway...\n")
with open(outputPath+"pathway.pdb", "a") as f:
f.write("ENDMDL\n".join(itertools.chain.from_iterable(pathway)))
|
Add script to create more precise pathways from simulations
import os
import sys
import argparse
import glob
import itertools
from AdaptivePELE.utilities import utilities
def parseArguments():
desc = "Write the information related to the conformation network to file\n"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("clusteringObject", type=str, help="Path to the clustering object")
parser.add_argument("trajectory", type=int, help="Trajectory number")
parser.add_argument("snapshot", type=int, help="Snapshot to select (in accepted steps)")
parser.add_argument("epoch", type=str, help="Path to the epoch to search the snapshot")
parser.add_argument("-o", type=str, default=None, help="Output path where to write the files")
args = parser.parse_args()
return args.clusteringObject, args.trajectory, args.snapshot, args.epoch, args.o
if __name__ == "__main__":
clusteringObject, trajectory, snapshot, epoch, outputPath = parseArguments()
if outputPath is not None:
outputPath = os.path.join(outputPath, "")
if not os.path.exists(outputPath):
os.makedirs(outputPath)
else:
outputPath = ""
sys.stderr.write("Reading clustering object...\n")
cl = utilities.readClusteringObject(clusteringObject)
pathway = []
    # Strip out trailing slash if present
pathPrefix, epoch = os.path.split(epoch.rstrip("/"))
sys.stderr.write("Creating pathway...\n")
while epoch != "0":
        filename = glob.glob(os.path.join(pathPrefix, epoch, "*traj*_%d.pdb" % trajectory))
snapshots = utilities.getSnapshots(filename[0])
snapshots = snapshots[:snapshot+1]
pathway.insert(0, snapshots)
procMapping = open(os.path.join(pathPrefix, epoch, "processorMapping.txt")).read().rstrip().split(':')
epoch, trajectory, snapshot = map(int, procMapping[trajectory-1][1:-1].split(','))
epoch = str(epoch)
sys.stderr.write("Writing pathway...\n")
with open(outputPath+"pathway.pdb", "a") as f:
f.write("ENDMDL\n".join(itertools.chain.from_iterable(pathway)))
|
<commit_before><commit_msg>Add script to create more precise pathways from simulations<commit_after>import os
import sys
import argparse
import glob
import itertools
from AdaptivePELE.utilities import utilities
def parseArguments():
desc = "Write the information related to the conformation network to file\n"
parser = argparse.ArgumentParser(description=desc)
parser.add_argument("clusteringObject", type=str, help="Path to the clustering object")
parser.add_argument("trajectory", type=int, help="Trajectory number")
parser.add_argument("snapshot", type=int, help="Snapshot to select (in accepted steps)")
parser.add_argument("epoch", type=str, help="Path to the epoch to search the snapshot")
parser.add_argument("-o", type=str, default=None, help="Output path where to write the files")
args = parser.parse_args()
return args.clusteringObject, args.trajectory, args.snapshot, args.epoch, args.o
if __name__ == "__main__":
clusteringObject, trajectory, snapshot, epoch, outputPath = parseArguments()
if outputPath is not None:
outputPath = os.path.join(outputPath, "")
if not os.path.exists(outputPath):
os.makedirs(outputPath)
else:
outputPath = ""
sys.stderr.write("Reading clustering object...\n")
cl = utilities.readClusteringObject(clusteringObject)
pathway = []
    # Strip out trailing slash if present
pathPrefix, epoch = os.path.split(epoch.rstrip("/"))
sys.stderr.write("Creating pathway...\n")
while epoch != "0":
        filename = glob.glob(os.path.join(pathPrefix, epoch, "*traj*_%d.pdb" % trajectory))
snapshots = utilities.getSnapshots(filename[0])
snapshots = snapshots[:snapshot+1]
pathway.insert(0, snapshots)
procMapping = open(os.path.join(pathPrefix, epoch, "processorMapping.txt")).read().rstrip().split(':')
epoch, trajectory, snapshot = map(int, procMapping[trajectory-1][1:-1].split(','))
epoch = str(epoch)
sys.stderr.write("Writing pathway...\n")
with open(outputPath+"pathway.pdb", "a") as f:
f.write("ENDMDL\n".join(itertools.chain.from_iterable(pathway)))
|
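The backtracking loop above implies a format for processorMapping.txt that is inferred from the parsing code rather than documented: one "(epoch,trajectory,snapshot)" tuple per processor, separated by colons. A runnable illustration of that assumption:

# Hypothetical file contents; the format is inferred from the parsing above.
mapping_line = "(0,3,9):(1,5,17):(1,2,4)"
entries = mapping_line.rstrip().split(':')
# Entry for trajectory 2 (1-based), stripping the surrounding parentheses.
epoch, trajectory, snapshot = map(int, entries[2 - 1][1:-1].split(','))
print(epoch, trajectory, snapshot)  # -> 1 5 17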
|
07452d9411972f575beadd4bef1799ccdf95968a
|
src/nodeconductor_aws/migrations/0002_remove_awsservice_name.py
|
src/nodeconductor_aws/migrations/0002_remove_awsservice_name.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('nodeconductor_aws', '0002_auto_20170210_1345'),
]
operations = [
migrations.RemoveField(
model_name='awsservice',
name='name',
),
]
|
Remove name attribute from service
|
Remove name attribute from service [WAL-496]
Use settings name instead of service name.
|
Python
|
mit
|
opennode/nodeconductor-aws
|
Remove name attribute from service [WAL-496]
Use settings name instead of service name.
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('nodeconductor_aws', '0002_auto_20170210_1345'),
]
operations = [
migrations.RemoveField(
model_name='awsservice',
name='name',
),
]
|
<commit_before><commit_msg>Remove name attribute from service [WAL-496]
Use settings name instead of service name.<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('nodeconductor_aws', '0002_auto_20170210_1345'),
]
operations = [
migrations.RemoveField(
model_name='awsservice',
name='name',
),
]
|
Remove name attribute from service [WAL-496]
Use settings name instead of service name.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('nodeconductor_aws', '0002_auto_20170210_1345'),
]
operations = [
migrations.RemoveField(
model_name='awsservice',
name='name',
),
]
|
<commit_before><commit_msg>Remove name attribute from service [WAL-496]
Use settings name instead of service name.<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('nodeconductor_aws', '0002_auto_20170210_1345'),
]
operations = [
migrations.RemoveField(
model_name='awsservice',
name='name',
),
]
|
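The message implies callers now read the display name from the linked settings object instead of the dropped column. A minimal, self-contained sketch of that delegation (plain classes for illustration; only the AWSService name is taken from the migration):

class Settings:
    def __init__(self, name):
        self.name = name

class AWSService:
    def __init__(self, settings):
        self.settings = settings

    @property
    def name(self):
        # Read-only alias so display code keeps working after the column is gone.
        return self.settings.name

service = AWSService(Settings("Amazon EC2"))
print(service.name)  # -> Amazon EC2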
|
7e0346f25bc121b4f01d8e42150ade0ff11cc326
|
tests/test_cli_update.py
|
tests/test_cli_update.py
|
# -*- coding: utf-8 -*-
import pathlib
def test_should_write_json(cli_runner, tmp_rc, tmp_templates_file):
result = cli_runner([
'-c', tmp_rc, 'update'
])
assert result.exit_code == 0
templates = pathlib.Path(tmp_templates_file)
assert templates.exists()
|
Add a basic integration test for the update command
|
Add a basic integration test for the update command
|
Python
|
bsd-3-clause
|
hackebrot/cibopath
|
Add a basic integration test for the update command
|
# -*- coding: utf-8 -*-
import pathlib
def test_should_write_json(cli_runner, tmp_rc, tmp_templates_file):
result = cli_runner([
'-c', tmp_rc, 'update'
])
assert result.exit_code == 0
templates = pathlib.Path(tmp_templates_file)
assert templates.exists()
|
<commit_before><commit_msg>Add a basic integration test for the update command<commit_after>
|
# -*- coding: utf-8 -*-
import pathlib
def test_should_write_json(cli_runner, tmp_rc, tmp_templates_file):
result = cli_runner([
'-c', tmp_rc, 'update'
])
assert result.exit_code == 0
templates = pathlib.Path(tmp_templates_file)
assert templates.exists()
|
Add a basic integration test for the update command
# -*- coding: utf-8 -*-
import pathlib
def test_should_write_json(cli_runner, tmp_rc, tmp_templates_file):
result = cli_runner([
'-c', tmp_rc, 'update'
])
assert result.exit_code == 0
templates = pathlib.Path(tmp_templates_file)
assert templates.exists()
|
<commit_before><commit_msg>Add a basic integration test for the update command<commit_after># -*- coding: utf-8 -*-
import pathlib
def test_should_write_json(cli_runner, tmp_rc, tmp_templates_file):
result = cli_runner([
'-c', tmp_rc, 'update'
])
assert result.exit_code == 0
templates = pathlib.Path(tmp_templates_file)
assert templates.exists()
|
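The test leans on three fixtures it does not define. A hypothetical conftest.py sketch with the same signatures (the cibopath entry point and file names are assumptions, not taken from the repository):

import pytest
from click.testing import CliRunner
from cibopath import cli  # assumed entry module

@pytest.fixture
def cli_runner():
    runner = CliRunner()
    def run(args):
        return runner.invoke(cli.main, args)
    return run

@pytest.fixture
def tmp_templates_file(tmpdir):
    return str(tmpdir.join('templates.json'))

@pytest.fixture
def tmp_rc(tmpdir, tmp_templates_file):
    rc = tmpdir.join('cibopathrc')
    rc.write('[cibopath]\ntemplates = {}\n'.format(tmp_templates_file))
    return str(rc)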
|
7bbbc0823a6b4c966f5311f3568241326995eb73
|
test/statements/import1.py
|
test/statements/import1.py
|
from ...foo import bar as spam, baz
import time as ham, datetime
from : keyword.control.flow.python, source.python
.. : source.python
. : source.python
foo : source.python
: source.python
import : keyword.control.flow.python, source.python
bar : source.python
as : keyword.control.flow.python, source.python
spam, baz : source.python
import : keyword.control.flow.python, source.python
time : source.python
as : keyword.control.flow.python, source.python
ham, datetime : source.python
|
Add tests for import statements
|
Add tests for import statements
|
Python
|
mit
|
MagicStack/MagicPython,MagicStack/MagicPython,MagicStack/MagicPython
|
Add tests for import statements
|
from ...foo import bar as spam, baz
import time as ham, datetime
from : keyword.control.flow.python, source.python
.. : source.python
. : source.python
foo : source.python
: source.python
import : keyword.control.flow.python, source.python
bar : source.python
as : keyword.control.flow.python, source.python
spam, baz : source.python
import : keyword.control.flow.python, source.python
time : source.python
as : keyword.control.flow.python, source.python
ham, datetime : source.python
|
<commit_before><commit_msg>Add tests for import statements<commit_after>
|
from ...foo import bar as spam, baz
import time as ham, datetime
from : keyword.control.flow.python, source.python
.. : source.python
. : source.python
foo : source.python
: source.python
import : keyword.control.flow.python, source.python
bar : source.python
as : keyword.control.flow.python, source.python
spam, baz : source.python
import : keyword.control.flow.python, source.python
time : source.python
as : keyword.control.flow.python, source.python
ham, datetime : source.python
|
Add tests for import statements
from ...foo import bar as spam, baz
import time as ham, datetime
from : keyword.control.flow.python, source.python
.. : source.python
. : source.python
foo : source.python
: source.python
import : keyword.control.flow.python, source.python
bar : source.python
as : keyword.control.flow.python, source.python
spam, baz : source.python
import : keyword.control.flow.python, source.python
time : source.python
as : keyword.control.flow.python, source.python
ham, datetime : source.python
|
<commit_before><commit_msg>Add tests for import statements<commit_after>from ...foo import bar as spam, baz
import time as ham, datetime
from : keyword.control.flow.python, source.python
.. : source.python
. : source.python
foo : source.python
: source.python
import : keyword.control.flow.python, source.python
bar : source.python
as : keyword.control.flow.python, source.python
spam, baz : source.python
import : keyword.control.flow.python, source.python
time : source.python
as : keyword.control.flow.python, source.python
ham, datetime : source.python
|
|
7a6b3cd831fc3e04b714d54793dae54df59115e1
|
nototools/decompose_ttc.py
|
nototools/decompose_ttc.py
|
#!/usr/bin/python
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decompose a TTC file to its pieces."""
__author__ = 'roozbeh@google.com (Roozbeh Pournader)'
import sys
from fontTools import ttLib
from fontTools.ttLib import sfnt
def main(argv):
"""Decompose all fonts provided in the command line."""
for font_file_name in argv[1:]:
with open(font_file_name, 'rb') as font_file:
font = sfnt.SFNTReader(font_file, fontNumber=0)
num_fonts = font.numFonts
for font_number in range(num_fonts):
font = ttLib.TTFont(font_file_name, fontNumber=font_number)
font.save('%s-part%d' % (font_file_name, font_number))
if __name__ == '__main__':
main(sys.argv)
|
Add a tool to decompose TTC files.
|
Add a tool to decompose TTC files.
|
Python
|
apache-2.0
|
pathumego/nototools,anthrotype/nototools,dougfelt/nototools,googlefonts/nototools,moyogo/nototools,pahans/nototools,googlei18n/nototools,googlei18n/nototools,davelab6/nototools,davelab6/nototools,pahans/nototools,namemealrady/nototools,googlefonts/nototools,namemealrady/nototools,anthrotype/nototools,googlefonts/nototools,anthrotype/nototools,davelab6/nototools,pathumego/nototools,moyogo/nototools,pahans/nototools,moyogo/nototools,dougfelt/nototools,namemealrady/nototools,googlefonts/nototools,pathumego/nototools,googlei18n/nototools,googlefonts/nototools,dougfelt/nototools
|
Add a tool to decompose TTC files.
|
#!/usr/bin/python
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decompose a TTC file to its pieces."""
__author__ = 'roozbeh@google.com (Roozbeh Pournader)'
import sys
from fontTools import ttLib
from fontTools.ttLib import sfnt
def main(argv):
"""Decompose all fonts provided in the command line."""
for font_file_name in argv[1:]:
with open(font_file_name, 'rb') as font_file:
font = sfnt.SFNTReader(font_file, fontNumber=0)
num_fonts = font.numFonts
for font_number in range(num_fonts):
font = ttLib.TTFont(font_file_name, fontNumber=font_number)
font.save('%s-part%d' % (font_file_name, font_number))
if __name__ == '__main__':
main(sys.argv)
|
<commit_before><commit_msg>Add a tool to decompose TTC files.<commit_after>
|
#!/usr/bin/python
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decompose a TTC file to its pieces."""
__author__ = 'roozbeh@google.com (Roozbeh Pournader)'
import sys
from fontTools import ttLib
from fontTools.ttLib import sfnt
def main(argv):
"""Decompose all fonts provided in the command line."""
for font_file_name in argv[1:]:
with open(font_file_name, 'rb') as font_file:
font = sfnt.SFNTReader(font_file, fontNumber=0)
num_fonts = font.numFonts
for font_number in range(num_fonts):
font = ttLib.TTFont(font_file_name, fontNumber=font_number)
font.save('%s-part%d' % (font_file_name, font_number))
if __name__ == '__main__':
main(sys.argv)
|
Add a tool to decompose TTC files.
#!/usr/bin/python
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decompose a TTC file to its pieces."""
__author__ = 'roozbeh@google.com (Roozbeh Pournader)'
import sys
from fontTools import ttLib
from fontTools.ttLib import sfnt
def main(argv):
"""Decompose all fonts provided in the command line."""
for font_file_name in argv[1:]:
with open(font_file_name, 'rb') as font_file:
font = sfnt.SFNTReader(font_file, fontNumber=0)
num_fonts = font.numFonts
for font_number in range(num_fonts):
font = ttLib.TTFont(font_file_name, fontNumber=font_number)
font.save('%s-part%d' % (font_file_name, font_number))
if __name__ == '__main__':
main(sys.argv)
|
<commit_before><commit_msg>Add a tool to decompose TTC files.<commit_after>#!/usr/bin/python
#
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decompose a TTC file to its pieces."""
__author__ = 'roozbeh@google.com (Roozbeh Pournader)'
import sys
from fontTools import ttLib
from fontTools.ttLib import sfnt
def main(argv):
"""Decompose all fonts provided in the command line."""
for font_file_name in argv[1:]:
with open(font_file_name, 'rb') as font_file:
font = sfnt.SFNTReader(font_file, fontNumber=0)
num_fonts = font.numFonts
for font_number in range(num_fonts):
font = ttLib.TTFont(font_file_name, fontNumber=font_number)
font.save('%s-part%d' % (font_file_name, font_number))
if __name__ == '__main__':
main(sys.argv)
|
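A usage sketch (the file name is illustrative):

# Example invocation:
#
#     python decompose_ttc.py NotoSansCJK.ttc
#
# which, per font.save() above, writes NotoSansCJK.ttc-part0,
# NotoSansCJK.ttc-part1, ... one single-font file per collection member.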
|
5ffb60b79505cc3a96fba7403cede78cca36046b
|
numba2/tests/test_debug.py
|
numba2/tests/test_debug.py
|
# -*- coding: utf-8 -*-
"""
Test that the debug flag is disabled, especially for releases.
"""
from __future__ import print_function, division, absolute_import
import unittest
from numba2.config import config
class TestDebugFlag(unittest.TestCase):
def test_debug_flag(self):
self.assertFalse(config.debug)
|
Test that debug flag is off
|
Test that debug flag is off
|
Python
|
bsd-2-clause
|
flypy/flypy,flypy/flypy
|
Test that debug flag is off
|
# -*- coding: utf-8 -*-
"""
Test that the debug flag is disabled, especially for releases.
"""
from __future__ import print_function, division, absolute_import
import unittest
from numba2.config import config
class TestDebugFlag(unittest.TestCase):
def test_debug_flag(self):
self.assertFalse(config.debug)
|
<commit_before><commit_msg>Test that debug flag is off<commit_after>
|
# -*- coding: utf-8 -*-
"""
Test that the debug flag is disabled, especially for releases.
"""
from __future__ import print_function, division, absolute_import
import unittest
from numba2.config import config
class TestDebugFlag(unittest.TestCase):
def test_debug_flag(self):
self.assertFalse(config.debug)
|
Test that debug flag is off
# -*- coding: utf-8 -*-
"""
Test that the debug flag is disabled, especially for releases.
"""
from __future__ import print_function, division, absolute_import
import unittest
from numba2.config import config
class TestDebugFlag(unittest.TestCase):
def test_debug_flag(self):
self.assertFalse(config.debug)
|
<commit_before><commit_msg>Test that debug flag is off<commit_after># -*- coding: utf-8 -*-
"""
Test that the debug flag is disabled, especially for releases.
"""
from __future__ import print_function, division, absolute_import
import unittest
from numba2.config import config
class TestDebugFlag(unittest.TestCase):
def test_debug_flag(self):
self.assertFalse(config.debug)
|
|
1af722eb494d1173bfcc9aec1cdb8aab93cf6fc8
|
tests/test_set_config.py
|
tests/test_set_config.py
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import json
import mock
import os.path
import sys
from oslotest import base
# nasty: to import set_config (not a part of the kolla package)
this_dir = os.path.dirname(sys.modules[__name__].__file__)
set_configs_file = os.path.abspath(
os.path.join(this_dir, '..',
'docker', 'base', 'set_configs.py'))
set_configs = imp.load_source('set_configs', set_configs_file)
class LoadFromFile(base.BaseTestCase):
def test_load_ok(self):
in_config = json.dumps({'command': '/bin/true',
'config_files': {}})
mo = mock.mock_open(read_data=in_config)
with mock.patch.object(set_configs, 'open', mo):
set_configs.load_config()
self.assertEqual([
mock.call('/var/lib/kolla/config_files/config.json'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call('/run_command', 'w+'),
mock.call().__enter__(),
mock.call().write(u'/bin/true'),
mock.call().__exit__(None, None, None)], mo.mock_calls)
|
Add a test case for load_config
|
Add a test case for load_config
This is just a basic test to make sure loading from file works.
Change-Id: I074f36023ac4198c436fcee1668d32f9d1f0e61b
|
Python
|
apache-2.0
|
coolsvap/kolla,nihilifer/kolla,tonyli71/kolla,stackforge/kolla,limamauricio/mykolla,rahulunair/kolla,dardelean/kolla-ansible,negronjl/kolla,stackforge/kolla,intel-onp/kolla,dardelean/kolla-ansible,dardelean/kolla-ansible,intel-onp/kolla,mandre/kolla,limamauricio/mykolla,tonyli71/kolla,negronjl/kolla,negronjl/kolla,GalenMa/kolla,rahulunair/kolla,stackforge/kolla,coolsvap/kolla,openstack/kolla,mandre/kolla,GalenMa/kolla,mrangana/kolla,toby82/kolla,limamauricio/mykolla,toby82/kolla,nihilifer/kolla,coolsvap/kolla,mrangana/kolla,openstack/kolla,toby82/kolla,mandre/kolla
|
Add a test case for load_config
This is just a basic test to make sure loading from file works.
Change-Id: I074f36023ac4198c436fcee1668d32f9d1f0e61b
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import json
import mock
import os.path
import sys
from oslotest import base
# nasty: to import set_config (not a part of the kolla package)
this_dir = os.path.dirname(sys.modules[__name__].__file__)
set_configs_file = os.path.abspath(
os.path.join(this_dir, '..',
'docker', 'base', 'set_configs.py'))
set_configs = imp.load_source('set_configs', set_configs_file)
class LoadFromFile(base.BaseTestCase):
def test_load_ok(self):
in_config = json.dumps({'command': '/bin/true',
'config_files': {}})
mo = mock.mock_open(read_data=in_config)
with mock.patch.object(set_configs, 'open', mo):
set_configs.load_config()
self.assertEqual([
mock.call('/var/lib/kolla/config_files/config.json'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call('/run_command', 'w+'),
mock.call().__enter__(),
mock.call().write(u'/bin/true'),
mock.call().__exit__(None, None, None)], mo.mock_calls)
|
<commit_before><commit_msg>Add a test case for load_config
This is just a basic test to make sure loading from file works.
Change-Id: I074f36023ac4198c436fcee1668d32f9d1f0e61b<commit_after>
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import json
import mock
import os.path
import sys
from oslotest import base
# nasty: to import set_config (not a part of the kolla package)
this_dir = os.path.dirname(sys.modules[__name__].__file__)
set_configs_file = os.path.abspath(
os.path.join(this_dir, '..',
'docker', 'base', 'set_configs.py'))
set_configs = imp.load_source('set_configs', set_configs_file)
class LoadFromFile(base.BaseTestCase):
def test_load_ok(self):
in_config = json.dumps({'command': '/bin/true',
'config_files': {}})
mo = mock.mock_open(read_data=in_config)
with mock.patch.object(set_configs, 'open', mo):
set_configs.load_config()
self.assertEqual([
mock.call('/var/lib/kolla/config_files/config.json'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call('/run_command', 'w+'),
mock.call().__enter__(),
mock.call().write(u'/bin/true'),
mock.call().__exit__(None, None, None)], mo.mock_calls)
|
Add a test case for load_config
This is just a basic test to make sure loading from file works.
Change-Id: I074f36023ac4198c436fcee1668d32f9d1f0e61b
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import json
import mock
import os.path
import sys
from oslotest import base
# nasty: to import set_config (not a part of the kolla package)
this_dir = os.path.dirname(sys.modules[__name__].__file__)
set_configs_file = os.path.abspath(
os.path.join(this_dir, '..',
'docker', 'base', 'set_configs.py'))
set_configs = imp.load_source('set_configs', set_configs_file)
class LoadFromFile(base.BaseTestCase):
def test_load_ok(self):
in_config = json.dumps({'command': '/bin/true',
'config_files': {}})
mo = mock.mock_open(read_data=in_config)
with mock.patch.object(set_configs, 'open', mo):
set_configs.load_config()
self.assertEqual([
mock.call('/var/lib/kolla/config_files/config.json'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call('/run_command', 'w+'),
mock.call().__enter__(),
mock.call().write(u'/bin/true'),
mock.call().__exit__(None, None, None)], mo.mock_calls)
|
<commit_before><commit_msg>Add a test case for load_config
This is just a basic test to make sure loading from file works.
Change-Id: I074f36023ac4198c436fcee1668d32f9d1f0e61b<commit_after># Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import imp
import json
import mock
import os.path
import sys
from oslotest import base
# nasty: to import set_config (not a part of the kolla package)
this_dir = os.path.dirname(sys.modules[__name__].__file__)
set_configs_file = os.path.abspath(
os.path.join(this_dir, '..',
'docker', 'base', 'set_configs.py'))
set_configs = imp.load_source('set_configs', set_configs_file)
class LoadFromFile(base.BaseTestCase):
def test_load_ok(self):
in_config = json.dumps({'command': '/bin/true',
'config_files': {}})
mo = mock.mock_open(read_data=in_config)
with mock.patch.object(set_configs, 'open', mo):
set_configs.load_config()
self.assertEqual([
mock.call('/var/lib/kolla/config_files/config.json'),
mock.call().__enter__(),
mock.call().read(),
mock.call().__exit__(None, None, None),
mock.call('/run_command', 'w+'),
mock.call().__enter__(),
mock.call().write(u'/bin/true'),
mock.call().__exit__(None, None, None)], mo.mock_calls)
|
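A side note on the import trick: imp is deprecated in Python 3, and the equivalent with importlib (a sketch, not part of the original change) is:

import importlib.util

set_configs_file = 'docker/base/set_configs.py'  # path as computed in the test
spec = importlib.util.spec_from_file_location('set_configs', set_configs_file)
set_configs = importlib.util.module_from_spec(spec)
spec.loader.exec_module(set_configs)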
|
2915572be70e98fdd76870b402e85751c952f749
|
meetings/api/migrations/0001_create_account_for_osf.py
|
meetings/api/migrations/0001_create_account_for_osf.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-12 16:31
from __future__ import unicode_literals
from django.db import migrations
from django.contrib.auth.management import create_permissions
class Migration(migrations.Migration):
def create_user_for_osf(apps, schema_editor):
User = apps.get_model("auth", "User")
Permission = apps.get_model("auth", "Permission")
apps.models_module = True
create_permissions(apps, verbosity=0)
apps.models_module = None
osfPassword = 'a very secret password' # make api call to OSF to get password?
osf = User.objects.create(username='osf', password=osfPassword)
set_submission_contributor_permission = Permission.objects.get(codename='can_set_contributor')
osf.user_permissions.add(set_submission_contributor_permission)
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('guardian', '0001_initial'),
('auth', '0001_initial'),
('conferences', '__first__'),
('submissions', '__first__'),
]
operations = [
migrations.RunPython(create_user_for_osf),
]
|
Add migration for creating special osf user
|
Add migration for creating special osf user
|
Python
|
apache-2.0
|
jnayak1/osf-meetings,jnayak1/osf-meetings,leodomingo/osf-meetings,jnayak1/osf-meetings,leodomingo/osf-meetings,jnayak1/osf-meetings,leodomingo/osf-meetings,leodomingo/osf-meetings
|
Add migration for creating special osf user
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-12 16:31
from __future__ import unicode_literals
from django.db import migrations
from django.contrib.auth.management import create_permissions
class Migration(migrations.Migration):
def create_user_for_osf(apps, schema_editor):
User = apps.get_model("auth", "User")
Permission = apps.get_model("auth", "Permission")
apps.models_module = True
create_permissions(apps, verbosity=0)
apps.models_module = None
osfPassword = 'a very secret password' # make api call to OSF to get password?
osf = User.objects.create(username='osf', password=osfPassword)
set_submission_contributor_permission = Permission.objects.get(codename='can_set_contributor')
osf.user_permissions.add(set_submission_contributor_permission)
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('guardian', '0001_initial'),
('auth', '0001_initial'),
('conferences', '__first__'),
('submissions', '__first__'),
]
operations = [
migrations.RunPython(create_user_for_osf),
]
|
<commit_before><commit_msg>Add migration for creating special osf user<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-12 16:31
from __future__ import unicode_literals
from django.db import migrations
from django.contrib.auth.management import create_permissions
class Migration(migrations.Migration):
def create_user_for_osf(apps, schema_editor):
User = apps.get_model("auth", "User")
Permission = apps.get_model("auth", "Permission")
apps.models_module = True
create_permissions(apps, verbosity=0)
apps.models_module = None
osfPassword = 'a very secret password' # make api call to OSF to get password?
osf = User.objects.create(username='osf', password=osfPassword)
set_submission_contributor_permission = Permission.objects.get(codename='can_set_contributor')
osf.user_permissions.add(set_submission_contributor_permission)
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('guardian', '0001_initial'),
('auth', '0001_initial'),
('conferences', '__first__'),
('submissions', '__first__'),
]
operations = [
migrations.RunPython(create_user_for_osf),
]
|
Add migration for creating special osf user
# -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-12 16:31
from __future__ import unicode_literals
from django.db import migrations
from django.contrib.auth.management import create_permissions
class Migration(migrations.Migration):
def create_user_for_osf(apps, schema_editor):
User = apps.get_model("auth", "User")
Permission = apps.get_model("auth", "Permission")
apps.models_module = True
create_permissions(apps, verbosity=0)
apps.models_module = None
osfPassword = 'a very secret password' # make api call to OSF to get password?
osf = User.objects.create(username='osf', password=osfPassword)
set_submission_contributor_permission = Permission.objects.get(codename='can_set_contributor')
osf.user_permissions.add(set_submission_contributor_permission)
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('guardian', '0001_initial'),
('auth', '0001_initial'),
('conferences', '__first__'),
('submissions', '__first__'),
]
operations = [
migrations.RunPython(create_user_for_osf),
]
|
<commit_before><commit_msg>Add migration for creating special osf user<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.9.7 on 2016-07-12 16:31
from __future__ import unicode_literals
from django.db import migrations
from django.contrib.auth.management import create_permissions
class Migration(migrations.Migration):
def create_user_for_osf(apps, schema_editor):
User = apps.get_model("auth", "User")
Permission = apps.get_model("auth", "Permission")
apps.models_module = True
create_permissions(apps, verbosity=0)
apps.models_module = None
osfPassword = 'a very secret password' # make api call to OSF to get password?
osf = User.objects.create(username='osf', password=osfPassword)
set_submission_contributor_permission = Permission.objects.get(codename='can_set_contributor')
osf.user_permissions.add(set_submission_contributor_permission)
dependencies = [
('contenttypes', '0002_remove_content_type_name'),
('guardian', '0001_initial'),
('auth', '0001_initial'),
('conferences', '__first__'),
('submissions', '__first__'),
]
operations = [
migrations.RunPython(create_user_for_osf),
]
|
|
09d96508cfcf0e0394c581d7169cdf9abceb4ddf
|
wrench/benchmark_server.py
|
wrench/benchmark_server.py
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
import subprocess
import time
import urllib2
FILE = 'perf.json'
URL = 'https://wrperf.org/submit'
while True:
try:
# Remove any previous results
try:
os.remove(FILE)
except:
pass
# Pull latest code
subprocess.call(["git", "pull"])
# Get the git revision of this build
revision = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()
# Build
subprocess.call(["cargo", "build", "--release"])
# Run benchmarks
env = os.environ.copy()
# Ensure that vsync is disabled, to get meaningful 'composite' times.
env['vblank_mode'] = '0'
subprocess.call(["cargo", "run", "--release", "--", "perf", FILE], env=env)
# Read the results
with open(FILE) as file:
results = json.load(file)
# Post the results to server
payload = {
'key': env['WEBRENDER_PERF_KEY'],
'revision': revision,
'timestamp': str(time.time()),
'tests': results['tests'],
}
req = urllib2.Request(URL,
headers={"Content-Type": "application/json"},
data=json.dumps(payload))
f = urllib2.urlopen(req)
except Exception as e:
print(e)
# Delay a bit until next benchmark
time.sleep(60 * 60)
|
Add script to run benchmarks and submit to web service.
|
Add script to run benchmarks and submit to web service.
|
Python
|
mpl-2.0
|
servo/webrender,servo/webrender,servo/webrender,servo/webrender,servo/webrender,servo/webrender
|
Add script to run benchmarks and submit to web service.
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
import subprocess
import time
import urllib2
FILE = 'perf.json'
URL = 'https://wrperf.org/submit'
while True:
try:
# Remove any previous results
try:
os.remove(FILE)
except:
pass
# Pull latest code
subprocess.call(["git", "pull"])
# Get the git revision of this build
revision = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()
# Build
subprocess.call(["cargo", "build", "--release"])
# Run benchmarks
env = os.environ.copy()
# Ensure that vsync is disabled, to get meaningful 'composite' times.
env['vblank_mode'] = '0'
subprocess.call(["cargo", "run", "--release", "--", "perf", FILE], env=env)
# Read the results
with open(FILE) as file:
results = json.load(file)
# Post the results to server
payload = {
'key': env['WEBRENDER_PERF_KEY'],
'revision': revision,
'timestamp': str(time.time()),
'tests': results['tests'],
}
req = urllib2.Request(URL,
headers={"Content-Type": "application/json"},
data=json.dumps(payload))
f = urllib2.urlopen(req)
except Exception as e:
print(e)
# Delay a bit until next benchmark
time.sleep(60 * 60)
|
<commit_before><commit_msg>Add script to run benchmarks and submit to web service.<commit_after>
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
import subprocess
import time
import urllib2
FILE = 'perf.json'
URL = 'https://wrperf.org/submit'
while True:
try:
# Remove any previous results
try:
os.remove(FILE)
except:
pass
# Pull latest code
subprocess.call(["git", "pull"])
# Get the git revision of this build
revision = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()
# Build
subprocess.call(["cargo", "build", "--release"])
# Run benchmarks
env = os.environ.copy()
# Ensure that vsync is disabled, to get meaningful 'composite' times.
env['vblank_mode'] = '0'
subprocess.call(["cargo", "run", "--release", "--", "perf", FILE], env=env)
# Read the results
with open(FILE) as file:
results = json.load(file)
# Post the results to server
payload = {
'key': env['WEBRENDER_PERF_KEY'],
'revision': revision,
'timestamp': str(time.time()),
'tests': results['tests'],
}
req = urllib2.Request(URL,
headers={"Content-Type": "application/json"},
data=json.dumps(payload))
f = urllib2.urlopen(req)
except Exception as e:
print(e)
# Delay a bit until next benchmark
time.sleep(60 * 60)
|
Add script to run benchmarks and submit to web service.
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
import subprocess
import time
import urllib2
FILE = 'perf.json'
URL = 'https://wrperf.org/submit'
while True:
try:
# Remove any previous results
try:
os.remove(FILE)
except:
pass
# Pull latest code
subprocess.call(["git", "pull"])
# Get the git revision of this build
revision = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()
# Build
subprocess.call(["cargo", "build", "--release"])
# Run benchmarks
env = os.environ.copy()
# Ensure that vsync is disabled, to get meaningful 'composite' times.
env['vblank_mode'] = '0'
subprocess.call(["cargo", "run", "--release", "--", "perf", FILE], env=env)
# Read the results
with open(FILE) as file:
results = json.load(file)
# Post the results to server
payload = {
'key': env['WEBRENDER_PERF_KEY'],
'revision': revision,
'timestamp': str(time.time()),
'tests': results['tests'],
}
req = urllib2.Request(URL,
headers={"Content-Type": "application/json"},
data=json.dumps(payload))
f = urllib2.urlopen(req)
except Exception as e:
print(e)
# Delay a bit until next benchmark
time.sleep(60 * 60)
|
<commit_before><commit_msg>Add script to run benchmarks and submit to web service.<commit_after># This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import json
import os
import subprocess
import time
import urllib2
FILE = 'perf.json'
URL = 'https://wrperf.org/submit'
while True:
try:
# Remove any previous results
try:
os.remove(FILE)
except:
pass
# Pull latest code
subprocess.call(["git", "pull"])
# Get the git revision of this build
revision = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip()
# Build
subprocess.call(["cargo", "build", "--release"])
# Run benchmarks
env = os.environ.copy()
# Ensure that vsync is disabled, to get meaningful 'composite' times.
env['vblank_mode'] = '0'
subprocess.call(["cargo", "run", "--release", "--", "perf", FILE], env=env)
# Read the results
with open(FILE) as file:
results = json.load(file)
# Post the results to server
payload = {
'key': env['WEBRENDER_PERF_KEY'],
'revision': revision,
'timestamp': str(time.time()),
'tests': results['tests'],
}
req = urllib2.Request(URL,
headers={"Content-Type": "application/json"},
data=json.dumps(payload))
f = urllib2.urlopen(req)
except Exception as e:
print(e)
# Delay a bit until next benchmark
time.sleep(60 * 60)
|
|
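The submission step above is written for Python 2, where urllib2 still exists. As a rough Python 3 sketch of the same POST, reusing the URL from the script and assuming a payload dict shaped like the one it builds:

import json
import urllib.request

req = urllib.request.Request(
    'https://wrperf.org/submit',
    headers={'Content-Type': 'application/json'},
    data=json.dumps(payload).encode('utf-8'))  # Python 3 requires bytes here
urllib.request.urlopen(req)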
b025383bb5d77dd8ca9af8fd77bc2a362e64ba51
|
scripts/fix_names.py
|
scripts/fix_names.py
|
#!/usr/bin/env python3
""" Fix names in auth_users.
Usage: ./fix_names data/site/epcon.db
Uses nameparser package to do the hard work of splitting names into
first and last name.
Written by Marc-Andre Lemburg, 2019.
"""
import sqlite3
import sys
import nameparser
###
def connect(file):
db = sqlite3.connect(file)
db.row_factory = sqlite3.Row
return db
def get_users(db):
c = db.cursor()
c.execute('select * from auth_user')
return c.fetchall()
def fix_names(users):
""" Fix names in user records.
Yields records (first_name, last_name, id).
"""
for user in users:
id = user['id']
first_name = user['first_name'].strip()
last_name = user['last_name'].strip()
if not first_name and not last_name:
# Empty name: skip
print (f'Skipping empty name in record {id}')
continue
elif first_name == last_name:
full_name = first_name
elif first_name.endswith(last_name):
full_name = first_name
elif not last_name:
full_name = first_name
elif not first_name:
full_name = last_name
else:
# In this case, the user has most likely entered the name
# correctly split, so skip
full_name = first_name + last_name
print (f'Skipping already split name: {first_name} / {last_name} ({id})')
continue
print (f'Working on "{full_name}" ({id})')
# Handle email addresses
if '@' in full_name:
print (f' - fixing email address')
# Remove domain part
e_name = full_name[:full_name.find('@')]
if '+' in e_name:
# Remove alias
e_name = e_name[:e_name.find('+')]
# Try to split name parts
e_name = e_name.replace('.', ' ')
e_name = e_name.replace('_', ' ')
e_name = e_name.strip()
if len(e_name) < 4:
# Probably just initials: leave email as is
pass
else:
full_name = e_name
# Parse name
name = nameparser.HumanName(full_name)
name.capitalize()
first_name = name.first
last_name = name.last
print (f' - splitting name into: {first_name} / {last_name} ({id})')
yield (first_name, last_name, id)
def update_users(db, user_data):
c = db.cursor()
c.executemany('update auth_user set first_name=?, last_name=? where id=?',
user_data)
db.commit()
###
if __name__ == '__main__':
db = connect(sys.argv[1])
users = get_users(db)
user_data = fix_names(users)
update_users(db, user_data)
|
Add helper script to fix names in auth_user.
|
Add helper script to fix names in auth_user.
|
Python
|
bsd-2-clause
|
EuroPython/epcon,EuroPython/epcon,EuroPython/epcon,EuroPython/epcon
|
Add helper script to fix names in auth_user.
|
#!/usr/bin/env python3
""" Fix names in auth_users.
Usage: ./fix_names data/site/epcon.db
Uses nameparser package to do the hard work of splitting names into
first and last name.
Written by Marc-Andre Lemburg, 2019.
"""
import sqlite3
import sys
import nameparser
###
def connect(file):
db = sqlite3.connect(file)
db.row_factory = sqlite3.Row
return db
def get_users(db):
c = db.cursor()
c.execute('select * from auth_user')
return c.fetchall()
def fix_names(users):
""" Fix names in user records.
Yields records (first_name, last_name, id).
"""
for user in users:
id = user['id']
first_name = user['first_name'].strip()
last_name = user['last_name'].strip()
if not first_name and not last_name:
# Empty name: skip
print (f'Skipping empty name in record {id}')
continue
elif first_name == last_name:
full_name = first_name
elif first_name.endswith(last_name):
full_name = first_name
elif not last_name:
full_name = first_name
elif not first_name:
full_name = last_name
else:
# In this case, the user has most likely entered the name
# correctly split, so skip
full_name = first_name + last_name
print (f'Skipping already split name: {first_name} / {last_name} ({id})')
continue
print (f'Working on "{full_name}" ({id})')
# Handle email addresses
if '@' in full_name:
print (f' - fixing email address')
# Remove domain part
e_name = full_name[:full_name.find('@')]
if '+' in e_name:
# Remove alias
e_name = e_name[:e_name.find('+')]
# Try to split name parts
e_name = e_name.replace('.', ' ')
e_name = e_name.replace('_', ' ')
e_name = e_name.strip()
if len(e_name) < 4:
# Probably just initials: leave email as is
pass
else:
full_name = e_name
# Parse name
name = nameparser.HumanName(full_name)
name.capitalize()
first_name = name.first
last_name = name.last
print (f' - splitting name into: {first_name} / {last_name} ({id})')
yield (first_name, last_name, id)
def update_users(db, user_data):
c = db.cursor()
c.executemany('update auth_user set first_name=?, last_name=? where id=?',
user_data)
db.commit()
###
if __name__ == '__main__':
db = connect(sys.argv[1])
users = get_users(db)
user_data = fix_names(users)
update_users(db, user_data)
|
<commit_before><commit_msg>Add helper script to fix names in auth_user.<commit_after>
|
#!/usr/bin/env python3
""" Fix names in auth_users.
Usage: ./fix_names data/site/epcon.db
Uses nameparser package to do the hard work of splitting names into
first and last name.
Written by Marc-Andre Lemburg, 2019.
"""
import sqlite3
import sys
import nameparser
###
def connect(file):
db = sqlite3.connect(file)
db.row_factory = sqlite3.Row
return db
def get_users(db):
c = db.cursor()
c.execute('select * from auth_user')
return c.fetchall()
def fix_names(users):
""" Fix names in user records.
Yields records (first_name, last_name, id).
"""
for user in users:
id = user['id']
first_name = user['first_name'].strip()
last_name = user['last_name'].strip()
if not first_name and not last_name:
# Empty name: skip
print (f'Skipping empty name in record {id}')
continue
elif first_name == last_name:
full_name = first_name
elif first_name.endswith(last_name):
full_name = first_name
elif not last_name:
full_name = first_name
elif not first_name:
full_name = last_name
else:
# In this case, the user has most likely entered the name
# correctly split, so skip
full_name = first_name + last_name
print (f'Skipping already split name: {first_name} / {last_name} ({id})')
continue
print (f'Working on "{full_name}" ({id})')
# Handle email addresses
if '@' in full_name:
print (f' - fixing email address')
# Remove domain part
e_name = full_name[:full_name.find('@')]
if '+' in e_name:
# Remove alias
e_name = e_name[:e_name.find('+')]
# Try to split name parts
e_name = e_name.replace('.', ' ')
e_name = e_name.replace('_', ' ')
e_name = e_name.strip()
if len(e_name) < 4:
# Probably just initials: leave email as is
pass
else:
full_name = e_name
# Parse name
name = nameparser.HumanName(full_name)
name.capitalize()
first_name = name.first
last_name = name.last
print (f' - splitting name into: {first_name} / {last_name} ({id})')
yield (first_name, last_name, id)
def update_users(db, user_data):
c = db.cursor()
c.executemany('update auth_user set first_name=?, last_name=? where id=?',
user_data)
db.commit()
###
if __name__ == '__main__':
db = connect(sys.argv[1])
users = get_users(db)
user_data = fix_names(users)
update_users(db, user_data)
|
Add helper script to fix names in auth_user.
#!/usr/bin/env python3
""" Fix names in auth_users.
Usage: ./fix_names data/site/epcon.db
Uses nameparser package to do the hard work of splitting names into
first and last name.
Written by Marc-Andre Lemburg, 2019.
"""
import sqlite3
import sys
import nameparser
###
def connect(file):
db = sqlite3.connect(file)
db.row_factory = sqlite3.Row
return db
def get_users(db):
c = db.cursor()
c.execute('select * from auth_user')
return c.fetchall()
def fix_names(users):
""" Fix names in user records.
Yields records (first_name, last_name, id).
"""
for user in users:
id = user['id']
first_name = user['first_name'].strip()
last_name = user['last_name'].strip()
if not first_name and not last_name:
# Empty name: skip
print (f'Skipping empty name in record {id}')
continue
elif first_name == last_name:
full_name = first_name
elif first_name.endswith(last_name):
full_name = first_name
elif not last_name:
full_name = first_name
elif not first_name:
full_name = last_name
else:
# In this case, the user has most likely entered the name
# correctly split, so skip
full_name = first_name + last_name
print (f'Skipping already split name: {first_name} / {last_name} ({id})')
continue
print (f'Working on "{full_name}" ({id})')
# Handle email addresses
if '@' in full_name:
print (f' - fixing email address')
# Remove domain part
e_name = full_name[:full_name.find('@')]
if '+' in e_name:
# Remove alias
e_name = e_name[:e_name.find('+')]
# Try to split name parts
e_name = e_name.replace('.', ' ')
e_name = e_name.replace('_', ' ')
e_name = e_name.strip()
if len(e_name) < 4:
# Probably just initials: leave email as is
pass
else:
full_name = e_name
# Parse name
name = nameparser.HumanName(full_name)
name.capitalize()
first_name = name.first
last_name = name.last
print (f' - splitting name into: {first_name} / {last_name} ({id})')
yield (first_name, last_name, id)
def update_users(db, user_data):
c = db.cursor()
c.executemany('update auth_user set first_name=?, last_name=? where id=?',
user_data)
db.commit()
###
if __name__ == '__main__':
db = connect(sys.argv[1])
users = get_users(db)
user_data = fix_names(users)
update_users(db, user_data)
|
<commit_before><commit_msg>Add helper script to fix names in auth_user.<commit_after>#!/usr/bin/env python3
""" Fix names in auth_users.
Usage: ./fix_names data/site/epcon.db
Uses nameparser package to do the hard work of splitting names into
first and last name.
Written by Marc-Andre Lemburg, 2019.
"""
import sqlite3
import sys
import nameparser
###
def connect(file):
db = sqlite3.connect(file)
db.row_factory = sqlite3.Row
return db
def get_users(db):
c = db.cursor()
c.execute('select * from auth_user')
return c.fetchall()
def fix_names(users):
""" Fix names in user records.
Yields records (first_name, last_name, id).
"""
for user in users:
id = user['id']
first_name = user['first_name'].strip()
last_name = user['last_name'].strip()
if not first_name and not last_name:
# Empty name: skip
print (f'Skipping empty name in record {id}')
continue
elif first_name == last_name:
full_name = first_name
elif first_name.endswith(last_name):
full_name = first_name
elif not last_name:
full_name = first_name
elif not first_name:
full_name = last_name
else:
# In this case, the user has most likely entered the name
# correctly split, so skip
full_name = first_name + last_name
print (f'Skipping already split name: {first_name} / {last_name} ({id})')
continue
print (f'Working on "{full_name}" ({id})')
# Handle email addresses
if '@' in full_name:
print (f' - fixing email address')
# Remove domain part
e_name = full_name[:full_name.find('@')]
if '+' in e_name:
# Remove alias
e_name = e_name[:e_name.find('+')]
# Try to split name parts
e_name = e_name.replace('.', ' ')
e_name = e_name.replace('_', ' ')
e_name = e_name.strip()
if len(e_name) < 4:
# Probably just initials: leave email as is
pass
else:
full_name = e_name
# Parse name
name = nameparser.HumanName(full_name)
name.capitalize()
first_name = name.first
last_name = name.last
print (f' - splitting name into: {first_name} / {last_name} ({id})')
yield (first_name, last_name, id)
def update_users(db, user_data):
c = db.cursor()
c.executemany('update auth_user set first_name=?, last_name=? where id=?',
user_data)
db.commit()
###
if __name__ == '__main__':
db = connect(sys.argv[1])
users = get_users(db)
user_data = fix_names(users)
update_users(db, user_data)
|
|
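fix_names delegates the actual splitting to nameparser.HumanName. A small illustration of what that parsing yields (the sample name and the commented output are illustrative, not taken from the commit):

from nameparser import HumanName

name = HumanName('ada m. lovelace')
name.capitalize()  # normalizes casing of the parsed parts
print(name.first, name.middle, name.last)  # roughly: Ada M. Lovelace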
9db4d45498a08efdd772e238a5033f381bd48fea
|
add_user.py
|
add_user.py
|
import argparse
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from models import User
db = SQLAlchemy()
def main():
parser = argparse.ArgumentParser(description='Adds user to the database.')
parser.add_argument(
'-n', '--name',
required=True,
help="Name",
)
parser.add_argument(
'-e', '--email',
required=True,
help="Email",
)
parser.add_argument(
'-p', '--password',
required=True,
help="Password",
)
args = parser.parse_args()
app = Flask(__name__)
app.config.from_object('config.Config')
db.init_app(app)
with app.app_context():
admin = User(args.name, args.email, args.password)
db.session.add(admin)
db.session.commit()
if __name__ == '__main__':
main()
|
Add script to add user to the database
|
Add script to add user to the database
|
Python
|
mit
|
danoneata/video_annotation,danoneata/video_annotation,danoneata/video_annotation
|
Add script to add user to the database
|
import argparse
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from models import User
db = SQLAlchemy()
def main():
parser = argparse.ArgumentParser(description='Adds user to the database.')
parser.add_argument(
'-n', '--name',
required=True,
help="Name",
)
parser.add_argument(
'-e', '--email',
required=True,
help="Email",
)
parser.add_argument(
'-p', '--password',
required=True,
help="Password",
)
args = parser.parse_args()
app = Flask(__name__)
app.config.from_object('config.Config')
db.init_app(app)
with app.app_context():
admin = User(args.name, args.email, args.password)
db.session.add(admin)
db.session.commit()
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add script to add user to the database<commit_after>
|
import argparse
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from models import User
db = SQLAlchemy()
def main():
parser = argparse.ArgumentParser(description='Adds user to the database.')
parser.add_argument(
'-n', '--name',
required=True,
help="Name",
)
parser.add_argument(
'-e', '--email',
required=True,
help="Email",
)
parser.add_argument(
'-p', '--password',
required=True,
help="Password",
)
args = parser.parse_args()
app = Flask(__name__)
app.config.from_object('config.Config')
db.init_app(app)
with app.app_context():
admin = User(args.name, args.email, args.password)
db.session.add(admin)
db.session.commit()
if __name__ == '__main__':
main()
|
Add script to add user to the database
import argparse
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from models import User
db = SQLAlchemy()
def main():
parser = argparse.ArgumentParser(description='Adds user to the database.')
parser.add_argument(
'-n', '--name',
required=True,
help="Name",
)
parser.add_argument(
'-e', '--email',
required=True,
help="Email",
)
parser.add_argument(
'-p', '--password',
required=True,
help="Password",
)
args = parser.parse_args()
app = Flask(__name__)
app.config.from_object('config.Config')
db.init_app(app)
with app.app_context():
admin = User(args.name, args.email, args.password)
db.session.add(admin)
db.session.commit()
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add script to add user to the database<commit_after>import argparse
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from models import User
db = SQLAlchemy()
def main():
parser = argparse.ArgumentParser(description='Adds user to the database.')
parser.add_argument(
'-n', '--name',
required=True,
help="Name",
)
parser.add_argument(
'-e', '--email',
required=True,
help="Email",
)
parser.add_argument(
'-p', '--password',
required=True,
help="Password",
)
args = parser.parse_args()
app = Flask(__name__)
app.config.from_object('config.Config')
db.init_app(app)
with app.app_context():
admin = User(args.name, args.email, args.password)
db.session.add(admin)
db.session.commit()
if __name__ == '__main__':
main()
|
|
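Assuming the argparse setup above, the script would typically be invoked as python add_user.py --name 'Ada Lovelace' --email ada@example.com --password secret (all values here are placeholders).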
db451c584541e10101e840b633500e4ec0feb343
|
alembic/versions/40975dd21696_add_theme_column_to_chat_users.py
|
alembic/versions/40975dd21696_add_theme_column_to_chat_users.py
|
"""Add theme column to chat users.
Revision ID: 40975dd21696
Revises: 43912ab8304e
Create Date: 2015-09-22 21:53:34.160708
"""
# revision identifiers, used by Alembic.
revision = '40975dd21696'
down_revision = '43912ab8304e'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('chat_users', sa.Column('theme', sa.Unicode(length=255), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('chat_users', 'theme')
### end Alembic commands ###
|
Add theme column to chat users (alembic revision).
|
Add theme column to chat users (alembic revision).
|
Python
|
agpl-3.0
|
MSPARP/newparp,MSPARP/newparp,MSPARP/newparp
|
Add theme column to chat users (alembic revision).
|
"""Add theme column to chat users.
Revision ID: 40975dd21696
Revises: 43912ab8304e
Create Date: 2015-09-22 21:53:34.160708
"""
# revision identifiers, used by Alembic.
revision = '40975dd21696'
down_revision = '43912ab8304e'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('chat_users', sa.Column('theme', sa.Unicode(length=255), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('chat_users', 'theme')
### end Alembic commands ###
|
<commit_before><commit_msg>Add theme column to chat users (alembic revision).<commit_after>
|
"""Add theme column to chat users.
Revision ID: 40975dd21696
Revises: 43912ab8304e
Create Date: 2015-09-22 21:53:34.160708
"""
# revision identifiers, used by Alembic.
revision = '40975dd21696'
down_revision = '43912ab8304e'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('chat_users', sa.Column('theme', sa.Unicode(length=255), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('chat_users', 'theme')
### end Alembic commands ###
|
Add theme column to chat users (alembic revision).
"""Add theme column to chat users.
Revision ID: 40975dd21696
Revises: 43912ab8304e
Create Date: 2015-09-22 21:53:34.160708
"""
# revision identifiers, used by Alembic.
revision = '40975dd21696'
down_revision = '43912ab8304e'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('chat_users', sa.Column('theme', sa.Unicode(length=255), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('chat_users', 'theme')
### end Alembic commands ###
|
<commit_before><commit_msg>Add theme column to chat users (alembic revision).<commit_after>"""Add theme column to chat users.
Revision ID: 40975dd21696
Revises: 43912ab8304e
Create Date: 2015-09-22 21:53:34.160708
"""
# revision identifiers, used by Alembic.
revision = '40975dd21696'
down_revision = '43912ab8304e'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.add_column('chat_users', sa.Column('theme', sa.Unicode(length=255), nullable=True))
### end Alembic commands ###
def downgrade():
### commands auto generated by Alembic - please adjust! ###
op.drop_column('chat_users', 'theme')
### end Alembic commands ###
|
|
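Once a revision file like this is generated, the change is applied with alembic upgrade head and reverted with alembic downgrade -1, per standard Alembic usage; neither command is part of the commit itself.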
d425fa64ece6826d299ca2daadb9a08afa6e87b5
|
test/test_searchentities.py
|
test/test_searchentities.py
|
import unittest
from . import models
from sir.schema.searchentities import SearchEntity as E, SearchField as F
class QueryResultToDictTest(unittest.TestCase):
def setUp(self):
self.entity = E(models.B, [
F("id", "id"),
F("c_bar", "c.bar"),
F("c_bar_trans", "c.bar", transformfunc=lambda v:
v.union(set(["yay"])))
],
1.1
)
self.expected = {
"id": 1,
"c_bar": "foo",
"c_bar_trans": set(["foo", "yay"]),
}
c = models.C(id=2, bar="foo")
self.val = models.B(id=1, c=c)
def test_fields(self):
res = self.entity.query_result_to_dict(self.val)
self.assertDictEqual(self.expected, res)
|
Add a test for query_result_to_dict
|
Add a test for query_result_to_dict
|
Python
|
mit
|
jeffweeksio/sir
|
Add a test for query_result_to_dict
|
import unittest
from . import models
from sir.schema.searchentities import SearchEntity as E, SearchField as F
class QueryResultToDictTest(unittest.TestCase):
def setUp(self):
self.entity = E(models.B, [
F("id", "id"),
F("c_bar", "c.bar"),
F("c_bar_trans", "c.bar", transformfunc=lambda v:
v.union(set(["yay"])))
],
1.1
)
self.expected = {
"id": 1,
"c_bar": "foo",
"c_bar_trans": set(["foo", "yay"]),
}
c = models.C(id=2, bar="foo")
self.val = models.B(id=1, c=c)
def test_fields(self):
res = self.entity.query_result_to_dict(self.val)
self.assertDictEqual(self.expected, res)
|
<commit_before><commit_msg>Add a test for query_result_to_dict<commit_after>
|
import unittest
from . import models
from sir.schema.searchentities import SearchEntity as E, SearchField as F
class QueryResultToDictTest(unittest.TestCase):
def setUp(self):
self.entity = E(models.B, [
F("id", "id"),
F("c_bar", "c.bar"),
F("c_bar_trans", "c.bar", transformfunc=lambda v:
v.union(set(["yay"])))
],
1.1
)
self.expected = {
"id": 1,
"c_bar": "foo",
"c_bar_trans": set(["foo", "yay"]),
}
c = models.C(id=2, bar="foo")
self.val = models.B(id=1, c=c)
def test_fields(self):
res = self.entity.query_result_to_dict(self.val)
self.assertDictEqual(self.expected, res)
|
Add a test for query_result_to_dict
import unittest
from . import models
from sir.schema.searchentities import SearchEntity as E, SearchField as F
class QueryResultToDictTest(unittest.TestCase):
def setUp(self):
self.entity = E(models.B, [
F("id", "id"),
F("c_bar", "c.bar"),
F("c_bar_trans", "c.bar", transformfunc=lambda v:
v.union(set(["yay"])))
],
1.1
)
self.expected = {
"id": 1,
"c_bar": "foo",
"c_bar_trans": set(["foo", "yay"]),
}
c = models.C(id=2, bar="foo")
self.val = models.B(id=1, c=c)
def test_fields(self):
res = self.entity.query_result_to_dict(self.val)
self.assertDictEqual(self.expected, res)
|
<commit_before><commit_msg>Add a test for query_result_to_dict<commit_after>import unittest
from . import models
from sir.schema.searchentities import SearchEntity as E, SearchField as F
class QueryResultToDictTest(unittest.TestCase):
def setUp(self):
self.entity = E(models.B, [
F("id", "id"),
F("c_bar", "c.bar"),
F("c_bar_trans", "c.bar", transformfunc=lambda v:
v.union(set(["yay"])))
],
1.1
)
self.expected = {
"id": 1,
"c_bar": "foo",
"c_bar_trans": set(["foo", "yay"]),
}
c = models.C(id=2, bar="foo")
self.val = models.B(id=1, c=c)
def test_fields(self):
res = self.entity.query_result_to_dict(self.val)
self.assertDictEqual(self.expected, res)
|
|
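The fixture models imported from test/models.py are not included in this record. A minimal shape consistent with how the test uses them (B carries an id and a relationship c, C carries id and bar) might look like the sketch below; the project's actual fixtures may differ:

from sqlalchemy import Column, ForeignKey, Integer, Unicode
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class C(Base):
    __tablename__ = 'c'
    id = Column(Integer, primary_key=True)
    bar = Column(Unicode)

class B(Base):
    __tablename__ = 'b'
    id = Column(Integer, primary_key=True)
    c_id = Column(Integer, ForeignKey('c.id'))
    c = relationship(C)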
aa110dca3b424141d57ad5804efc5e90db52aa3c
|
cfgrib/__main__.py
|
cfgrib/__main__.py
|
import argparse
import sys
from . import eccodes
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--selfcheck', default=False, action='store_true')
args = parser.parse_args()
if args.selfcheck:
eccodes.codes_get_api_version()
print("Your system is ready.")
else:
raise RuntimeError("Command not recognised. See usage with --help.")
if __name__ == '__main__':
main()
|
Add a basic selfcheck easily callable from the command line.
|
Add a basic selfcheck easily callable from the command line.
|
Python
|
apache-2.0
|
ecmwf/cfgrib
|
Add a basic selfcheck easily callable from the command line.
|
import argparse
import sys
from . import eccodes
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--selfcheck', default=False, action='store_true')
args = parser.parse_args()
if args.selfcheck:
eccodes.codes_get_api_version()
print("Your system is ready.")
else:
raise RuntimeError("Command not recognised. See usage with --help.")
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add a basic selfcheck easily callable from the command line.<commit_after>
|
import argparse
import sys
from . import eccodes
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--selfcheck', default=False, action='store_true')
args = parser.parse_args()
if args.selfcheck:
eccodes.codes_get_api_version()
print("Your system is ready.")
else:
raise RuntimeError("Command not recognised. See usage with --help.")
if __name__ == '__main__':
main()
|
Add a basic selfcheck easily callable from the command line.
import argparse
import sys
from . import eccodes
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--selfcheck', default=False, action='store_true')
args = parser.parse_args()
if args.selfcheck:
eccodes.codes_get_api_version()
print("Your system is ready.")
else:
raise RuntimeError("Command not recognised. See usage with --help.")
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add a basic selfcheck easily callable from the command line.<commit_after>
import argparse
import sys
from . import eccodes
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--selfcheck', default=False, action='store_true')
args = parser.parse_args()
if args.selfcheck:
eccodes.codes_get_api_version()
print("Your system is ready.")
else:
raise RuntimeError("Command not recognised. See usage with --help.")
if __name__ == '__main__':
main()
|
|
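Because the parser lives in cfgrib/__main__.py, the check runs as python -m cfgrib --selfcheck and prints "Your system is ready." when the ecCodes bindings load.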
803895453d3a243ec3e7d9cddc6d0479692282ee
|
final/problem4.py
|
final/problem4.py
|
# Problem 4
# 20.0 points possible (graded)
# You are given the following definitions:
# A run of monotonically increasing numbers means that a number at position k+1 in the sequence is greater than or equal to the number at position k in the sequence.
# A run of monotonically decreasing numbers means that a number at position k+1 in the sequence is less than or equal to the number at position k in the sequence.
# Implement a function that meets the specifications below.
def longest_run(L):
"""
Assumes L is a list of integers containing at least 2 elements.
Finds the longest run of numbers in L, where the longest run can
either be monotonically increasing or monotonically decreasing.
In case of a tie for the longest run, choose the longest run
that occurs first.
Does not modify the list.
Returns the sum of the longest run.
"""
longestIncreasingRun = []
longestDecreasingRun = []
increasingRun = []
decreasingRun = []
for i in range(len(L)):
if len(increasingRun) == 0:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] == increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] > increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun = [L[i]]
elif L[i] < decreasingRun[-1]:
decreasingRun.append(L[i])
increasingRun = [L[i]]
if len(increasingRun) > len(longestIncreasingRun):
longestIncreasingRun = increasingRun
if len(decreasingRun) > len(longestDecreasingRun):
longestDecreasingRun = decreasingRun
if len(longestIncreasingRun) > len(longestDecreasingRun):
return sum(longestIncreasingRun)
elif len(longestDecreasingRun) > len(longestIncreasingRun):
return sum(longestDecreasingRun)
else:
for i in range(len(L)):
if L[i] == longestIncreasingRun[0]:
if L[i + len(longestIncreasingRun) - 1] == longestIncreasingRun[-1]:
return sum(longestIncreasingRun)
elif L[i] == longestDecreasingRun[0]:
if L[i + len(longestDecreasingRun) - 1] == longestDecreasingRun[-1]:
return sum(longestDecreasingRun)
# L = [10, 4, 3, 8, 3, 4, 5, 7, 7, 2]
# Increasing is [3, 4, 5, 7, 7], Decreasing is [10, 4, 3]
# Answer is 26 since the longer run is the increasing list (3+4+5+7+7 = 26)
L = [-1, -10, -10, -10, -10, -10, -10, -100]
print(longest_run(L))
|
Implement a function that returns the sum of the longest run
|
Implement a function that returns the sum of the longest run
|
Python
|
mit
|
Kunal57/MIT_6.00.1x
|
Implement a function that returns the sum of the longest run
|
# Problem 4
# 20.0 points possible (graded)
# You are given the following definitions:
# A run of monotonically increasing numbers means that a number at position k+1 in the sequence is greater than or equal to the number at position k in the sequence.
# A run of monotonically decreasing numbers means that a number at position k+1 in the sequence is less than or equal to the number at position k in the sequence.
# Implement a function that meets the specifications below.
def longest_run(L):
"""
Assumes L is a list of integers containing at least 2 elements.
Finds the longest run of numbers in L, where the longest run can
either be monotonically increasing or monotonically decreasing.
In case of a tie for the longest run, choose the longest run
that occurs first.
Does not modify the list.
Returns the sum of the longest run.
"""
longestIncreasingRun = []
longestDecreasingRun = []
increasingRun = []
decreasingRun = []
for i in range(len(L)):
if len(increasingRun) == 0:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] == increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] > increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun = [L[i]]
elif L[i] < decreasingRun[-1]:
decreasingRun.append(L[i])
increasingRun = [L[i]]
if len(increasingRun) > len(longestIncreasingRun):
longestIncreasingRun = increasingRun
if len(decreasingRun) > len(longestDecreasingRun):
longestDecreasingRun = decreasingRun
if len(longestIncreasingRun) > len(longestDecreasingRun):
return sum(longestIncreasingRun)
elif len(longestDecreasingRun) > len(longestIncreasingRun):
return sum(longestDecreasingRun)
else:
for i in range(len(L)):
if L[i] == longestIncreasingRun[0]:
if L[i + len(longestIncreasingRun) - 1] == longestIncreasingRun[-1]:
return sum(longestIncreasingRun)
elif L[i] == longestDecreasingRun[0]:
if L[i + len(longestDecreasingRun) - 1] == longestDecreasingRun[-1]:
return sum(longestDecreasingRun)
# L = [10, 4, 3, 8, 3, 4, 5, 7, 7, 2]
# Increasing is [3, 4, 5, 7, 7], Decreasing is [10, 4, 3]
# Answer is 26 since the longer run is the increasing list (3+4+5+7+7 = 26)
L = [-1, -10, -10, -10, -10, -10, -10, -100]
print(longest_run(L))
|
<commit_before><commit_msg>Implement a function that returns the sum of the longest run<commit_after>
|
# Problem 4
# 20.0 points possible (graded)
# You are given the following definitions:
# A run of monotonically increasing numbers means that a number at position k+1 in the sequence is greater than or equal to the number at position k in the sequence.
# A run of monotonically decreasing numbers means that a number at position k+1 in the sequence is less than or equal to the number at position k in the sequence.
# Implement a function that meets the specifications below.
def longest_run(L):
"""
Assumes L is a list of integers containing at least 2 elements.
Finds the longest run of numbers in L, where the longest run can
either be monotonically increasing or monotonically decreasing.
In case of a tie for the longest run, choose the longest run
that occurs first.
Does not modify the list.
Returns the sum of the longest run.
"""
longestIncreasingRun = []
longestDecreasingRun = []
increasingRun = []
decreasingRun = []
for i in range(len(L)):
if len(increasingRun) == 0:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] == increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] > increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun = [L[i]]
elif L[i] < decreasingRun[-1]:
decreasingRun.append(L[i])
increasingRun = [L[i]]
if len(increasingRun) > len(longestIncreasingRun):
longestIncreasingRun = increasingRun
if len(decreasingRun) > len(longestDecreasingRun):
longestDecreasingRun = decreasingRun
if len(longestIncreasingRun) > len(longestDecreasingRun):
return sum(longestIncreasingRun)
elif len(longestDecreasingRun) > len(longestIncreasingRun):
return sum(longestDecreasingRun)
else:
for i in range(len(L)):
if L[i] == longestIncreasingRun[0]:
if L[i + len(longestIncreasingRun) - 1] == longestIncreasingRun[-1]:
return sum(longestIncreasingRun)
elif L[i] == longestDecreasingRun[0]:
if L[i + len(longestDecreasingRun) - 1] == longestDecreasingRun[-1]:
return sum(longestDecreasingRun)
# L = [10, 4, 3, 8, 3, 4, 5, 7, 7, 2]
# Increasing is [3, 4, 5, 7, 7], Decreasing is [10, 4, 3]
# Answer is 26 since the longer run is the increasing list (3+4+5+7+7 = 26)
L = [-1, -10, -10, -10, -10, -10, -10, -100]
print(longest_run(L))
|
Implement a function that returns the sum of the longest run
# Problem 4
# 20.0 points possible (graded)
# You are given the following definitions:
# A run of monotonically increasing numbers means that a number at position k+1 in the sequence is greater than or equal to the number at position k in the sequence.
# A run of monotonically decreasing numbers means that a number at position k+1 in the sequence is less than or equal to the number at position k in the sequence.
# Implement a function that meets the specifications below.
def longest_run(L):
"""
Assumes L is a list of integers containing at least 2 elements.
Finds the longest run of numbers in L, where the longest run can
either be monotonically increasing or monotonically decreasing.
In case of a tie for the longest run, choose the longest run
that occurs first.
Does not modify the list.
Returns the sum of the longest run.
"""
longestIncreasingRun = []
longestDecreasingRun = []
increasingRun = []
decreasingRun = []
for i in range(len(L)):
if len(increasingRun) == 0:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] == increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] > increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun = [L[i]]
elif L[i] < decreasingRun[-1]:
decreasingRun.append(L[i])
increasingRun = [L[i]]
if len(increasingRun) > len(longestIncreasingRun):
longestIncreasingRun = increasingRun
if len(decreasingRun) > len(longestDecreasingRun):
longestDecreasingRun = decreasingRun
if len(longestIncreasingRun) > len(longestDecreasingRun):
return sum(longestIncreasingRun)
elif len(longestDecreasingRun) > len(longestIncreasingRun):
return sum(longestDecreasingRun)
else:
for i in range(len(L)):
if L[i] == longestIncreasingRun[0]:
if L[i + len(longestIncreasingRun) - 1] == longestIncreasingRun[-1]:
return sum(longestIncreasingRun)
elif L[i] == longestDecreasingRun[0]:
if L[i + len(longestDecreasingRun) - 1] == longestDecreasingRun[-1]:
return sum(longestDecreasingRun)
# L = [10, 4, 3, 8, 3, 4, 5, 7, 7, 2]
# Increasing is [3, 4, 5, 7, 7], Decreasing is [10, 4, 3]
# Answer is 26 since the longer run is the increasing list (3+4+5+7+7 = 26)
L = [-1, -10, -10, -10, -10, -10, -10, -100]
print(longest_run(L))
|
<commit_before><commit_msg>Implement a function that returns the sum of the longest run<commit_after># Problem 4
# 20.0 points possible (graded)
# You are given the following definitions:
# A run of monotonically increasing numbers means that a number at position k+1 in the sequence is greater than or equal to the number at position k in the sequence.
# A run of monotonically decreasing numbers means that a number at position k+1 in the sequence is less than or equal to the number at position k in the sequence.
# Implement a function that meets the specifications below.
def longest_run(L):
"""
Assumes L is a list of integers containing at least 2 elements.
Finds the longest run of numbers in L, where the longest run can
either be monotonically increasing or monotonically decreasing.
In case of a tie for the longest run, choose the longest run
that occurs first.
Does not modify the list.
Returns the sum of the longest run.
"""
longestIncreasingRun = []
longestDecreasingRun = []
increasingRun = []
decreasingRun = []
for i in range(len(L)):
if len(increasingRun) == 0:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] == increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun.append(L[i])
elif L[i] > increasingRun[-1]:
increasingRun.append(L[i])
decreasingRun = [L[i]]
elif L[i] < decreasingRun[-1]:
decreasingRun.append(L[i])
increasingRun = [L[i]]
if len(increasingRun) > len(longestIncreasingRun):
longestIncreasingRun = increasingRun
if len(decreasingRun) > len(longestDecreasingRun):
longestDecreasingRun = decreasingRun
if len(longestIncreasingRun) > len(longestDecreasingRun):
return sum(longestIncreasingRun)
elif len(longestDecreasingRun) > len(longestIncreasingRun):
return sum(longestDecreasingRun)
else:
for i in range(len(L)):
if L[i] == longestIncreasingRun[0]:
if L[i + len(longestIncreasingRun) - 1] == longestIncreasingRun[-1]:
return sum(longestIncreasingRun)
elif L[i] == longestDecreasingRun[0]:
if L[i + len(longestDecreasingRun) - 1] == longestDecreasingRun[-1]:
return sum(longestDecreasingRun)
# L = [10, 4, 3, 8, 3, 4, 5, 7, 7, 2]
# Increasing is [3, 4, 5, 7, 7], Decreasing is [10, 4, 3]
# Answer is 26 since the longer run is the increasing list (3+4+5+7+7 = 26)
L = [-1, -10, -10, -10, -10, -10, -10, -100]
print(longest_run(L))
|
|
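The solution above materializes the run lists themselves. A single-pass variant only has to track the length and running sum of the increasing and decreasing runs ending at the current position; the sketch below (name and structure are illustrative, not part of the course material) returns the same answers for the examples shown:

def longest_run_sums(L):
    # (length, sum) of the best run so far and of the monotone runs
    # ending at the current position.
    best_len, best_sum = 1, L[0]
    inc_len, inc_sum = 1, L[0]
    dec_len, dec_sum = 1, L[0]
    for prev, cur in zip(L, L[1:]):
        inc_len, inc_sum = (inc_len + 1, inc_sum + cur) if cur >= prev else (1, cur)
        dec_len, dec_sum = (dec_len + 1, dec_sum + cur) if cur <= prev else (1, cur)
        # Strict '>' keeps the earliest run in case of a tie.
        if inc_len > best_len:
            best_len, best_sum = inc_len, inc_sum
        if dec_len > best_len:
            best_len, best_sum = dec_len, dec_sum
    return best_sum

assert longest_run_sums([10, 4, 3, 8, 3, 4, 5, 7, 7, 2]) == 26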
7da5da19c033d775cd04e927cd7c9df0f66bcea5
|
examples/test_hack_search.py
|
examples/test_hack_search.py
|
""" Testing the "self.set_attribute()" and "self.set_attributes()" methods
to modify a Google search into becoming a Bing search.
set_attribute() -> Modifies the attribute of the first matching element.
set_attributes() -> Modifies the attribute of all matching elements. """
from seleniumbase import BaseCase
class HackingTest(BaseCase):
def test_hack_search(self):
self.open("https://google.com/ncr")
self.assert_element('input[title="Search"]')
self.set_attribute('[action="/search"]', "action", "//bing.com/search")
self.set_attributes('[value="Google Search"]', "value", "Bing Search")
self.update_text('input[title="Search"]', "SeleniumBase GitHub")
self.click('[value="Bing Search"]')
self.assert_element("h1.b_logo")
self.click('[href*="github.com/seleniumbase/SeleniumBase"]')
self.assert_element('[href="/seleniumbase/SeleniumBase"]')
self.assert_true("seleniumbase/SeleniumBase" in self.get_current_url())
self.click('[title="examples"]')
self.assert_text('examples', 'strong.final-path')
|
Add test demonstrating set_attribute() and set_attributes()
|
Add test demonstrating set_attribute() and set_attributes()
|
Python
|
mit
|
mdmintz/SeleniumBase,seleniumbase/SeleniumBase,mdmintz/SeleniumBase,seleniumbase/SeleniumBase,mdmintz/SeleniumBase,seleniumbase/SeleniumBase,seleniumbase/SeleniumBase,mdmintz/SeleniumBase
|
Add test demonstrating set_attribute() and set_attributes()
|
""" Testing the "self.set_attribute()" and "self.set_attributes()" methods
to modify a Google search into becoming a Bing search.
set_attribute() -> Modifies the attribute of the first matching element.
set_attributes() -> Modifies the attribute of all matching elements. """
from seleniumbase import BaseCase
class HackingTest(BaseCase):
def test_hack_search(self):
self.open("https://google.com/ncr")
self.assert_element('input[title="Search"]')
self.set_attribute('[action="/search"]', "action", "//bing.com/search")
self.set_attributes('[value="Google Search"]', "value", "Bing Search")
self.update_text('input[title="Search"]', "SeleniumBase GitHub")
self.click('[value="Bing Search"]')
self.assert_element("h1.b_logo")
self.click('[href*="github.com/seleniumbase/SeleniumBase"]')
self.assert_element('[href="/seleniumbase/SeleniumBase"]')
self.assert_true("seleniumbase/SeleniumBase" in self.get_current_url())
self.click('[title="examples"]')
self.assert_text('examples', 'strong.final-path')
|
<commit_before><commit_msg>Add test demonstrating set_attribute() and set_attributes()<commit_after>
|
""" Testing the "self.set_attribute()" and "self.set_attributes()" methods
to modify a Google search into becoming a Bing search.
set_attribute() -> Modifies the attribute of the first matching element.
set_attributes() -> Modifies the attribute of all matching elements. """
from seleniumbase import BaseCase
class HackingTest(BaseCase):
def test_hack_search(self):
self.open("https://google.com/ncr")
self.assert_element('input[title="Search"]')
self.set_attribute('[action="/search"]', "action", "//bing.com/search")
self.set_attributes('[value="Google Search"]', "value", "Bing Search")
self.update_text('input[title="Search"]', "SeleniumBase GitHub")
self.click('[value="Bing Search"]')
self.assert_element("h1.b_logo")
self.click('[href*="github.com/seleniumbase/SeleniumBase"]')
self.assert_element('[href="/seleniumbase/SeleniumBase"]')
self.assert_true("seleniumbase/SeleniumBase" in self.get_current_url())
self.click('[title="examples"]')
self.assert_text('examples', 'strong.final-path')
|
Add test demonstrating set_attribute() and set_attributes()""" Testing the "self.set_attribute()" and "self.set_attributes()" methods
to modify a Google search into becoming a Bing search.
set_attribute() -> Modifies the attribute of the first matching element.
set_attributes() -> Modifies the attribute of all matching elements. """
from seleniumbase import BaseCase
class HackingTest(BaseCase):
def test_hack_search(self):
self.open("https://google.com/ncr")
self.assert_element('input[title="Search"]')
self.set_attribute('[action="/search"]', "action", "//bing.com/search")
self.set_attributes('[value="Google Search"]', "value", "Bing Search")
self.update_text('input[title="Search"]', "SeleniumBase GitHub")
self.click('[value="Bing Search"]')
self.assert_element("h1.b_logo")
self.click('[href*="github.com/seleniumbase/SeleniumBase"]')
self.assert_element('[href="/seleniumbase/SeleniumBase"]')
self.assert_true("seleniumbase/SeleniumBase" in self.get_current_url())
self.click('[title="examples"]')
self.assert_text('examples', 'strong.final-path')
|
<commit_before><commit_msg>Add test demonstrating set_attribute() and set_attributes()<commit_after>""" Testing the "self.set_attribute()" and "self.set_attributes()" methods
to modify a Google search into becoming a Bing search.
set_attribute() -> Modifies the attribute of the first matching element.
set_attributes() -> Modifies the attribute of all matching elements. """
from seleniumbase import BaseCase
class HackingTest(BaseCase):
def test_hack_search(self):
self.open("https://google.com/ncr")
self.assert_element('input[title="Search"]')
self.set_attribute('[action="/search"]', "action", "//bing.com/search")
self.set_attributes('[value="Google Search"]', "value", "Bing Search")
self.update_text('input[title="Search"]', "SeleniumBase GitHub")
self.click('[value="Bing Search"]')
self.assert_element("h1.b_logo")
self.click('[href*="github.com/seleniumbase/SeleniumBase"]')
self.assert_element('[href="/seleniumbase/SeleniumBase"]')
self.assert_true("seleniumbase/SeleniumBase" in self.get_current_url())
self.click('[title="examples"]')
self.assert_text('examples', 'strong.final-path')
|
|
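Rewriting an attribute like this boils down to a JavaScript setAttribute call. A plain-Selenium sketch of the same idea for the first matching element (illustrative only, not SeleniumBase's actual implementation; driver is assumed to be a live WebDriver):

driver.execute_script(
    'document.querySelector(arguments[0]).setAttribute(arguments[1], arguments[2]);',
    '[action="/search"]', 'action', '//bing.com/search')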
21d0c9ed87fba2ca706aa2c098d67094c1e43250
|
examples/formatted_output.py
|
examples/formatted_output.py
|
from diffr import diff
a = [0] * 5 + ['-' * 5 + 'a' + '-' * 5] + [0] * 5
b = [0] * 5 + ['-' * 5 + 'b' + '-' * 5] + [0] * 5
d = diff(a, b)
print('Print out the diff in full by default')
print(d)
print()
print('Reduce the context by formatting with context_limit = 3')
print('{:3c}'.format(d))
print()
print('Reduce the context further (context_limit = 1)')
print(format(d, '1c'))
|
Add examples for diff formatting
|
Add examples for diff formatting
|
Python
|
mit
|
grahamegee/diffr
|
Add examples for diff formatting
|
from diffr import diff
a = [0] * 5 + ['-' * 5 + 'a' + '-' * 5] + [0] * 5
b = [0] * 5 + ['-' * 5 + 'b' + '-' * 5] + [0] * 5
d = diff(a, b)
print('Print out the diff in full by default')
print(d)
print()
print('Reduce the context by formatting with context_limit = 3')
print('{:3c}'.format(d))
print()
print('Reduce the context further (context_limit = 1)')
print(format(d, '1c'))
|
<commit_before><commit_msg>Add examples for diff formatting<commit_after>
|
from diffr import diff
a = [0] * 5 + ['-' * 5 + 'a' + '-' * 5] + [0] * 5
b = [0] * 5 + ['-' * 5 + 'b' + '-' * 5] + [0] * 5
d = diff(a, b)
print('Print out the diff in full by default')
print(d)
print()
print('Reduce the context by formatting with context_limit = 3')
print('{:3c}'.format(d))
print()
print('Reduce the context further (context_limit = 1)')
print(format(d, '1c'))
|
Add examples for diff formatting
from diffr import diff
a = [0] * 5 + ['-' * 5 + 'a' + '-' * 5] + [0] * 5
b = [0] * 5 + ['-' * 5 + 'b' + '-' * 5] + [0] * 5
d = diff(a, b)
print('Print out the diff in full by default')
print(d)
print()
print('Reduce the context by formatting with context_limit = 3')
print('{:3c}'.format(d))
print()
print('Reduce the context further (context_limit = 1)')
print(format(d, '1c'))
|
<commit_before><commit_msg>Add examples for diff formatting<commit_after>from diffr import diff
a = [0] * 5 + ['-' * 5 + 'a' + '-' * 5] + [0] * 5
b = [0] * 5 + ['-' * 5 + 'b' + '-' * 5] + [0] * 5
d = diff(a, b)
print('Print out the diff in full by default')
print(d)
print()
print('Reduce the context by formatting with context_limit = 3')
print('{:3c}'.format(d))
print()
print('Reduce the context further (context_limit = 1)')
print(format(d, '1c'))
|
|
ff1b86135fc42fdd73b34902be248523020cbb2d
|
examples/rpc_dict_handler.py
|
examples/rpc_dict_handler.py
|
import asyncio
import aiozmq
import aiozmq.rpc
@aiozmq.rpc.method
def a():
return 'a'
@aiozmq.rpc.method
def b():
return 'b'
handlers_dict = {'a': a,
'subnamespace': {'b': b}
}
@asyncio.coroutine
def go():
server = yield from aiozmq.rpc.start_server(
handlers_dict, bind='tcp://*:*')
server_addr = next(iter(server.transport.bindings()))
client = yield from aiozmq.rpc.open_client(
connect=server_addr)
ret = yield from client.rpc.a()
assert 'a' == ret
ret = yield from client.rpc.subnamespace.b()
assert 'b' == ret
server.close()
client.close()
def main():
asyncio.set_event_loop_policy(aiozmq.ZmqEventLoopPolicy())
asyncio.get_event_loop().run_until_complete(go())
print("DONE")
if __name__ == '__main__':
main()
|
Add example for dicts as Handler structure.
|
Add example for dicts as Handler structure.
|
Python
|
bsd-2-clause
|
aio-libs/aiozmq,MetaMemoryT/aiozmq,claws/aiozmq,asteven/aiozmq
|
Add example for dicts as Handler structure.
|
import asyncio
import aiozmq
import aiozmq.rpc
@aiozmq.rpc.method
def a():
return 'a'
@aiozmq.rpc.method
def b():
return 'b'
handlers_dict = {'a': a,
'subnamespace': {'b': b}
}
@asyncio.coroutine
def go():
server = yield from aiozmq.rpc.start_server(
handlers_dict, bind='tcp://*:*')
server_addr = next(iter(server.transport.bindings()))
client = yield from aiozmq.rpc.open_client(
connect=server_addr)
ret = yield from client.rpc.a()
assert 'a' == ret
ret = yield from client.rpc.subnamespace.b()
assert 'b' == ret
server.close()
client.close()
def main():
asyncio.set_event_loop_policy(aiozmq.ZmqEventLoopPolicy())
asyncio.get_event_loop().run_until_complete(go())
print("DONE")
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add example for dicts as Handler structure.<commit_after>
|
import asyncio
import aiozmq
import aiozmq.rpc
@aiozmq.rpc.method
def a():
return 'a'
@aiozmq.rpc.method
def b():
return 'b'
handlers_dict = {'a': a,
'subnamespace': {'b': b}
}
@asyncio.coroutine
def go():
server = yield from aiozmq.rpc.start_server(
handlers_dict, bind='tcp://*:*')
server_addr = next(iter(server.transport.bindings()))
client = yield from aiozmq.rpc.open_client(
connect=server_addr)
ret = yield from client.rpc.a()
assert 'a' == ret
ret = yield from client.rpc.subnamespace.b()
assert 'b' == ret
server.close()
client.close()
def main():
asyncio.set_event_loop_policy(aiozmq.ZmqEventLoopPolicy())
asyncio.get_event_loop().run_until_complete(go())
print("DONE")
if __name__ == '__main__':
main()
|
Add example for dicts as Handler structure.
import asyncio
import aiozmq
import aiozmq.rpc
@aiozmq.rpc.method
def a():
return 'a'
@aiozmq.rpc.method
def b():
return 'b'
handlers_dict = {'a': a,
'subnamespace': {'b': b}
}
@asyncio.coroutine
def go():
server = yield from aiozmq.rpc.start_server(
handlers_dict, bind='tcp://*:*')
server_addr = next(iter(server.transport.bindings()))
client = yield from aiozmq.rpc.open_client(
connect=server_addr)
ret = yield from client.rpc.a()
assert 'a' == ret
ret = yield from client.rpc.subnamespace.b()
assert 'b' == ret
server.close()
client.close()
def main():
asyncio.set_event_loop_policy(aiozmq.ZmqEventLoopPolicy())
asyncio.get_event_loop().run_until_complete(go())
print("DONE")
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add example for dicts as Handler structure.<commit_after>import asyncio
import aiozmq
import aiozmq.rpc
@aiozmq.rpc.method
def a():
return 'a'
@aiozmq.rpc.method
def b():
return 'b'
handlers_dict = {'a': a,
'subnamespace': {'b': b}
}
@asyncio.coroutine
def go():
server = yield from aiozmq.rpc.start_server(
handlers_dict, bind='tcp://*:*')
server_addr = next(iter(server.transport.bindings()))
client = yield from aiozmq.rpc.open_client(
connect=server_addr)
ret = yield from client.rpc.a()
assert 'a' == ret
ret = yield from client.rpc.subnamespace.b()
assert 'b' == ret
server.close()
client.close()
def main():
asyncio.set_event_loop_policy(aiozmq.ZmqEventLoopPolicy())
asyncio.get_event_loop().run_until_complete(go())
print("DONE")
if __name__ == '__main__':
main()
|
|
cd21f65fd42aa87105c42e3309a99ada3d25d144
|
corehq/apps/users/management/commands/set_locations_properly.py
|
corehq/apps/users/management/commands/set_locations_properly.py
|
from django.core.management.base import BaseCommand
from corehq.apps.users.models import CommCareUser
from corehq.apps.es import UserES
from corehq.apps.locations.models import Location
from dimagi.utils.couch.database import iter_docs
def get_affected_ids():
users = (UserES()
.mobile_users()
.non_null("location_id")
.fields(["location_id", "user_data.commcare_location_id", "_id"])
.run().hits)
print "There are {} users".format(len(users))
user_ids, location_ids = [], []
for u in users:
if u['location_id'] != u.get('user_data', {}).get('commcare_location_id'):
user_ids.append(u['_id'])
location_ids.append(u['location_id'])
print "There are {} bad users".format(len(user_ids))
return user_ids, location_ids
def set_correct_locations():
user_ids, location_ids = get_affected_ids()
locations = {
doc['_id']: Location.wrap(doc)
for doc in iter_docs(Location.get_db(), location_ids)
}
users_set, users_unset = 0, 0
for doc in iter_docs(CommCareUser.get_db(), user_ids):
user = CommCareUser.wrap(doc)
if user.location_id != user.user_data.get('commcare_location_id'):
location = locations.get(user.location_id, None)
if location:
user.set_location(location)
users_set += 1
else:
user.unset_location()
users_unset += 1
print "Set locations on {} users".format(users_set)
print "Unset locations on {} users".format(users_unset)
class Command(BaseCommand):
help = "Deletes the given user"
args = '<user>'
def handle(self, *args, **options):
set_correct_locations()
|
Fix where user_data and location_id don't match
|
Fix where user_data and location_id don't match
|
Python
|
bsd-3-clause
|
dimagi/commcare-hq,dimagi/commcare-hq,dimagi/commcare-hq,qedsoftware/commcare-hq,dimagi/commcare-hq,qedsoftware/commcare-hq,dimagi/commcare-hq,qedsoftware/commcare-hq,qedsoftware/commcare-hq,qedsoftware/commcare-hq
|
Fix where user_data and location_id don't match
|
from django.core.management.base import BaseCommand
from corehq.apps.users.models import CommCareUser
from corehq.apps.es import UserES
from corehq.apps.locations.models import Location
from dimagi.utils.couch.database import iter_docs
def get_affected_ids():
users = (UserES()
.mobile_users()
.non_null("location_id")
.fields(["location_id", "user_data.commcare_location_id", "_id"])
.run().hits)
print "There are {} users".format(len(users))
user_ids, location_ids = [], []
for u in users:
if u['location_id'] != u.get('user_data', {}).get('commcare_location_id'):
user_ids.append(u['_id'])
location_ids.append(u['location_id'])
print "There are {} bad users".format(len(user_ids))
return user_ids, location_ids
def set_correct_locations():
user_ids, location_ids = get_affected_ids()
locations = {
doc['_id']: Location.wrap(doc)
for doc in iter_docs(Location.get_db(), location_ids)
}
users_set, users_unset = 0, 0
for doc in iter_docs(CommCareUser.get_db(), user_ids):
user = CommCareUser.wrap(doc)
if user.location_id != user.user_data.get('commcare_location_id'):
location = locations.get(user.location_id, None)
if location:
user.set_location(location)
users_set += 1
else:
user.unset_location()
users_unset += 1
print "Set locations on {} users".format(users_set)
print "Unset locations on {} users".format(users_unset)
class Command(BaseCommand):
help = "Deletes the given user"
args = '<user>'
def handle(self, *args, **options):
set_correct_locations()
|
<commit_before><commit_msg>Fix where user_data and location_id don't match<commit_after>
|
from django.core.management.base import BaseCommand
from corehq.apps.users.models import CommCareUser
from corehq.apps.es import UserES
from corehq.apps.locations.models import Location
from dimagi.utils.couch.database import iter_docs
def get_affected_ids():
users = (UserES()
.mobile_users()
.non_null("location_id")
.fields(["location_id", "user_data.commcare_location_id", "_id"])
.run().hits)
print "There are {} users".format(len(users))
user_ids, location_ids = [], []
for u in users:
if u['location_id'] != u.get('user_data', {}).get('commcare_location_id'):
user_ids.append(u['_id'])
location_ids.append(u['location_id'])
print "There are {} bad users".format(len(user_ids))
return user_ids, location_ids
def set_correct_locations():
user_ids, location_ids = get_affected_ids()
locations = {
doc['_id']: Location.wrap(doc)
for doc in iter_docs(Location.get_db(), location_ids)
}
users_set, users_unset = 0, 0
for doc in iter_docs(CommCareUser.get_db(), user_ids):
user = CommCareUser.wrap(doc)
if user.location_id != user.user_data.get('commcare_location_id'):
location = locations.get(user.location_id, None)
if location:
user.set_location(location)
users_set += 1
else:
user.unset_location()
users_unset += 1
print "Set locations on {} users".format(users_set)
print "Unset locations on {} users".format(users_unset)
class Command(BaseCommand):
help = "Deletes the given user"
args = '<user>'
def handle(self, *args, **options):
set_correct_locations()
|
Fix where user_data and location_id don't match
from django.core.management.base import BaseCommand
from corehq.apps.users.models import CommCareUser
from corehq.apps.es import UserES
from corehq.apps.locations.models import Location
from dimagi.utils.couch.database import iter_docs
def get_affected_ids():
users = (UserES()
.mobile_users()
.non_null("location_id")
.fields(["location_id", "user_data.commcare_location_id", "_id"])
.run().hits)
print "There are {} users".format(len(users))
user_ids, location_ids = [], []
for u in users:
if u['location_id'] != u.get('user_data', {}).get('commcare_location_id'):
user_ids.append(u['_id'])
location_ids.append(u['location_id'])
print "There are {} bad users".format(len(user_ids))
return user_ids, location_ids
def set_correct_locations():
user_ids, location_ids = get_affected_ids()
locations = {
doc['_id']: Location.wrap(doc)
for doc in iter_docs(Location.get_db(), location_ids)
}
users_set, users_unset = 0, 0
for doc in iter_docs(CommCareUser.get_db(), user_ids):
user = CommCareUser.wrap(doc)
if user.location_id != user.user_data.get('commcare_location_id'):
location = locations.get(user.location_id, None)
if location:
user.set_location(location)
users_set += 1
else:
user.unset_location()
users_unset += 1
print "Set locations on {} users".format(users_set)
print "Unset locations on {} users".format(users_unset)
class Command(BaseCommand):
help = "Deletes the given user"
args = '<user>'
def handle(self, *args, **options):
set_correct_locations()
|
<commit_before><commit_msg>Fix where user_data and location_id don't match<commit_after>from django.core.management.base import BaseCommand
from corehq.apps.users.models import CommCareUser
from corehq.apps.es import UserES
from corehq.apps.locations.models import Location
from dimagi.utils.couch.database import iter_docs
def get_affected_ids():
users = (UserES()
.mobile_users()
.non_null("location_id")
.fields(["location_id", "user_data.commcare_location_id", "_id"])
.run().hits)
print "There are {} users".format(len(users))
user_ids, location_ids = [], []
for u in users:
if u['location_id'] != u.get('user_data', {}).get('commcare_location_id'):
user_ids.append(u['_id'])
location_ids.append(u['location_id'])
print "There are {} bad users".format(len(user_ids))
return user_ids, location_ids
def set_correct_locations():
user_ids, location_ids = get_affected_ids()
locations = {
doc['_id']: Location.wrap(doc)
for doc in iter_docs(Location.get_db(), location_ids)
}
users_set, users_unset = 0, 0
for doc in iter_docs(CommCareUser.get_db(), user_ids):
user = CommCareUser.wrap(doc)
if user.location_id != user.user_data.get('commcare_location_id'):
location = locations.get(user.location_id, None)
if location:
user.set_location(location)
users_set += 1
else:
user.unset_location()
users_unset += 1
print "Set locations on {} users".format(users_set)
print "Unset locations on {} users".format(users_unset)
class Command(BaseCommand):
help = "Deletes the given user"
args = '<user>'
def handle(self, *args, **options):
set_correct_locations()
|
|
ba8f89960a9932ad3da58e7959a74bec1b7ea745
|
backend/breach/migrations/0019_auto_20160729_1449.py
|
backend/breach/migrations/0019_auto_20160729_1449.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-07-29 14:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('breach', '0018_target_samplesize'),
]
operations = [
migrations.AddField(
model_name='round',
name='batch',
            field=models.IntegerField(default=0, help_text='Which batch of the round is currently being attempted. A new batch starts any time samplesets for the round are created, either because the round is starting or because not enough confidence was built.'),
),
migrations.AddField(
model_name='sampleset',
name='batch',
field=models.IntegerField(default=0, help_text='The round batch that this sampleset belongs to.'),
),
migrations.AddField(
model_name='sampleset',
name='records',
field=models.IntegerField(default=0, help_text='The number of records that contain all the data.'),
),
migrations.AddField(
model_name='target',
name='confidence_threshold',
field=models.FloatField(default=1.0, help_text='The threshold that is used for confidence, in order to determine whether a candidate should be chosen.'),
),
migrations.AddField(
model_name='victim',
name='calibration_wait',
field=models.FloatField(default=0.0, help_text='The amount of time in seconds that sniffer should wait so that Scapy has enough time to lock on low-level network resources.'),
),
migrations.AddField(
model_name='victim',
name='recordscardinality',
field=models.IntegerField(default=0, help_text='The amount of expected TLS response records per request. If 0 then the amount is not known or is expected to change per request.'),
),
]
|
Add migration for calibration_wait, confidence_threshold, records, iteration, Victim recordscardinality
|
Add migration for calibration_wait, confidence_threshold, records, iteration, Victim recordscardinality
|
Python
|
mit
|
esarafianou/rupture,dimkarakostas/rupture,dimriou/rupture,dimkarakostas/rupture,esarafianou/rupture,dionyziz/rupture,dionyziz/rupture,esarafianou/rupture,dimriou/rupture,dimkarakostas/rupture,dionyziz/rupture,dimriou/rupture,dionyziz/rupture,dimkarakostas/rupture,dimriou/rupture,dionyziz/rupture,esarafianou/rupture,dimriou/rupture,dimkarakostas/rupture
|
Add migration for calibration_wait, confidence_threshold, records, iteration, Victim recordscardinality
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-07-29 14:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('breach', '0018_target_samplesize'),
]
operations = [
migrations.AddField(
model_name='round',
name='batch',
            field=models.IntegerField(default=0, help_text='Which batch of the round is currently being attempted. A new batch starts any time samplesets for the round are created, either because the round is starting or because not enough confidence was built.'),
),
migrations.AddField(
model_name='sampleset',
name='batch',
field=models.IntegerField(default=0, help_text='The round batch that this sampleset belongs to.'),
),
migrations.AddField(
model_name='sampleset',
name='records',
field=models.IntegerField(default=0, help_text='The number of records that contain all the data.'),
),
migrations.AddField(
model_name='target',
name='confidence_threshold',
field=models.FloatField(default=1.0, help_text='The threshold that is used for confidence, in order to determine whether a candidate should be chosen.'),
),
migrations.AddField(
model_name='victim',
name='calibration_wait',
field=models.FloatField(default=0.0, help_text='The amount of time in seconds that sniffer should wait so that Scapy has enough time to lock on low-level network resources.'),
),
migrations.AddField(
model_name='victim',
name='recordscardinality',
field=models.IntegerField(default=0, help_text='The amount of expected TLS response records per request. If 0 then the amount is not known or is expected to change per request.'),
),
]
|
<commit_before><commit_msg>Add migration for calibration_wait, confidence_threshold, records, iteration, Victim recordscardinality<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-07-29 14:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('breach', '0018_target_samplesize'),
]
operations = [
migrations.AddField(
model_name='round',
name='batch',
            field=models.IntegerField(default=0, help_text='Which batch of the round is currently being attempted. A new batch starts any time samplesets for the round are created, either because the round is starting or because not enough confidence was built.'),
),
migrations.AddField(
model_name='sampleset',
name='batch',
field=models.IntegerField(default=0, help_text='The round batch that this sampleset belongs to.'),
),
migrations.AddField(
model_name='sampleset',
name='records',
field=models.IntegerField(default=0, help_text='The number of records that contain all the data.'),
),
migrations.AddField(
model_name='target',
name='confidence_threshold',
field=models.FloatField(default=1.0, help_text='The threshold that is used for confidence, in order to determine whether a candidate should be chosen.'),
),
migrations.AddField(
model_name='victim',
name='calibration_wait',
field=models.FloatField(default=0.0, help_text='The amount of time in seconds that sniffer should wait so that Scapy has enough time to lock on low-level network resources.'),
),
migrations.AddField(
model_name='victim',
name='recordscardinality',
field=models.IntegerField(default=0, help_text='The amount of expected TLS response records per request. If 0 then the amount is not known or is expected to change per request.'),
),
]
|
Add migration for calibration_wait, confidence_threshold, records, iteration, Victim recordscardinality# -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-07-29 14:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('breach', '0018_target_samplesize'),
]
operations = [
migrations.AddField(
model_name='round',
name='batch',
            field=models.IntegerField(default=0, help_text='Which batch of the round is currently being attempted. A new batch starts any time samplesets for the round are created, either because the round is starting or because not enough confidence was built.'),
),
migrations.AddField(
model_name='sampleset',
name='batch',
field=models.IntegerField(default=0, help_text='The round batch that this sampleset belongs to.'),
),
migrations.AddField(
model_name='sampleset',
name='records',
field=models.IntegerField(default=0, help_text='The number of records that contain all the data.'),
),
migrations.AddField(
model_name='target',
name='confidence_threshold',
field=models.FloatField(default=1.0, help_text='The threshold that is used for confidence, in order to determine whether a candidate should be chosen.'),
),
migrations.AddField(
model_name='victim',
name='calibration_wait',
field=models.FloatField(default=0.0, help_text='The amount of time in seconds that sniffer should wait so that Scapy has enough time to lock on low-level network resources.'),
),
migrations.AddField(
model_name='victim',
name='recordscardinality',
field=models.IntegerField(default=0, help_text='The amount of expected TLS response records per request. If 0 then the amount is not known or is expected to change per request.'),
),
]
|
<commit_before><commit_msg>Add migration for calibration_wait, confidence_threshold, records, iteration, Victim recordscardinality<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-07-29 14:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('breach', '0018_target_samplesize'),
]
operations = [
migrations.AddField(
model_name='round',
name='batch',
            field=models.IntegerField(default=0, help_text='Which batch of the round is currently being attempted. A new batch starts any time samplesets for the round are created, either because the round is starting or because not enough confidence was built.'),
),
migrations.AddField(
model_name='sampleset',
name='batch',
field=models.IntegerField(default=0, help_text='The round batch that this sampleset belongs to.'),
),
migrations.AddField(
model_name='sampleset',
name='records',
field=models.IntegerField(default=0, help_text='The number of records that contain all the data.'),
),
migrations.AddField(
model_name='target',
name='confidence_threshold',
field=models.FloatField(default=1.0, help_text='The threshold that is used for confidence, in order to determine whether a candidate should be chosen.'),
),
migrations.AddField(
model_name='victim',
name='calibration_wait',
field=models.FloatField(default=0.0, help_text='The amount of time in seconds that sniffer should wait so that Scapy has enough time to lock on low-level network resources.'),
),
migrations.AddField(
model_name='victim',
name='recordscardinality',
field=models.IntegerField(default=0, help_text='The amount of expected TLS response records per request. If 0 then the amount is not known or is expected to change per request.'),
),
]
|
|
aaec2e3a742f0c5962508feef09f6fee61167fac
|
benchmarks/expand2.py
|
benchmarks/expand2.py
|
import sys
sys.path.append("..")
from timeit import default_timer as clock
from csympy import var
var("x y z w")
e = (x+y+z+w)**15
f = e*(e+w)
print f
t1 = clock()
g = f.expand()
t2 = clock()
print "Total time:", t2-t1, "s"
|
Add Python version of the benchmark
|
Add Python version of the benchmark
|
Python
|
mit
|
bjodah/symengine.py,bjodah/symengine.py,bjodah/symengine.py,symengine/symengine.py,symengine/symengine.py,symengine/symengine.py
|
Add Python version of the benchmark
|
import sys
sys.path.append("..")
from timeit import default_timer as clock
from csympy import var
var("x y z w")
e = (x+y+z+w)**15
f = e*(e+w)
print f
t1 = clock()
g = f.expand()
t2 = clock()
print "Total time:", t2-t1, "s"
|
<commit_before><commit_msg>Add Python version of the benchmark<commit_after>
|
import sys
sys.path.append("..")
from timeit import default_timer as clock
from csympy import var
var("x y z w")
e = (x+y+z+w)**15
f = e*(e+w)
print f
t1 = clock()
g = f.expand()
t2 = clock()
print "Total time:", t2-t1, "s"
|
Add Python version of the benchmarkimport sys
sys.path.append("..")
from timeit import default_timer as clock
from csympy import var
var("x y z w")
e = (x+y+z+w)**15
f = e*(e+w)
print f
t1 = clock()
g = f.expand()
t2 = clock()
print "Total time:", t2-t1, "s"
|
<commit_before><commit_msg>Add Python version of the benchmark<commit_after>import sys
sys.path.append("..")
from timeit import default_timer as clock
from csympy import var
var("x y z w")
e = (x+y+z+w)**15
f = e*(e+w)
print f
t1 = clock()
g = f.expand()
t2 = clock()
print "Total time:", t2-t1, "s"
|
|
835261d9aaf66ff3d768d68d4a18645c608ba5ce
|
heufybot/modules/__init__.py
|
heufybot/modules/__init__.py
|
from twisted.plugin import pluginPackagePaths
import os
# From txircd:
# https://github.com/ElementalAlchemist/txircd/blob/8832098149b7c5f9b0708efe5c836c8160b0c7e6/txircd/modules/__init__.py
__path__.extend(pluginPackagePaths(__name__))
for directory, subdirs, files in os.walk(os.path.dirname(os.path.realpath(__file__))):
for subdir in subdirs:
__path__.append(os.path.join(directory, subdir)) # Include all module subdirectories in the path
__all__ = []
|
Set up a package for modules
|
Set up a package for modules
|
Python
|
mit
|
Heufneutje/PyHeufyBot,Heufneutje/PyHeufyBot
|
Set up a package for modules
|
from twisted.plugin import pluginPackagePaths
import os
# From txircd:
# https://github.com/ElementalAlchemist/txircd/blob/8832098149b7c5f9b0708efe5c836c8160b0c7e6/txircd/modules/__init__.py
__path__.extend(pluginPackagePaths(__name__))
for directory, subdirs, files in os.walk(os.path.dirname(os.path.realpath(__file__))):
for subdir in subdirs:
__path__.append(os.path.join(directory, subdir)) # Include all module subdirectories in the path
__all__ = []
|
<commit_before><commit_msg>Set up a package for modules<commit_after>
|
from twisted.plugin import pluginPackagePaths
import os
# From txircd:
# https://github.com/ElementalAlchemist/txircd/blob/8832098149b7c5f9b0708efe5c836c8160b0c7e6/txircd/modules/__init__.py
__path__.extend(pluginPackagePaths(__name__))
for directory, subdirs, files in os.walk(os.path.dirname(os.path.realpath(__file__))):
for subdir in subdirs:
__path__.append(os.path.join(directory, subdir)) # Include all module subdirectories in the path
__all__ = []
|
Set up a package for modulesfrom twisted.plugin import pluginPackagePaths
import os
# From txircd:
# https://github.com/ElementalAlchemist/txircd/blob/8832098149b7c5f9b0708efe5c836c8160b0c7e6/txircd/modules/__init__.py
__path__.extend(pluginPackagePaths(__name__))
for directory, subdirs, files in os.walk(os.path.dirname(os.path.realpath(__file__))):
for subdir in subdirs:
__path__.append(os.path.join(directory, subdir)) # Include all module subdirectories in the path
__all__ = []
|
<commit_before><commit_msg>Set up a package for modules<commit_after>from twisted.plugin import pluginPackagePaths
import os
# From txircd:
# https://github.com/ElementalAlchemist/txircd/blob/8832098149b7c5f9b0708efe5c836c8160b0c7e6/txircd/modules/__init__.py
__path__.extend(pluginPackagePaths(__name__))
for directory, subdirs, files in os.walk(os.path.dirname(os.path.realpath(__file__))):
for subdir in subdirs:
__path__.append(os.path.join(directory, subdir)) # Include all module subdirectories in the path
__all__ = []
|
|
c8882e19c8c4b09c91e19cd4cbe3c59e8754a084
|
thinc/neural/tests/unit/Model/test_defaults_no_args.py
|
thinc/neural/tests/unit/Model/test_defaults_no_args.py
|
import pytest
from flexmock import flexmock
from .... import base
@pytest.fixture
def model_with_no_args():
flexmock(base.Model)
base.Model.should_receive('_args2kwargs').and_return({})
base.Model.should_receive('_update_defaults').and_return({})
base.Model.should_receive('setup').and_return(None)
flexmock(base.util)
base.util.should_receive('get_ops').with_args('cpu').and_return('cpu-ops')
model = base.Model()
return model
def test_Model_defaults_to_name_model(model_with_no_args):
assert model_with_no_args.name == 'model'
def test_Model_defaults_to_cpu(model_with_no_args):
assert model_with_no_args.ops == 'cpu-ops'
def test_Model_defaults_to_no_layers(model_with_no_args):
assert model_with_no_args.layers == []
def test_Model_defaults_to_no_param_descriptions(model_with_no_args):
    assert list(model_with_no_args.describe_params) == []
def test_Model_defaults_to_no_output_shape(model_with_no_args):
    assert model_with_no_args.output_shape is None
def test_Model_defaults_to_no_input_shape(model_with_no_args):
    assert model_with_no_args.input_shape is None
def test_Model_defaults_to_0_size(model_with_no_args):
assert model_with_no_args.size == 0
def test_Model_defaults_to_no_params(model_with_no_args):
assert model_with_no_args.params is None
|
Add tests for no-args default on base class.
|
Add tests for no-args default on base class.
|
Python
|
mit
|
spacy-io/thinc,explosion/thinc,spacy-io/thinc,explosion/thinc,explosion/thinc,explosion/thinc,spacy-io/thinc
|
Add tests for no-args default on base class.
|
import pytest
from flexmock import flexmock
from .... import base
@pytest.fixture
def model_with_no_args():
flexmock(base.Model)
base.Model.should_receive('_args2kwargs').and_return({})
base.Model.should_receive('_update_defaults').and_return({})
base.Model.should_receive('setup').and_return(None)
flexmock(base.util)
base.util.should_receive('get_ops').with_args('cpu').and_return('cpu-ops')
model = base.Model()
return model
def test_Model_defaults_to_name_model(model_with_no_args):
assert model_with_no_args.name == 'model'
def test_Model_defaults_to_cpu(model_with_no_args):
assert model_with_no_args.ops == 'cpu-ops'
def test_Model_defaults_to_no_layers(model_with_no_args):
assert model_with_no_args.layers == []
def test_Model_defaults_to_no_param_descriptions(model_with_no_args):
    assert list(model_with_no_args.describe_params) == []
def test_Model_defaults_to_no_output_shape(model_with_no_args):
    assert model_with_no_args.output_shape is None
def test_Model_defaults_to_no_input_shape(model_with_no_args):
    assert model_with_no_args.input_shape is None
def test_Model_defaults_to_0_size(model_with_no_args):
assert model_with_no_args.size == 0
def test_Model_defaults_to_no_params(model_with_no_args):
assert model_with_no_args.params is None
|
<commit_before><commit_msg>Add tests for no-args default on base class.<commit_after>
|
import pytest
from flexmock import flexmock
from .... import base
@pytest.fixture
def model_with_no_args():
flexmock(base.Model)
base.Model.should_receive('_args2kwargs').and_return({})
base.Model.should_receive('_update_defaults').and_return({})
base.Model.should_receive('setup').and_return(None)
flexmock(base.util)
base.util.should_receive('get_ops').with_args('cpu').and_return('cpu-ops')
model = base.Model()
return model
def test_Model_defaults_to_name_model(model_with_no_args):
assert model_with_no_args.name == 'model'
def test_Model_defaults_to_cpu(model_with_no_args):
assert model_with_no_args.ops == 'cpu-ops'
def test_Model_defaults_to_no_layers(model_with_no_args):
assert model_with_no_args.layers == []
def test_Model_defaults_to_no_param_descriptions(model_with_no_args):
    assert list(model_with_no_args.describe_params) == []
def test_Model_defaults_to_no_output_shape(model_with_no_args):
    assert model_with_no_args.output_shape is None
def test_Model_defaults_to_no_input_shape(model_with_no_args):
    assert model_with_no_args.input_shape is None
def test_Model_defaults_to_0_size(model_with_no_args):
assert model_with_no_args.size == 0
def test_Model_defaults_to_no_params(model_with_no_args):
assert model_with_no_args.params is None
|
Add tests for no-args default on base class.import pytest
from flexmock import flexmock
from .... import base
@pytest.fixture
def model_with_no_args():
flexmock(base.Model)
base.Model.should_receive('_args2kwargs').and_return({})
base.Model.should_receive('_update_defaults').and_return({})
base.Model.should_receive('setup').and_return(None)
flexmock(base.util)
base.util.should_receive('get_ops').with_args('cpu').and_return('cpu-ops')
model = base.Model()
return model
def test_Model_defaults_to_name_model(model_with_no_args):
assert model_with_no_args.name == 'model'
def test_Model_defaults_to_cpu(model_with_no_args):
assert model_with_no_args.ops == 'cpu-ops'
def test_Model_defaults_to_no_layers(model_with_no_args):
assert model_with_no_args.layers == []
def test_Model_defaults_to_no_param_descriptions(model_with_no_args):
    assert list(model_with_no_args.describe_params) == []
def test_Model_defaults_to_no_output_shape(model_with_no_args):
    assert model_with_no_args.output_shape is None
def test_Model_defaults_to_no_input_shape(model_with_no_args):
    assert model_with_no_args.input_shape is None
def test_Model_defaults_to_0_size(model_with_no_args):
assert model_with_no_args.size == 0
def test_Model_defaults_to_no_params(model_with_no_args):
assert model_with_no_args.params is None
|
<commit_before><commit_msg>Add tests for no-args default on base class.<commit_after>import pytest
from flexmock import flexmock
from .... import base
@pytest.fixture
def model_with_no_args():
flexmock(base.Model)
base.Model.should_receive('_args2kwargs').and_return({})
base.Model.should_receive('_update_defaults').and_return({})
base.Model.should_receive('setup').and_return(None)
flexmock(base.util)
base.util.should_receive('get_ops').with_args('cpu').and_return('cpu-ops')
model = base.Model()
return model
def test_Model_defaults_to_name_model(model_with_no_args):
assert model_with_no_args.name == 'model'
def test_Model_defaults_to_cpu(model_with_no_args):
assert model_with_no_args.ops == 'cpu-ops'
def test_Model_defaults_to_no_layers(model_with_no_args):
assert model_with_no_args.layers == []
def test_Model_defaults_to_no_param_descriptions(model_with_no_args):
    assert list(model_with_no_args.describe_params) == []
def test_Model_defaults_to_no_output_shape(model_with_no_args):
    assert model_with_no_args.output_shape is None
def test_Model_defaults_to_no_input_shape(model_with_no_args):
    assert model_with_no_args.input_shape is None
def test_Model_defaults_to_0_size(model_with_no_args):
assert model_with_no_args.size == 0
def test_Model_defaults_to_no_params(model_with_no_args):
assert model_with_no_args.params is None
|
|
5e1272c7c442c759116d6f85fc587514ce97b667
|
scripts/list-components.py
|
scripts/list-components.py
|
"""Prints the names of components installed on a WMT executor."""
import os
import yaml
wmt_executor = os.environ['wmt_executor']
cfg_file = 'wmt-config-' + wmt_executor + '.yaml'
try:
with open(cfg_file, 'r') as fp:
cfg = yaml.safe_load(fp)
except IOError:
raise
components = cfg['components'].keys()
for item in components:
print item
|
Add script to print components installed on executor
|
Add script to print components installed on executor
It reads them from the YAML file output from `cmt-config`.
|
Python
|
mit
|
csdms/wmt-metadata
|
Add script to print components installed on executor
It reads them from the YAML file output from `cmt-config`.
|
"""Prints the names of components installed on a WMT executor."""
import os
import yaml
wmt_executor = os.environ['wmt_executor']
cfg_file = 'wmt-config-' + wmt_executor + '.yaml'
try:
with open(cfg_file, 'r') as fp:
cfg = yaml.safe_load(fp)
except IOError:
raise
components = cfg['components'].keys()
for item in components:
print item
|
<commit_before><commit_msg>Add script to print components installed on executor
It reads them from the YAML file output from `cmt-config`.<commit_after>
|
"""Prints the names of components installed on a WMT executor."""
import os
import yaml
wmt_executor = os.environ['wmt_executor']
cfg_file = 'wmt-config-' + wmt_executor + '.yaml'
try:
with open(cfg_file, 'r') as fp:
cfg = yaml.safe_load(fp)
except IOError:
raise
components = cfg['components'].keys()
for item in components:
print item
|
Add script to print components installed on executor
It reads them from the YAML file output from `cmt-config`."""Prints the names of components installed on a WMT executor."""
import os
import yaml
wmt_executor = os.environ['wmt_executor']
cfg_file = 'wmt-config-' + wmt_executor + '.yaml'
try:
with open(cfg_file, 'r') as fp:
cfg = yaml.safe_load(fp)
except IOError:
raise
components = cfg['components'].keys()
for item in components:
print item
|
<commit_before><commit_msg>Add script to print components installed on executor
It reads them from the YAML file output from `cmt-config`.<commit_after>"""Prints the names of components installed on a WMT executor."""
import os
import yaml
wmt_executor = os.environ['wmt_executor']
cfg_file = 'wmt-config-' + wmt_executor + '.yaml'
try:
with open(cfg_file, 'r') as fp:
cfg = yaml.safe_load(fp)
except IOError:
raise
components = cfg['components'].keys()
for item in components:
print item
|
|
595e111542770c8004317a1d739823342a87cedc
|
main.py
|
main.py
|
import praw
from flask import Flask
from flask import request, render_template
from prawoauth2 import PrawOAuth2Mini
from settings import (app_key, app_secret, access_token, refresh_token,
user_agent, scopes)
reddit_client = praw.Reddit(user_agent=user_agent)
oauth_helper = PrawOAuth2Mini(reddit_client, app_key=app_key,
app_secret=app_secret,
access_token=access_token,
refresh_token=refresh_token, scopes=scopes)
app = Flask(__name__)
@app.route('/')
def index():
username = request.values.get('username')
if not username:
return render_template('index.html')
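    # NOTE: get_cake_day is assumed to be a helper defined elsewhere; it is not defined in this file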
cakeday = get_cake_day(username)
return render_template('result.html', redditor=username, cakeday=cakeday)
if __name__ == '__main__':
app.run(debug=True)
|
Add the basic flask app
|
Add the basic flask app
|
Python
|
mit
|
avinassh/kekday,avinassh/kekday
|
Add the basic flask app
|
import praw
from flask import Flask
from flask import request, render_template
from prawoauth2 import PrawOAuth2Mini
from settings import (app_key, app_secret, access_token, refresh_token,
user_agent, scopes)
reddit_client = praw.Reddit(user_agent=user_agent)
oauth_helper = PrawOAuth2Mini(reddit_client, app_key=app_key,
app_secret=app_secret,
access_token=access_token,
refresh_token=refresh_token, scopes=scopes)
app = Flask(__name__)
@app.route('/')
def index():
username = request.values.get('username')
if not username:
return render_template('index.html')
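    # NOTE: get_cake_day is assumed to be a helper defined elsewhere; it is not defined in this file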
cakeday = get_cake_day(username)
return render_template('result.html', redditor=username, cakeday=cakeday)
if __name__ == '__main__':
app.run(debug=True)
|
<commit_before><commit_msg>Add the basic flask app<commit_after>
|
import praw
from flask import Flask
from flask import request, render_template
from prawoauth2 import PrawOAuth2Mini
from settings import (app_key, app_secret, access_token, refresh_token,
user_agent, scopes)
reddit_client = praw.Reddit(user_agent=user_agent)
oauth_helper = PrawOAuth2Mini(reddit_client, app_key=app_key,
app_secret=app_secret,
access_token=access_token,
refresh_token=refresh_token, scopes=scopes)
app = Flask(__name__)
@app.route('/')
def index():
username = request.values.get('username')
if not username:
return render_template('index.html')
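    # NOTE: get_cake_day is assumed to be a helper defined elsewhere; it is not defined in this file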
cakeday = get_cake_day(username)
return render_template('result.html', redditor=username, cakeday=cakeday)
if __name__ == '__main__':
app.run(debug=True)
|
Add the basic flask appimport praw
from flask import Flask
from flask import request, render_template
from prawoauth2 import PrawOAuth2Mini
from settings import (app_key, app_secret, access_token, refresh_token,
user_agent, scopes)
reddit_client = praw.Reddit(user_agent=user_agent)
oauth_helper = PrawOAuth2Mini(reddit_client, app_key=app_key,
app_secret=app_secret,
access_token=access_token,
refresh_token=refresh_token, scopes=scopes)
app = Flask(__name__)
@app.route('/')
def index():
username = request.values.get('username')
if not username:
return render_template('index.html')
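    # NOTE: get_cake_day is assumed to be a helper defined elsewhere; it is not defined in this file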
cakeday = get_cake_day(username)
return render_template('result.html', redditor=username, cakeday=cakeday)
if __name__ == '__main__':
app.run(debug=True)
|
<commit_before><commit_msg>Add the basic flask app<commit_after>import praw
from flask import Flask
from flask import request, render_template
from prawoauth2 import PrawOAuth2Mini
from settings import (app_key, app_secret, access_token, refresh_token,
user_agent, scopes)
reddit_client = praw.Reddit(user_agent=user_agent)
oauth_helper = PrawOAuth2Mini(reddit_client, app_key=app_key,
app_secret=app_secret,
access_token=access_token,
refresh_token=refresh_token, scopes=scopes)
app = Flask(__name__)
@app.route('/')
def index():
username = request.values.get('username')
if not username:
return render_template('index.html')
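    # NOTE: get_cake_day is assumed to be a helper defined elsewhere; it is not defined in this file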
cakeday = get_cake_day(username)
return render_template('result.html', redditor=username, cakeday=cakeday)
if __name__ == '__main__':
app.run(debug=True)
|
|
525be155604d5938ba752f31bbfb7fbf4e61dbd5
|
bin/trim_fasta_alignment.py
|
bin/trim_fasta_alignment.py
|
"""
Trim a fasta alignment file based on the columns with - and then remove sequences that don't have enough coverage.
We assume all the sequences are the same length
"""
import os
import sys
import argparse
def read_fasta(fname):
"""
Read a fasta file and return a hash.
:param fname: The file name to read
:return: dict
"""
seqs = {}
seq = ""
seqid = ""
with open(fname, 'r') as f:
for line in f:
line = line.strip()
if line.startswith(">"):
if "" != seqid:
seqs[seqid] = seq
seqid = line
seq = ""
else:
seq += line
seqs[seqid] = seq
return seqs
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Trim fasta format alignment files based on gaps and lengths')
parser.add_argument('-f', help='fasta file of alignment', required=True)
parser.add_argument('-c', help='minimum coverage of a COLUMN. Default = 0.9. Columns < than this will be removed', default=0.9, type=float)
parser.add_argument('-r', help='minimum coverage of a ROW (sequence). Default = 0.9. Sequences < than this will be removed', default=0.9, type=float)
parser.add_argument('-v', help='verbose output', action='store_true')
args = parser.parse_args()
coltokeep = [] # which columns are we going to output
rowtokeep = [] # which rows do we output
dna = read_fasta(args.f)
ids = list(dna.keys())
numseqs = len(ids)
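    # first pass: keep columns whose non-gap fraction exceeds -c; the second pass below filters rows by -r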
for i in range(len(dna[ids[0]])):
count = 0
for seq in ids:
if '-' != dna[seq][i]:
count += 1
        frac = 1.0 * count/numseqs
        if frac > args.c:
coltokeep.append(i)
if len(coltokeep) == 0:
sys.exit("We did not elect to keep any bases in the alignment!")
finalbases = len(coltokeep)
for seq in ids:
count = 0
for i in coltokeep:
            if '-' != dna[seq][i]:
count += 1
if 1.0 * count/finalbases > args.r:
rowtokeep.append(seq)
for seq in rowtokeep:
print(seq)
for i in coltokeep:
sys.stdout.write(dna[seq][i])
sys.stdout.write("\n")
|
Trim a fasta alignment based on coverage and final sequence length.
|
Trim a fasta alignment based on coverage and final sequence length.
|
Python
|
mit
|
linsalrob/crAssphage,linsalrob/crAssphage,linsalrob/crAssphage
|
Trim a fasta alignment based on coverage and final sequence length.
|
"""
Trim a fasta alignment file based on the columns with - and then remove sequences that don't have enough coverage.
We assume all the sequences are the same length
"""
import os
import sys
import argparse
def read_fasta(fname):
"""
Read a fasta file and return a hash.
:param fname: The file name to read
:return: dict
"""
seqs = {}
seq = ""
seqid = ""
with open(fname, 'r') as f:
for line in f:
line = line.strip()
if line.startswith(">"):
if "" != seqid:
seqs[seqid] = seq
seqid = line
seq = ""
else:
seq += line
seqs[seqid] = seq
return seqs
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Trim fasta format alignment files based on gaps and lengths')
parser.add_argument('-f', help='fasta file of alignment', required=True)
parser.add_argument('-c', help='minimum coverage of a COLUMN. Default = 0.9. Columns < than this will be removed', default=0.9, type=float)
parser.add_argument('-r', help='minimum coverage of a ROW (sequence). Default = 0.9. Sequences < than this will be removed', default=0.9, type=float)
parser.add_argument('-v', help='verbose output', action='store_true')
args = parser.parse_args()
coltokeep = [] # which columns are we going to output
rowtokeep = [] # which rows do we output
dna = read_fasta(args.f)
ids = list(dna.keys())
numseqs = len(ids)
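    # first pass: keep columns whose non-gap fraction exceeds -c; the second pass below filters rows by -r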
for i in range(len(dna[ids[0]])):
count = 0
for seq in ids:
if '-' != dna[seq][i]:
count += 1
        frac = 1.0 * count/numseqs
        if frac > args.c:
coltokeep.append(i)
if len(coltokeep) == 0:
sys.exit("We did not elect to keep any bases in the alignment!")
finalbases = len(coltokeep)
for seq in ids:
count = 0
for i in coltokeep:
            if '-' != dna[seq][i]:
count += 1
if 1.0 * count/finalbases > args.r:
rowtokeep.append(seq)
for seq in rowtokeep:
print(seq)
for i in coltokeep:
sys.stdout.write(dna[seq][i])
sys.stdout.write("\n")
|
<commit_before><commit_msg>Trim a fasta alignment based on coverage and final sequence length.<commit_after>
|
"""
Trim a fasta alignment file based on the columns with - and then remove sequences that don't have enough coverage.
We assume all the sequences are the same length
"""
import os
import sys
import argparse
def read_fasta(fname):
"""
Read a fasta file and return a hash.
:param fname: The file name to read
:return: dict
"""
seqs = {}
seq = ""
seqid = ""
with open(fname, 'r') as f:
for line in f:
line = line.strip()
if line.startswith(">"):
if "" != seqid:
seqs[seqid] = seq
seqid = line
seq = ""
else:
seq += line
seqs[seqid] = seq
return seqs
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Trim fasta format alignment files based on gaps and lengths')
parser.add_argument('-f', help='fasta file of alignment', required=True)
parser.add_argument('-c', help='minimum coverage of a COLUMN. Default = 0.9. Columns < than this will be removed', default=0.9, type=float)
parser.add_argument('-r', help='minimum coverage of a ROW (sequence). Default = 0.9. Sequences < than this will be removed', default=0.9, type=float)
parser.add_argument('-v', help='verbose output', action='store_true')
args = parser.parse_args()
coltokeep = [] # which columns are we going to output
rowtokeep = [] # which rows do we output
dna = read_fasta(args.f)
ids = list(dna.keys())
numseqs = len(ids)
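    # first pass: keep columns whose non-gap fraction exceeds -c; the second pass below filters rows by -r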
for i in range(len(dna[ids[0]])):
count = 0
for seq in ids:
if '-' != dna[seq][i]:
count += 1
        frac = 1.0 * count/numseqs
        if frac > args.c:
coltokeep.append(i)
if len(coltokeep) == 0:
sys.exit("We did not elect to keep any bases in the alignment!")
finalbases = len(coltokeep)
for seq in ids:
count = 0
for i in coltokeep:
            if '-' != dna[seq][i]:
count += 1
if 1.0 * count/finalbases > args.r:
rowtokeep.append(seq)
for seq in rowtokeep:
print(seq)
for i in coltokeep:
sys.stdout.write(dna[seq][i])
sys.stdout.write("\n")
|
Trim a fasta alignment based on coverage and final sequence length."""
Trim a fasta alignment file based on the columns with - and then remove sequences that don't have enough coverage.
We assume all the sequences are the same length
"""
import os
import sys
import argparse
def read_fasta(fname):
"""
Read a fasta file and return a hash.
:param fname: The file name to read
:return: dict
"""
seqs = {}
seq = ""
seqid = ""
with open(fname, 'r') as f:
for line in f:
line = line.strip()
if line.startswith(">"):
if "" != seqid:
seqs[seqid] = seq
seqid = line
seq = ""
else:
seq += line
seqs[seqid] = seq
return seqs
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Trim fasta format alignment files based on gaps and lengths')
parser.add_argument('-f', help='fasta file of alignment', required=True)
parser.add_argument('-c', help='minimum coverage of a COLUMN. Default = 0.9. Columns < than this will be removed', default=0.9, type=float)
parser.add_argument('-r', help='minimum coverage of a ROW (sequence). Default = 0.9. Sequences < than this will be removed', default=0.9, type=float)
parser.add_argument('-v', help='verbose output', action='store_true')
args = parser.parse_args()
coltokeep = [] # which columns are we going to output
rowtokeep = [] # which rows do we output
dna = read_fasta(args.f)
ids = list(dna.keys())
numseqs = len(ids)
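    # first pass: keep columns whose non-gap fraction exceeds -c; the second pass below filters rows by -r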
for i in range(len(dna[ids[0]])):
count = 0
for seq in ids:
if '-' != dna[seq][i]:
count += 1
        frac = 1.0 * count/numseqs
        if frac > args.c:
coltokeep.append(i)
if len(coltokeep) == 0:
sys.exit("We did not elect to keep any bases in the alignment!")
finalbases = len(coltokeep)
for seq in ids:
count = 0
for i in coltokeep:
            if '-' != dna[seq][i]:
count += 1
if 1.0 * count/finalbases > args.r:
rowtokeep.append(seq)
for seq in rowtokeep:
print(seq)
for i in coltokeep:
sys.stdout.write(dna[seq][i])
sys.stdout.write("\n")
|
<commit_before><commit_msg>Trim a fasta alignment based on coverage and final sequence length.<commit_after>"""
Trim a fasta alignment file based on the columns with - and then remove sequences that don't have enough coverage.
We assume all the sequences are the same length
"""
import os
import sys
import argparse
def read_fasta(fname):
"""
Read a fasta file and return a hash.
:param fname: The file name to read
:return: dict
"""
seqs = {}
seq = ""
seqid = ""
with open(fname, 'r') as f:
for line in f:
line = line.strip()
if line.startswith(">"):
if "" != seqid:
seqs[seqid] = seq
seqid = line
seq = ""
else:
seq += line
seqs[seqid] = seq
return seqs
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Trim fasta format alignment files based on gaps and lengths')
parser.add_argument('-f', help='fasta file of alignment', required=True)
parser.add_argument('-c', help='minimum coverage of a COLUMN. Default = 0.9. Columns < than this will be removed', default=0.9, type=float)
parser.add_argument('-r', help='minimum coverage of a ROW (sequence). Default = 0.9. Sequences < than this will be removed', default=0.9, type=float)
parser.add_argument('-v', help='verbose output', action='store_true')
args = parser.parse_args()
coltokeep = [] # which columns are we going to output
rowtokeep = [] # which rows do we output
dna = read_fasta(args.f)
ids = list(dna.keys())
numseqs = len(ids)
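    # first pass: keep columns whose non-gap fraction exceeds -c; the second pass below filters rows by -r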
for i in range(len(dna[ids[0]])):
count = 0
for seq in ids:
if '-' != dna[seq][i]:
count += 1
        frac = 1.0 * count/numseqs
        if frac > args.c:
coltokeep.append(i)
if len(coltokeep) == 0:
sys.exit("We did not elect to keep any bases in the alignment!")
finalbases = len(coltokeep)
for seq in ids:
count = 0
for i in coltokeep:
            if '-' != dna[seq][i]:
count += 1
if 1.0 * count/finalbases > args.r:
rowtokeep.append(seq)
for seq in rowtokeep:
print(seq)
for i in coltokeep:
sys.stdout.write(dna[seq][i])
sys.stdout.write("\n")
|
|
12ee53b4ec4d4fafd7f7af692dda0b0553b82226
|
metpy/io/tests/test_nexrad.py
|
metpy/io/tests/test_nexrad.py
|
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
|
import glob
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
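# nose-style generator test: each yielded (function, filename) pair is run as a separate test case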
def test_generator():
for fname in glob.glob(os.path.join(datadir, 'nids', 'KOUN*')):
yield read_level3_file, fname
def read_level3_file(fname):
Level3File(fname)
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
|
Add parameterized test for nids.
|
Add parameterized test for nids.
|
Python
|
bsd-3-clause
|
ahaberlie/MetPy,Unidata/MetPy,dopplershift/MetPy,deeplycloudy/MetPy,jrleeman/MetPy,dopplershift/MetPy,jrleeman/MetPy,ahill818/MetPy,Unidata/MetPy,ShawnMurd/MetPy,ahaberlie/MetPy
|
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
Add parameterized test for nids.
|
import glob
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
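# nose-style generator test: each yielded (function, filename) pair is run as a separate test case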
def test_generator():
for fname in glob.glob(os.path.join(datadir, 'nids', 'KOUN*')):
yield read_level3_file, fname
def read_level3_file(fname):
Level3File(fname)
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
|
<commit_before>import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
<commit_msg>Add parameterized test for nids.<commit_after>
|
import glob
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
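# nose-style generator test: each yielded (function, filename) pair is run as a separate test case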
def test_generator():
for fname in glob.glob(os.path.join(datadir, 'nids', 'KOUN*')):
yield read_level3_file, fname
def read_level3_file(fname):
Level3File(fname)
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
|
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
Add parameterized test for nids.import glob
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
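# nose-style generator test: each yielded (function, filename) pair is run as a separate test case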
def test_generator():
for fname in glob.glob(os.path.join(datadir, 'nids', 'KOUN*')):
yield read_level3_file, fname
def read_level3_file(fname):
Level3File(fname)
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
|
<commit_before>import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
<commit_msg>Add parameterized test for nids.<commit_after>import glob
import os.path
from numpy.testing import TestCase
from metpy.io.nexrad import Level2File, Level3File
curdir, f = os.path.split(__file__)
datadir = os.path.join(curdir, '../../../examples/testdata')
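# nose-style generator test: each yielded (function, filename) pair is run as a separate test case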
def test_generator():
for fname in glob.glob(os.path.join(datadir, 'nids', 'KOUN*')):
yield read_level3_file, fname
def read_level3_file(fname):
Level3File(fname)
class TestLevel3(TestCase):
def test_basic(self):
Level3File(os.path.join(datadir, 'nids/Level3_FFC_N0Q_20140407_1805.nids'))
class TestLevel2(TestCase):
def test_basic(self):
Level2File(os.path.join(datadir, 'KTLX20130520_190411_V06.gz'))
|
4a57664fee968713aa33fa06b482b775058cef4f
|
steve/tests/test_restapi.py
|
steve/tests/test_restapi.py
|
from unittest import TestCase
from nose.tools import eq_
from steve.restapi import urljoin
class UrlJoinTestCase(TestCase):
def test_urljoin(self):
for base, args, expected in [
('http://localhost', ['path1'],
'http://localhost/path1'),
('http://localhost/path1', ['path2'],
'http://localhost/path1/path2'),
('http://localhost', ['path1', 'path2'],
'http://localhost/path1/path2'),
('http://localhost?foo=bar', ['path1'],
'http://localhost/path1?foo=bar'),
]:
eq_(urljoin(base, *args), expected)
|
Add test case for urljoin
|
Add test case for urljoin
|
Python
|
bsd-2-clause
|
CarlFK/steve,willkg/steve,willkg/steve,pyvideo/steve,CarlFK/steve,CarlFK/steve,pyvideo/steve,willkg/steve,pyvideo/steve
|
Add test case for urljoin
|
from unittest import TestCase
from nose.tools import eq_
from steve.restapi import urljoin
class UrlJoinTestCase(TestCase):
def test_urljoin(self):
for base, args, expected in [
('http://localhost', ['path1'],
'http://localhost/path1'),
('http://localhost/path1', ['path2'],
'http://localhost/path1/path2'),
('http://localhost', ['path1', 'path2'],
'http://localhost/path1/path2'),
('http://localhost?foo=bar', ['path1'],
'http://localhost/path1?foo=bar'),
]:
eq_(urljoin(base, *args), expected)
|
<commit_before><commit_msg>Add test case for urljoin<commit_after>
|
from unittest import TestCase
from nose.tools import eq_
from steve.restapi import urljoin
class UrlJoinTestCase(TestCase):
def test_urljoin(self):
for base, args, expected in [
('http://localhost', ['path1'],
'http://localhost/path1'),
('http://localhost/path1', ['path2'],
'http://localhost/path1/path2'),
('http://localhost', ['path1', 'path2'],
'http://localhost/path1/path2'),
('http://localhost?foo=bar', ['path1'],
'http://localhost/path1?foo=bar'),
]:
eq_(urljoin(base, *args), expected)
|
Add test case for urljoinfrom unittest import TestCase
from nose.tools import eq_
from steve.restapi import urljoin
class UrlJoinTestCase(TestCase):
def test_urljoin(self):
for base, args, expected in [
('http://localhost', ['path1'],
'http://localhost/path1'),
('http://localhost/path1', ['path2'],
'http://localhost/path1/path2'),
('http://localhost', ['path1', 'path2'],
'http://localhost/path1/path2'),
('http://localhost?foo=bar', ['path1'],
'http://localhost/path1?foo=bar'),
]:
eq_(urljoin(base, *args), expected)
|
<commit_before><commit_msg>Add test case for urljoin<commit_after>from unittest import TestCase
from nose.tools import eq_
from steve.restapi import urljoin
class UrlJoinTestCase(TestCase):
def test_urljoin(self):
for base, args, expected in [
('http://localhost', ['path1'],
'http://localhost/path1'),
('http://localhost/path1', ['path2'],
'http://localhost/path1/path2'),
('http://localhost', ['path1', 'path2'],
'http://localhost/path1/path2'),
('http://localhost?foo=bar', ['path1'],
'http://localhost/path1?foo=bar'),
]:
eq_(urljoin(base, *args), expected)
|
|
3511498f98d582afe6e08f8c2f5168ae66489444
|
fluent_contents/plugins/gist/migrations/0001_initial.py
|
fluent_contents/plugins/gist/migrations/0001_initial.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('fluent_contents', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='GistItem',
fields=[
('contentitem_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='fluent_contents.ContentItem')),
('gist_id', models.CharField(help_text='Go to <a href="https://gist.github.com/" target="_blank">https://gist.github.com/</a> and copy the number of the Gist snippet you want to display.', max_length=128, verbose_name='Gist number')),
('filename', models.CharField(help_text='Leave the filename empty to display all files in the Gist.', max_length=128, verbose_name='Gist filename', blank=True)),
],
options={
'verbose_name': 'GitHub Gist snippet',
'verbose_name_plural': 'GitHub Gist snippets',
},
bases=('fluent_contents.contentitem',),
),
]
|
Add Django 1.7 migration for gist plugin
|
Add Django 1.7 migration for gist plugin
|
Python
|
apache-2.0
|
ixc/django-fluent-contents,edoburu/django-fluent-contents,django-fluent/django-fluent-contents,jpotterm/django-fluent-contents,edoburu/django-fluent-contents,jpotterm/django-fluent-contents,jpotterm/django-fluent-contents,edoburu/django-fluent-contents,ixc/django-fluent-contents,django-fluent/django-fluent-contents,ixc/django-fluent-contents,django-fluent/django-fluent-contents
|
Add Django 1.7 migration for gist plugin
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('fluent_contents', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='GistItem',
fields=[
('contentitem_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='fluent_contents.ContentItem')),
('gist_id', models.CharField(help_text='Go to <a href="https://gist.github.com/" target="_blank">https://gist.github.com/</a> and copy the number of the Gist snippet you want to display.', max_length=128, verbose_name='Gist number')),
('filename', models.CharField(help_text='Leave the filename empty to display all files in the Gist.', max_length=128, verbose_name='Gist filename', blank=True)),
],
options={
'verbose_name': 'GitHub Gist snippet',
'verbose_name_plural': 'GitHub Gist snippets',
},
bases=('fluent_contents.contentitem',),
),
]
|
<commit_before><commit_msg>Add Django 1.7 migration for gist plugin<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('fluent_contents', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='GistItem',
fields=[
('contentitem_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='fluent_contents.ContentItem')),
('gist_id', models.CharField(help_text='Go to <a href="https://gist.github.com/" target="_blank">https://gist.github.com/</a> and copy the number of the Gist snippet you want to display.', max_length=128, verbose_name='Gist number')),
('filename', models.CharField(help_text='Leave the filename empty to display all files in the Gist.', max_length=128, verbose_name='Gist filename', blank=True)),
],
options={
'verbose_name': 'GitHub Gist snippet',
'verbose_name_plural': 'GitHub Gist snippets',
},
bases=('fluent_contents.contentitem',),
),
]
|
Add Django 1.7 migration for gist plugin# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('fluent_contents', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='GistItem',
fields=[
('contentitem_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='fluent_contents.ContentItem')),
('gist_id', models.CharField(help_text='Go to <a href="https://gist.github.com/" target="_blank">https://gist.github.com/</a> and copy the number of the Gist snippet you want to display.', max_length=128, verbose_name='Gist number')),
('filename', models.CharField(help_text='Leave the filename empty to display all files in the Gist.', max_length=128, verbose_name='Gist filename', blank=True)),
],
options={
'verbose_name': 'GitHub Gist snippet',
'verbose_name_plural': 'GitHub Gist snippets',
},
bases=('fluent_contents.contentitem',),
),
]
|
<commit_before><commit_msg>Add Django 1.7 migration for gist plugin<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('fluent_contents', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='GistItem',
fields=[
('contentitem_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='fluent_contents.ContentItem')),
('gist_id', models.CharField(help_text='Go to <a href="https://gist.github.com/" target="_blank">https://gist.github.com/</a> and copy the number of the Gist snippet you want to display.', max_length=128, verbose_name='Gist number')),
('filename', models.CharField(help_text='Leave the filename empty to display all files in the Gist.', max_length=128, verbose_name='Gist filename', blank=True)),
],
options={
'verbose_name': 'GitHub Gist snippet',
'verbose_name_plural': 'GitHub Gist snippets',
},
bases=('fluent_contents.contentitem',),
),
]
|
|
b208beafa5e060b4c2effb946f6dfda94aee8423
|
load_data.py
|
load_data.py
|
"""
Creates a nice tidy pickle file of the data in the data/ directory.
"""
import os
import csv
from collections import defaultdict
class Position:
"""
A position for fantasy football.
"""
def __init__(self, title, players=[]):
        if title in ['QB', 'RB', 'WR', 'TE', 'DEF', 'ST', 'K']:
            self.title = title
        else:
            raise Exception("Position title not valid: %s" % title)
# a dictionary keyed on player name for quick lookups
self.players = {}
for player in players:
self.players[player.name] = player
class Player:
"""A player/squad"""
def __init__(self, name, position):
self.name = name
self.position = position
self.stat_categories = []
self.seasons = defaultdict(dict)
# stats is a dictionary keyed on the name of the stat, with a
# value that can be converted to a float
def add_season(self, year, stats):
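        # the first season added fixes the canonical set of stat category names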
if self.stat_categories == []:
for key in stats.iterkeys():
key = self.clean_stat_name(key)
if key:
self.stat_categories.append(key)
for key, val in stats.iteritems():
key = self.clean_stat_name(key)
if key and self.stat_categories and key not in self.stat_categories:
raise Exception("Stat '%s' not in existing categories: %s" % \
(key ,str(self.stat_categories)))
try:
val = float(val)
            except (ValueError, TypeError):
pass # if we can't float it, it's probably text or something
self.seasons[year][key] = val
def clean_stat_name(self, stat_name):
""" for dealing with unruly headers """
stat_name = stat_name.strip()
mapping = {'Rec Tgt':'Targets',
'Tgt':'Targets',
'KR Lng': 'KR Long',
}
if self.position == 'QB':
mapping['YdsL'] = 'Sack Yds'
for key, val in mapping.iteritems():
if stat_name == key:
stat_name = val
if stat_name:
return stat_name
else:
return False
if __name__ == "__main__":
data_root = "./data/"
for subdir, dirs, files in os.walk(data_root):
if not dirs:
year = subdir.split('/')[-1]
for filename in files:
if filename.split('.')[-1].lower() == 'csv':
position = filename.split('.')[0].upper()
with open(os.path.join(subdir,filename),'rU+') as csvfile:
reader=csv.DictReader(csvfile)
for obj in reader:
try:
p = Player(obj["Name"], position)
p.add_season(year, obj)
except KeyError:
p = Player(obj["Team"], position)
p.add_season(year, obj)
a, b = p.position, p.stat_categories
b.sort()
print a, b
|
Add data loader script with a couple classes. Not finished yet...
|
Add data loader script with a couple classes. Not finished yet...
|
Python
|
mit
|
bjlange/revenge,bjlange/revenge
|
Add data loader script with a couple classes. Not finished yet...
|
"""
Creates a nice tidy pickle file of the data in the data/ directory.
"""
import os
import csv
from collections import defaultdict
class Position:
"""
A position for fantasy football.
"""
def __init__(self, title, players=[]):
        if title in ['QB', 'RB', 'WR', 'TE', 'DEF', 'ST', 'K']:
            self.title = title
        else:
            raise Exception("Position name not valid: %s" % title)
# a dictionary keyed on player name for quick lookups
self.players = {}
for player in players:
self.players[player.name] = player
class Player:
"""A player/squad"""
def __init__(self, name, position):
self.name = name
self.position = position
self.stat_categories = []
self.seasons = defaultdict(dict)
# stats is a dictionary keyed on the name of the stat, with a
# value that can be converted to a float
def add_season(self, year, stats):
if self.stat_categories == []:
for key in stats.iterkeys():
key = self.clean_stat_name(key)
if key:
self.stat_categories.append(key)
for key, val in stats.iteritems():
key = self.clean_stat_name(key)
if key and self.stat_categories and key not in self.stat_categories:
raise Exception("Stat '%s' not in existing categories: %s" % \
                    (key, str(self.stat_categories)))
try:
val = float(val)
except:
pass # if we can't float it, it's probably text or something
self.seasons[year][key] = val
def clean_stat_name(self, stat_name):
""" for dealing with unruly headers """
stat_name = stat_name.strip()
mapping = {'Rec Tgt':'Targets',
'Tgt':'Targets',
'KR Lng': 'KR Long',
}
if self.position == 'QB':
mapping['YdsL'] = 'Sack Yds'
for key, val in mapping.iteritems():
if stat_name == key:
stat_name = val
if stat_name:
return stat_name
else:
return False
if __name__ == "__main__":
data_root = "./data/"
for subdir, dirs, files in os.walk(data_root):
if not dirs:
year = subdir.split('/')[-1]
for filename in files:
if filename.split('.')[-1].lower() == 'csv':
position = filename.split('.')[0].upper()
with open(os.path.join(subdir,filename),'rU+') as csvfile:
                        reader = csv.DictReader(csvfile)
for obj in reader:
try:
p = Player(obj["Name"], position)
p.add_season(year, obj)
except KeyError:
p = Player(obj["Team"], position)
p.add_season(year, obj)
a, b = p.position, p.stat_categories
b.sort()
print a, b
|
<commit_before><commit_msg>Add data loader script with a couple classes. Not finished yet...<commit_after>
|
"""
Creates a nice tidy pickle file of the data in the data/ directory.
"""
import os
import csv
from collections import defaultdict
class Position:
"""
A position for fantasy football.
"""
def __init__(self, title, players=[]):
        if title in ['QB', 'RB', 'WR', 'TE', 'DEF', 'ST', 'K']:
            self.title = title
        else:
            raise Exception("Position name not valid: %s" % title)
# a dictionary keyed on player name for quick lookups
self.players = {}
for player in players:
self.players[player.name] = player
class Player:
"""A player/squad"""
def __init__(self, name, position):
self.name = name
self.position = position
self.stat_categories = []
self.seasons = defaultdict(dict)
# stats is a dictionary keyed on the name of the stat, with a
# value that can be converted to a float
def add_season(self, year, stats):
if self.stat_categories == []:
for key in stats.iterkeys():
key = self.clean_stat_name(key)
if key:
self.stat_categories.append(key)
for key, val in stats.iteritems():
key = self.clean_stat_name(key)
if key and self.stat_categories and key not in self.stat_categories:
raise Exception("Stat '%s' not in existing categories: %s" % \
                    (key, str(self.stat_categories)))
try:
val = float(val)
except:
pass # if we can't float it, it's probably text or something
self.seasons[year][key] = val
def clean_stat_name(self, stat_name):
""" for dealing with unruly headers """
stat_name = stat_name.strip()
mapping = {'Rec Tgt':'Targets',
'Tgt':'Targets',
'KR Lng': 'KR Long',
}
if self.position == 'QB':
mapping['YdsL'] = 'Sack Yds'
for key, val in mapping.iteritems():
if stat_name == key:
stat_name = val
if stat_name:
return stat_name
else:
return False
if __name__ == "__main__":
data_root = "./data/"
for subdir, dirs, files in os.walk(data_root):
if not dirs:
year = subdir.split('/')[-1]
for filename in files:
if filename.split('.')[-1].lower() == 'csv':
position = filename.split('.')[0].upper()
with open(os.path.join(subdir,filename),'rU+') as csvfile:
                        reader = csv.DictReader(csvfile)
for obj in reader:
try:
p = Player(obj["Name"], position)
p.add_season(year, obj)
except KeyError:
p = Player(obj["Team"], position)
p.add_season(year, obj)
a, b = p.position, p.stat_categories
b.sort()
print a, b
|
Add data loader script with a couple classes. Not finished yet..."""
Creates a nice tidy pickle file of the data in the data/ directory.
"""
import os
import csv
from collections import defaultdict
class Position:
"""
A position for fantasy football.
"""
def __init__(self, title, players=[]):
        if title in ['QB', 'RB', 'WR', 'TE', 'DEF', 'ST', 'K']:
            self.title = title
        else:
            raise Exception("Position name not valid: %s" % title)
# a dictionary keyed on player name for quick lookups
self.players = {}
for player in players:
self.players[player.name] = player
class Player:
"""A player/squad"""
def __init__(self, name, position):
self.name = name
self.position = position
self.stat_categories = []
self.seasons = defaultdict(dict)
# stats is a dictionary keyed on the name of the stat, with a
# value that can be converted to a float
def add_season(self, year, stats):
if self.stat_categories == []:
for key in stats.iterkeys():
key = self.clean_stat_name(key)
if key:
self.stat_categories.append(key)
for key, val in stats.iteritems():
key = self.clean_stat_name(key)
if key and self.stat_categories and key not in self.stat_categories:
raise Exception("Stat '%s' not in existing categories: %s" % \
                    (key, str(self.stat_categories)))
try:
val = float(val)
except:
pass # if we can't float it, it's probably text or something
self.seasons[year][key] = val
def clean_stat_name(self, stat_name):
""" for dealing with unruly headers """
stat_name = stat_name.strip()
mapping = {'Rec Tgt':'Targets',
'Tgt':'Targets',
'KR Lng': 'KR Long',
}
if self.position == 'QB':
mapping['YdsL'] = 'Sack Yds'
for key, val in mapping.iteritems():
if stat_name == key:
stat_name = val
if stat_name:
return stat_name
else:
return False
if __name__ == "__main__":
data_root = "./data/"
for subdir, dirs, files in os.walk(data_root):
if not dirs:
year = subdir.split('/')[-1]
for filename in files:
if filename.split('.')[-1].lower() == 'csv':
position = filename.split('.')[0].upper()
with open(os.path.join(subdir,filename),'rU+') as csvfile:
                        reader = csv.DictReader(csvfile)
for obj in reader:
try:
p = Player(obj["Name"], position)
p.add_season(year, obj)
except KeyError:
p = Player(obj["Team"], position)
p.add_season(year, obj)
a, b = p.position, p.stat_categories
b.sort()
print a, b
|
<commit_before><commit_msg>Add data loader script with a couple classes. Not finished yet...<commit_after>"""
Creates a nice tidy pickle file of the data in the data/ directory.
"""
import os
import csv
from collections import defaultdict
class Position:
"""
A position for fantasy football.
"""
def __init__(self, title, players=[]):
        if title in ['QB', 'RB', 'WR', 'TE', 'DEF', 'ST', 'K']:
            self.title = title
        else:
            raise Exception("Position name not valid: %s" % title)
# a dictionary keyed on player name for quick lookups
self.players = {}
for player in players:
self.players[player.name] = player
class Player:
"""A player/squad"""
def __init__(self, name, position):
self.name = name
self.position = position
self.stat_categories = []
self.seasons = defaultdict(dict)
# stats is a dictionary keyed on the name of the stat, with a
# value that can be converted to a float
def add_season(self, year, stats):
if self.stat_categories == []:
for key in stats.iterkeys():
key = self.clean_stat_name(key)
if key:
self.stat_categories.append(key)
for key, val in stats.iteritems():
key = self.clean_stat_name(key)
if key and self.stat_categories and key not in self.stat_categories:
raise Exception("Stat '%s' not in existing categories: %s" % \
                    (key, str(self.stat_categories)))
try:
val = float(val)
except:
pass # if we can't float it, it's probably text or something
self.seasons[year][key] = val
def clean_stat_name(self, stat_name):
""" for dealing with unruly headers """
stat_name = stat_name.strip()
mapping = {'Rec Tgt':'Targets',
'Tgt':'Targets',
'KR Lng': 'KR Long',
}
if self.position == 'QB':
mapping['YdsL'] = 'Sack Yds'
for key, val in mapping.iteritems():
if stat_name == key:
stat_name = val
if stat_name:
return stat_name
else:
return False
if __name__ == "__main__":
data_root = "./data/"
for subdir, dirs, files in os.walk(data_root):
if not dirs:
year = subdir.split('/')[-1]
for filename in files:
if filename.split('.')[-1].lower() == 'csv':
position = filename.split('.')[0].upper()
with open(os.path.join(subdir,filename),'rU+') as csvfile:
                        reader = csv.DictReader(csvfile)
for obj in reader:
try:
p = Player(obj["Name"], position)
p.add_season(year, obj)
except KeyError:
p = Player(obj["Team"], position)
p.add_season(year, obj)
a, b = p.position, p.stat_categories
b.sort()
print a, b
|
|
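A minimal usage sketch for the Player class above; the name and stats below are made up, just to show how clean_stat_name remaps headers:

# Hypothetical example (not from the data/ files): 'Tgt' is remapped to
# 'Targets', and 'YdsL' becomes 'Sack Yds' because the position is QB.
p = Player('Example Quarterback', 'QB')
p.add_season('2012', {'Tgt': '10', 'YdsL': '55'})
print p.seasons['2012']   # {'Targets': 10.0, 'Sack Yds': 55.0}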
66eb71614deccf1825f087e56e493cc6215bba92
|
scripts/calculate_lqr_gain.py
|
scripts/calculate_lqr_gain.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import scipy
import control
from dtk.bicycle import benchmark_state_space_vs_speed, benchmark_matrices
def compute_whipple_lqr_gain(velocity):
_, A, B = benchmark_state_space_vs_speed(*benchmark_matrices(), velocity)
Q = np.diag([1e5, 1e3, 1e3, 1e2])
R = np.eye(2)
gains = [control.lqr(Ai, Bi, Q, R)[0] for Ai, Bi in zip(A, B)]
return gains
if __name__ == '__main__':
import sys
v_low = 0 # m/s
if len(sys.argv) > 1:
v_high = int(sys.argv[1])
else:
v_high = 1 # m/s
velocities = [v_low, v_high]
gains = compute_whipple_lqr_gain(velocities)
for v, K in zip(velocities, gains):
        print('computed LQR controller feedback gain for v = {}'.format(v))
print(K)
print()
|
Add script to compute LQR gains
|
Add script to compute LQR gains
|
Python
|
bsd-2-clause
|
oliverlee/phobos,oliverlee/phobos,oliverlee/phobos,oliverlee/phobos
|
Add script to compute LQR gains
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import scipy
import control
from dtk.bicycle import benchmark_state_space_vs_speed, benchmark_matrices
def compute_whipple_lqr_gain(velocity):
_, A, B = benchmark_state_space_vs_speed(*benchmark_matrices(), velocity)
Q = np.diag([1e5, 1e3, 1e3, 1e2])
R = np.eye(2)
gains = [control.lqr(Ai, Bi, Q, R)[0] for Ai, Bi in zip(A, B)]
return gains
if __name__ == '__main__':
import sys
v_low = 0 # m/s
if len(sys.argv) > 1:
v_high = int(sys.argv[1])
else:
v_high = 1 # m/s
velocities = [v_low, v_high]
gains = compute_whipple_lqr_gain(velocities)
for v, K in zip(velocities, gains):
        print('computed LQR controller feedback gain for v = {}'.format(v))
print(K)
print()
|
<commit_before><commit_msg>Add script to compute LQR gains<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import scipy
import control
from dtk.bicycle import benchmark_state_space_vs_speed, benchmark_matrices
def compute_whipple_lqr_gain(velocity):
_, A, B = benchmark_state_space_vs_speed(*benchmark_matrices(), velocity)
Q = np.diag([1e5, 1e3, 1e3, 1e2])
R = np.eye(2)
gains = [control.lqr(Ai, Bi, Q, R)[0] for Ai, Bi in zip(A, B)]
return gains
if __name__ == '__main__':
import sys
v_low = 0 # m/s
if len(sys.argv) > 1:
v_high = int(sys.argv[1])
else:
v_high = 1 # m/s
velocities = [v_low, v_high]
gains = compute_whipple_lqr_gain(velocities)
for v, K in zip(velocities, gains):
        print('computed LQR controller feedback gain for v = {}'.format(v))
print(K)
print()
|
Add script to compute LQR gains#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import scipy
import control
from dtk.bicycle import benchmark_state_space_vs_speed, benchmark_matrices
def compute_whipple_lqr_gain(velocity):
_, A, B = benchmark_state_space_vs_speed(*benchmark_matrices(), velocity)
Q = np.diag([1e5, 1e3, 1e3, 1e2])
R = np.eye(2)
gains = [control.lqr(Ai, Bi, Q, R)[0] for Ai, Bi in zip(A, B)]
return gains
if __name__ == '__main__':
import sys
v_low = 0 # m/s
if len(sys.argv) > 1:
v_high = int(sys.argv[1])
else:
v_high = 1 # m/s
velocities = [v_low, v_high]
gains = compute_whipple_lqr_gain(velocities)
for v, K in zip(velocities, gains):
        print('computed LQR controller feedback gain for v = {}'.format(v))
print(K)
print()
|
<commit_before><commit_msg>Add script to compute LQR gains<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import numpy as np
import scipy
import control
from dtk.bicycle import benchmark_state_space_vs_speed, benchmark_matrices
def compute_whipple_lqr_gain(velocity):
_, A, B = benchmark_state_space_vs_speed(*benchmark_matrices(), velocity)
Q = np.diag([1e5, 1e3, 1e3, 1e2])
R = np.eye(2)
gains = [control.lqr(Ai, Bi, Q, R)[0] for Ai, Bi in zip(A, B)]
return gains
if __name__ == '__main__':
import sys
v_low = 0 # m/s
if len(sys.argv) > 1:
v_high = int(sys.argv[1])
else:
v_high = 1 # m/s
velocities = [v_low, v_high]
gains = compute_whipple_lqr_gain(velocities)
for v, K in zip(velocities, gains):
        print('computed LQR controller feedback gain for v = {}'.format(v))
print(K)
print()
|
|
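A brief usage sketch for compute_whipple_lqr_gain above; the speeds are arbitrary example values, not from the commit:

# Hypothetical sweep: one LQR gain matrix per forward speed (m/s).
speeds = [1.0, 3.0, 5.0]
for v, K in zip(speeds, compute_whipple_lqr_gain(speeds)):
    print('v = {} m/s, gain shape = {}'.format(v, K.shape))  # (2, 4)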
dbf39babaacc8c6407620b7d6d6e5cb568a99940
|
conf_site/proposals/migrations/0003_add_disclaimer.py
|
conf_site/proposals/migrations/0003_add_disclaimer.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.12 on 2017-01-19 05:30
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('proposals', '0002_add_under_represented_population_questions'),
]
operations = [
migrations.AlterField(
model_name='proposal',
name='under_represented_population',
field=models.CharField(choices=[(b'', b'----'), (b'Y', b'Yes'), (b'N', b'No'), (b'O', b'I would prefer not to answer')], default=b'', max_length=1, verbose_name=b"Do you feel that you or your talk represent a population under-represented in the Python and/or Data community? In no way will this data be used as part of your proposal. This will only be used to gather diversity statistics in order to further NumFOCUS' mission."),
),
]
|
Add missing migration from proposals application.
|
Add missing migration from proposals application.
See e4b9588275dda17a70ba13fc3997b7dc20f66f57.
|
Python
|
mit
|
pydata/conf_site,pydata/conf_site,pydata/conf_site
|
Add missing migration from proposals application.
See e4b9588275dda17a70ba13fc3997b7dc20f66f57.
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.12 on 2017-01-19 05:30
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('proposals', '0002_add_under_represented_population_questions'),
]
operations = [
migrations.AlterField(
model_name='proposal',
name='under_represented_population',
field=models.CharField(choices=[(b'', b'----'), (b'Y', b'Yes'), (b'N', b'No'), (b'O', b'I would prefer not to answer')], default=b'', max_length=1, verbose_name=b"Do you feel that you or your talk represent a population under-represented in the Python and/or Data community? In no way will this data be used as part of your proposal. This will only be used to gather diversity statistics in order to further NumFOCUS' mission."),
),
]
|
<commit_before><commit_msg>Add missing migration from proposals application.
See e4b9588275dda17a70ba13fc3997b7dc20f66f57.<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.12 on 2017-01-19 05:30
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('proposals', '0002_add_under_represented_population_questions'),
]
operations = [
migrations.AlterField(
model_name='proposal',
name='under_represented_population',
field=models.CharField(choices=[(b'', b'----'), (b'Y', b'Yes'), (b'N', b'No'), (b'O', b'I would prefer not to answer')], default=b'', max_length=1, verbose_name=b"Do you feel that you or your talk represent a population under-represented in the Python and/or Data community? In no way will this data be used as part of your proposal. This will only be used to gather diversity statistics in order to further NumFOCUS' mission."),
),
]
|
Add missing migration from proposals application.
See e4b9588275dda17a70ba13fc3997b7dc20f66f57.# -*- coding: utf-8 -*-
# Generated by Django 1.9.12 on 2017-01-19 05:30
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('proposals', '0002_add_under_represented_population_questions'),
]
operations = [
migrations.AlterField(
model_name='proposal',
name='under_represented_population',
field=models.CharField(choices=[(b'', b'----'), (b'Y', b'Yes'), (b'N', b'No'), (b'O', b'I would prefer not to answer')], default=b'', max_length=1, verbose_name=b"Do you feel that you or your talk represent a population under-represented in the Python and/or Data community? In no way will this data be used as part of your proposal. This will only be used to gather diversity statistics in order to further NumFOCUS' mission."),
),
]
|
<commit_before><commit_msg>Add missing migration from proposals application.
See e4b9588275dda17a70ba13fc3997b7dc20f66f57.<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.9.12 on 2017-01-19 05:30
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('proposals', '0002_add_under_represented_population_questions'),
]
operations = [
migrations.AlterField(
model_name='proposal',
name='under_represented_population',
field=models.CharField(choices=[(b'', b'----'), (b'Y', b'Yes'), (b'N', b'No'), (b'O', b'I would prefer not to answer')], default=b'', max_length=1, verbose_name=b"Do you feel that you or your talk represent a population under-represented in the Python and/or Data community? In no way will this data be used as part of your proposal. This will only be used to gather diversity statistics in order to further NumFOCUS' mission."),
),
]
|
|
3f8b118eed23434b9784013d88e4f35a4be160ec
|
mom1_grid.py
|
mom1_grid.py
|
#!/usr/bin/env python
from __future__ import print_function
import numpy as np
import netCDF4 as nc
from base_grid import BaseGrid
class Mom1Grid(BaseGrid):
def __init__(self, h_grid_def, v_grid_def=None, mask_file=None,
description='MOM tripolar'):
"""
MOM 1 degree grid.
"""
self.type = 'Arakawa B'
with nc.Dataset(h_grid_def) as f:
# Select points from double density horizontal grid.
# t-cells.
x_t = f.variables['x_T'][:]
y_t = f.variables['y_T'][:]
# u-cells.
x_u = f.variables['x_C'][:]
y_u = f.variables['y_C'][:]
self.area_t = f.variables['area_T'][:]
self.area_u = f.variables['area_C'][:]
self.mask_t = f.variables['wet'][:]
self.mask_u = f.variables['wet'][:]
self.clon_t = f.variables['x_vert_T'][:]
self.clat_t = f.variables['y_vert_T'][:]
self.clon_u = f.variables['x_vert_C'][:]
self.clat_u = f.variables['y_vert_C'][:]
super(Mom1Grid, self).__init__(x_t, y_t, levels=[0], x_u=x_u, y_u=y_u,
description=description)
def set_mask(self):
"""
Mask is read from file above.
"""
pass
def calc_areas(self):
"""
Areas are read from file above.
"""
pass
def make_corners(self):
"""
Corners are read from file above.
"""
|
Add 1 deg MOM grid.
|
Add 1 deg MOM grid.
|
Python
|
apache-2.0
|
DoublePrecision/esmgrids
|
Add 1 deg MOM grid.
|
#!/usr/bin/env python
from __future__ import print_function
import numpy as np
import netCDF4 as nc
from base_grid import BaseGrid
class Mom1Grid(BaseGrid):
def __init__(self, h_grid_def, v_grid_def=None, mask_file=None,
description='MOM tripolar'):
"""
MOM 1 degree grid.
"""
self.type = 'Arakawa B'
with nc.Dataset(h_grid_def) as f:
# Select points from double density horizontal grid.
# t-cells.
x_t = f.variables['x_T'][:]
y_t = f.variables['y_T'][:]
# u-cells.
x_u = f.variables['x_C'][:]
y_u = f.variables['y_C'][:]
self.area_t = f.variables['area_T'][:]
self.area_u = f.variables['area_C'][:]
self.mask_t = f.variables['wet'][:]
self.mask_u = f.variables['wet'][:]
self.clon_t = f.variables['x_vert_T'][:]
self.clat_t = f.variables['y_vert_T'][:]
self.clon_u = f.variables['x_vert_C'][:]
self.clat_u = f.variables['y_vert_C'][:]
super(Mom1Grid, self).__init__(x_t, y_t, levels=[0], x_u=x_u, y_u=y_u,
description=description)
def set_mask(self):
"""
Mask is read from file above.
"""
pass
def calc_areas(self):
"""
Areas are read from file above.
"""
pass
def make_corners(self):
"""
Corners are read from file above.
"""
|
<commit_before><commit_msg>Add 1 deg MOM grid.<commit_after>
|
#!/usr/bin/env python
from __future__ import print_function
import numpy as np
import netCDF4 as nc
from base_grid import BaseGrid
class Mom1Grid(BaseGrid):
def __init__(self, h_grid_def, v_grid_def=None, mask_file=None,
description='MOM tripolar'):
"""
MOM 1 degree grid.
"""
self.type = 'Arakawa B'
with nc.Dataset(h_grid_def) as f:
# Select points from double density horizontal grid.
# t-cells.
x_t = f.variables['x_T'][:]
y_t = f.variables['y_T'][:]
# u-cells.
x_u = f.variables['x_C'][:]
y_u = f.variables['y_C'][:]
self.area_t = f.variables['area_T'][:]
self.area_u = f.variables['area_C'][:]
self.mask_t = f.variables['wet'][:]
self.mask_u = f.variables['wet'][:]
self.clon_t = f.variables['x_vert_T'][:]
self.clat_t = f.variables['y_vert_T'][:]
self.clon_u = f.variables['x_vert_C'][:]
self.clat_u = f.variables['y_vert_C'][:]
super(Mom1Grid, self).__init__(x_t, y_t, levels=[0], x_u=x_u, y_u=y_u,
description=description)
def set_mask(self):
"""
Mask is read from file above.
"""
pass
def calc_areas(self):
"""
Areas are read from file above.
"""
pass
def make_corners(self):
"""
Corners are read from file above.
"""
|
Add 1 deg MOM grid.#!/usr/bin/env python
from __future__ import print_function
import numpy as np
import netCDF4 as nc
from base_grid import BaseGrid
class Mom1Grid(BaseGrid):
def __init__(self, h_grid_def, v_grid_def=None, mask_file=None,
description='MOM tripolar'):
"""
MOM 1 degree grid.
"""
self.type = 'Arakawa B'
with nc.Dataset(h_grid_def) as f:
# Select points from double density horizontal grid.
# t-cells.
x_t = f.variables['x_T'][:]
y_t = f.variables['y_T'][:]
# u-cells.
x_u = f.variables['x_C'][:]
y_u = f.variables['y_C'][:]
self.area_t = f.variables['area_T'][:]
self.area_u = f.variables['area_C'][:]
self.mask_t = f.variables['wet'][:]
self.mask_u = f.variables['wet'][:]
self.clon_t = f.variables['x_vert_T'][:]
self.clat_t = f.variables['y_vert_T'][:]
self.clon_u = f.variables['x_vert_C'][:]
self.clat_u = f.variables['y_vert_C'][:]
super(Mom1Grid, self).__init__(x_t, y_t, levels=[0], x_u=x_u, y_u=y_u,
description=description)
def set_mask(self):
"""
Mask is read from file above.
"""
pass
def calc_areas(self):
"""
Areas are read from file above.
"""
pass
def make_corners(self):
"""
Corners are read from file above.
"""
|
<commit_before><commit_msg>Add 1 deg MOM grid.<commit_after>#!/usr/bin/env python
from __future__ import print_function
import numpy as np
import netCDF4 as nc
from base_grid import BaseGrid
class Mom1Grid(BaseGrid):
def __init__(self, h_grid_def, v_grid_def=None, mask_file=None,
description='MOM tripolar'):
"""
MOM 1 degree grid.
"""
self.type = 'Arakawa B'
with nc.Dataset(h_grid_def) as f:
# Select points from double density horizontal grid.
# t-cells.
x_t = f.variables['x_T'][:]
y_t = f.variables['y_T'][:]
# u-cells.
x_u = f.variables['x_C'][:]
y_u = f.variables['y_C'][:]
self.area_t = f.variables['area_T'][:]
self.area_u = f.variables['area_C'][:]
self.mask_t = f.variables['wet'][:]
self.mask_u = f.variables['wet'][:]
self.clon_t = f.variables['x_vert_T'][:]
self.clat_t = f.variables['y_vert_T'][:]
self.clon_u = f.variables['x_vert_C'][:]
self.clat_u = f.variables['y_vert_C'][:]
super(Mom1Grid, self).__init__(x_t, y_t, levels=[0], x_u=x_u, y_u=y_u,
description=description)
def set_mask(self):
"""
Mask is read from file above.
"""
pass
def calc_areas(self):
"""
Areas are read from file above.
"""
pass
def make_corners(self):
"""
Corners are read from file above.
"""
|
|
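A short instantiation sketch for Mom1Grid above; 'grid_spec.nc' is a placeholder filename, not from the commit:

# Hypothetical usage; needs a netCDF file holding x_T, y_T, x_C, y_C,
# wet, the area fields and the cell-vertex arrays read in __init__.
grid = Mom1Grid('grid_spec.nc')
print(grid.area_t.shape)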
2d59e08a2308340fa48d3634f9d6404231a92446
|
14B-088/HI/analysis/HI_pvslices_thin.py
|
14B-088/HI/analysis/HI_pvslices_thin.py
|
'''
Create a set of thin PV slices
'''
from spectral_cube import SpectralCube, Projection
from astropy.io import fits
from astropy import units as u
import numpy as np
from cube_analysis.disk_pvslices import disk_pvslices
from paths import fourteenB_HI_data_wGBT_path, fourteenB_wGBT_HI_file_dict
from galaxy_params import gal_feath
cube = SpectralCube.read(fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms.fits"))
mom0 = Projection.from_hdu(fits.open(fourteenB_wGBT_HI_file_dict["Moment0"])[0])
thetas = np.arange(0, 180, 5) * u.deg
pv_width = 40 * u.arcsec
max_rad = 9. * u.kpc
save_name = fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms",
no_check=True)
# Run pv slicing
disk_pvslices(cube, gal_feath, thetas, pv_width, max_rad,
save_name=save_name, quicklook=False, mom0=mom0,
save_kwargs=dict(overwrite=True),
save_regions=True)
|
Create set of thin pv-slices across the disk
|
Create set of thin pv-slices across the disk
|
Python
|
mit
|
e-koch/VLA_Lband,e-koch/VLA_Lband
|
Create set of thin pv-slices across the disk
|
'''
Create a set of thin PV slices
'''
from spectral_cube import SpectralCube, Projection
from astropy.io import fits
from astropy import units as u
import numpy as np
from cube_analysis.disk_pvslices import disk_pvslices
from paths import fourteenB_HI_data_wGBT_path, fourteenB_wGBT_HI_file_dict
from galaxy_params import gal_feath
cube = SpectralCube.read(fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms.fits"))
mom0 = Projection.from_hdu(fits.open(fourteenB_wGBT_HI_file_dict["Moment0"])[0])
thetas = np.arange(0, 180, 5) * u.deg
pv_width = 40 * u.arcsec
max_rad = 9. * u.kpc
save_name = fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms",
no_check=True)
# Run pv slicing
disk_pvslices(cube, gal_feath, thetas, pv_width, max_rad,
save_name=save_name, quicklook=False, mom0=mom0,
save_kwargs=dict(overwrite=True),
save_regions=True)
|
<commit_before><commit_msg>Create set of thin pv-slices across the disk<commit_after>
|
'''
Create a set of thin PV slices
'''
from spectral_cube import SpectralCube, Projection
from astropy.io import fits
from astropy import units as u
import numpy as np
from cube_analysis.disk_pvslices import disk_pvslices
from paths import fourteenB_HI_data_wGBT_path, fourteenB_wGBT_HI_file_dict
from galaxy_params import gal_feath
cube = SpectralCube.read(fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms.fits"))
mom0 = Projection.from_hdu(fits.open(fourteenB_wGBT_HI_file_dict["Moment0"])[0])
thetas = np.arange(0, 180, 5) * u.deg
pv_width = 40 * u.arcsec
max_rad = 9. * u.kpc
save_name = fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms",
no_check=True)
# Run pv slicing
disk_pvslices(cube, gal_feath, thetas, pv_width, max_rad,
save_name=save_name, quicklook=False, mom0=mom0,
save_kwargs=dict(overwrite=True),
save_regions=True)
|
Create set of thin pv-slices across the disk
'''
Create a set of thin PV slices
'''
from spectral_cube import SpectralCube, Projection
from astropy.io import fits
from astropy import units as u
import numpy as np
from cube_analysis.disk_pvslices import disk_pvslices
from paths import fourteenB_HI_data_wGBT_path, fourteenB_wGBT_HI_file_dict
from galaxy_params import gal_feath
cube = SpectralCube.read(fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms.fits"))
mom0 = Projection.from_hdu(fits.open(fourteenB_wGBT_HI_file_dict["Moment0"])[0])
thetas = np.arange(0, 180, 5) * u.deg
pv_width = 40 * u.arcsec
max_rad = 9. * u.kpc
save_name = fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms",
no_check=True)
# Run pv slicing
disk_pvslices(cube, gal_feath, thetas, pv_width, max_rad,
save_name=save_name, quicklook=False, mom0=mom0,
save_kwargs=dict(overwrite=True),
save_regions=True)
|
<commit_before><commit_msg>Create set of thin pv-slices across the disk<commit_after>
'''
Create a set of thin PV slices
'''
from spectral_cube import SpectralCube, Projection
from astropy.io import fits
from astropy import units as u
import numpy as np
from cube_analysis.disk_pvslices import disk_pvslices
from paths import fourteenB_HI_data_wGBT_path, fourteenB_wGBT_HI_file_dict
from galaxy_params import gal_feath
cube = SpectralCube.read(fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms.fits"))
mom0 = Projection.from_hdu(fits.open(fourteenB_wGBT_HI_file_dict["Moment0"])[0])
thetas = np.arange(0, 180, 5) * u.deg
pv_width = 40 * u.arcsec
max_rad = 9. * u.kpc
save_name = fourteenB_HI_data_wGBT_path("downsamp_1kms/M33_14B-088_HI.clean.image.GBT_feathered.1kms",
no_check=True)
# Run pv slicing
disk_pvslices(cube, gal_feath, thetas, pv_width, max_rad,
save_name=save_name, quicklook=False, mom0=mom0,
save_kwargs=dict(overwrite=True),
save_regions=True)
|
|
68e57778ceca4b8085959c45dc0d746dddc82489
|
midi/examples/printmidiin.py
|
midi/examples/printmidiin.py
|
import pyb
from midi.midiin import MidiIn
def midi_printer(msg):
print(tuple(msg))
def loop(midiin):
while True:
midiin.poll()
pyb.udelay(500)
uart = pyb.UART(2, 31250)
midiin = MidiIn(uart, callback=midi_printer)
loop(midiin)
|
Add simple midi input example script
|
Add simple midi input example script
|
Python
|
mit
|
SpotlightKid/micropython-stm-lib
|
Add simple midi input example script
|
import pyb
from midi.midiin import MidiIn
def midi_printer(msg):
print(tuple(msg))
def loop(midiin):
while True:
midiin.poll()
pyb.udelay(500)
uart = pyb.UART(2, 31250)
midiin = MidiIn(uart, callback=midi_printer)
loop(midiin)
|
<commit_before><commit_msg>Add simple midi input example script<commit_after>
|
import pyb
from midi.midiin import MidiIn
def midi_printer(msg):
print(tuple(msg))
def loop(midiin):
while True:
midiin.poll()
pyb.udelay(500)
uart = pyb.UART(2, 31250)
midiin = MidiIn(uart, callback=midi_printer)
loop(midiin)
|
Add simple midi input example scriptimport pyb
from midi.midiin import MidiIn
def midi_printer(msg):
print(tuple(msg))
def loop(midiin):
while True:
midiin.poll()
pyb.udelay(500)
uart = pyb.UART(2, 31250)
midiin = MidiIn(uart, callback=midi_printer)
loop(midiin)
|
<commit_before><commit_msg>Add simple midi input example script<commit_after>import pyb
from midi.midiin import MidiIn
def midi_printer(msg):
print(tuple(msg))
def loop(midiin):
while True:
midiin.poll()
pyb.udelay(500)
uart = pyb.UART(2, 31250)
midiin = MidiIn(uart, callback=midi_printer)
loop(midiin)
|
|
236d707697ba0aa68d2549f2c8bc6e4d038dd626
|
cw_model.py
|
cw_model.py
|
class CWModel(object):
def __init__(self, json_dict=None):
if json_dict is not None:
self.__dict__.update(json_dict)
def __repr__(self):
string = ''
for k, v in self.__dict__.items():
string = ''.join([string, '{}: {}\n'.format(k, v)])
return string
|
Add super model class via upload
|
Add super model class via upload
|
Python
|
mit
|
joshuamsmith/ConnectPyse
|
Add super model class via upload
|
class CWModel(object):
def __init__(self, json_dict=None):
if json_dict is not None:
self.__dict__.update(json_dict)
def __repr__(self):
string = ''
for k, v in self.__dict__.items():
string = ''.join([string, '{}: {}\n'.format(k, v)])
return string
|
<commit_before><commit_msg>Add super model class via upload<commit_after>
|
class CWModel(object):
def __init__(self, json_dict=None):
if json_dict is not None:
self.__dict__.update(json_dict)
def __repr__(self):
string = ''
for k, v in self.__dict__.items():
string = ''.join([string, '{}: {}\n'.format(k, v)])
return string
|
Add super model class via upload
class CWModel(object):
def __init__(self, json_dict=None):
if json_dict is not None:
self.__dict__.update(json_dict)
def __repr__(self):
string = ''
for k, v in self.__dict__.items():
string = ''.join([string, '{}: {}\n'.format(k, v)])
return string
|
<commit_before><commit_msg>Add super model class via upload<commit_after>
class CWModel(object):
def __init__(self, json_dict=None):
if json_dict is not None:
self.__dict__.update(json_dict)
def __repr__(self):
string = ''
for k, v in self.__dict__.items():
string = ''.join([string, '{}: {}\n'.format(k, v)])
return string
|
|
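A short usage sketch for the CWModel base class above, with a made-up JSON dict:

# Hypothetical example: attributes come straight from the JSON keys.
m = CWModel({'id': 42, 'summary': 'Example ticket'})
print(repr(m))  # one 'key: value' line per attribute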
efdd5de9ee67814b7c99aaaaf22fedb4578777b9
|
froide/comments/migrations/0002_auto_20210505_1720.py
|
froide/comments/migrations/0002_auto_20210505_1720.py
|
# Generated by Django 3.1.8 on 2021-05-05 15:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('comments', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='froidecomment',
name='is_removed',
field=models.BooleanField(db_index=True, default=False, help_text='Check this box if the comment is inappropriate. A "This comment has been removed" message will be displayed instead.', verbose_name='is removed'),
),
migrations.AlterField(
model_name='froidecomment',
name='object_pk',
field=models.CharField(db_index=True, max_length=64, verbose_name='object ID'),
),
]
|
Add migration from django contrib comments update
|
Add migration from django contrib comments update
|
Python
|
mit
|
fin/froide,fin/froide,fin/froide,fin/froide
|
Add migration from django contrib comments update
|
# Generated by Django 3.1.8 on 2021-05-05 15:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('comments', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='froidecomment',
name='is_removed',
field=models.BooleanField(db_index=True, default=False, help_text='Check this box if the comment is inappropriate. A "This comment has been removed" message will be displayed instead.', verbose_name='is removed'),
),
migrations.AlterField(
model_name='froidecomment',
name='object_pk',
field=models.CharField(db_index=True, max_length=64, verbose_name='object ID'),
),
]
|
<commit_before><commit_msg>Add migration from django contrib comments update<commit_after>
|
# Generated by Django 3.1.8 on 2021-05-05 15:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('comments', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='froidecomment',
name='is_removed',
field=models.BooleanField(db_index=True, default=False, help_text='Check this box if the comment is inappropriate. A "This comment has been removed" message will be displayed instead.', verbose_name='is removed'),
),
migrations.AlterField(
model_name='froidecomment',
name='object_pk',
field=models.CharField(db_index=True, max_length=64, verbose_name='object ID'),
),
]
|
Add migration from django contrib comments update# Generated by Django 3.1.8 on 2021-05-05 15:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('comments', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='froidecomment',
name='is_removed',
field=models.BooleanField(db_index=True, default=False, help_text='Check this box if the comment is inappropriate. A "This comment has been removed" message will be displayed instead.', verbose_name='is removed'),
),
migrations.AlterField(
model_name='froidecomment',
name='object_pk',
field=models.CharField(db_index=True, max_length=64, verbose_name='object ID'),
),
]
|
<commit_before><commit_msg>Add migration from django contrib comments update<commit_after># Generated by Django 3.1.8 on 2021-05-05 15:20
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('comments', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='froidecomment',
name='is_removed',
field=models.BooleanField(db_index=True, default=False, help_text='Check this box if the comment is inappropriate. A "This comment has been removed" message will be displayed instead.', verbose_name='is removed'),
),
migrations.AlterField(
model_name='froidecomment',
name='object_pk',
field=models.CharField(db_index=True, max_length=64, verbose_name='object ID'),
),
]
|
|
c59fdbe03341843cbc9eb23d71c84376e71d55a1
|
conda_env/main_env.py
|
conda_env/main_env.py
|
from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(args, info_dict['envs'])
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
|
from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(info_dict['envs'], not args.json)
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
|
Update to work with the latest iteration
|
Update to work with the latest iteration
|
Python
|
bsd-3-clause
|
ESSS/conda-env,conda/conda-env,conda/conda-env,dan-blanchard/conda-env,asmeurer/conda-env,nicoddemus/conda-env,isaac-kit/conda-env,phobson/conda-env,dan-blanchard/conda-env,asmeurer/conda-env,phobson/conda-env,nicoddemus/conda-env,ESSS/conda-env,mikecroucher/conda-env,mikecroucher/conda-env,isaac-kit/conda-env
|
from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(args, info_dict['envs'])
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
Update to work with the latest iteration
|
from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(info_dict['envs'], not args.json)
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
|
<commit_before>from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(args, info_dict['envs'])
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
<commit_msg>Update to work with the latest iteration<commit_after>
|
from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(info_dict['envs'], not args.json)
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
|
from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(args, info_dict['envs'])
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
Update to work with the latest iterationfrom __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(info_dict['envs'], not args.json)
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
|
<commit_before>from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(args, info_dict['envs'])
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
<commit_msg>Update to work with the latest iteration<commit_after>from __future__ import print_function, division, absolute_import
import argparse
from argparse import RawDescriptionHelpFormatter
import sys
from conda.cli import common
description = """
Handles interacting with Conda environments.
"""
example = """
examples:
conda env list
conda env list --json
"""
def configure_parser():
p = argparse.ArgumentParser()
sub_parsers = p.add_subparsers()
l = sub_parsers.add_parser(
'list',
formatter_class=RawDescriptionHelpFormatter,
description=description,
help=description,
epilog=example,
)
common.add_parser_json(l)
return p
def main():
args = configure_parser().parse_args()
info_dict = {'envs': []}
common.handle_envs_list(info_dict['envs'], not args.json)
if args.json:
common.stdout_json(info_dict)
if __name__ == '__main__':
sys.exit(main())
|
e2c3e511236715390789acc35fd69f0b51f61ae8
|
multiproc-test.py
|
multiproc-test.py
|
import cv2
import CameraReaderAsync
# Initialize
debugMode = True
camera = cv2.VideoCapture(0)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cameraReader = CameraReaderAsync.CameraReaderAsync(camera)
# Main Loop
framesToProcess = 60 * 5
while debugMode or framesToProcess >= 0:
raw = cameraReader.Read()
if raw is not None:
framesToProcess -= 1
if debugMode:
cv2.imshow("raw", raw)
if debugMode:
keyPress = cv2.waitKey(1)
if keyPress == ord("q"):
break
# Cleanup
cameraReader.Stop()
camera.release()
cv2.destroyAllWindows()
|
Test file for multiproc experiments.
|
Test file for multiproc experiments.
|
Python
|
mit
|
AluminatiFRC/Vision2016,AluminatiFRC/Vision2016
|
Test file for multiproc experiments.
|
import cv2
import CameraReaderAsync
# Initialize
debugMode = True
camera = cv2.VideoCapture(0)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cameraReader = CameraReaderAsync.CameraReaderAsync(camera)
# Main Loop
framesToProcess = 60 * 5
while debugMode or framesToProcess >= 0:
raw = cameraReader.Read()
if raw is not None:
framesToProcess -= 1
if debugMode:
cv2.imshow("raw", raw)
if debugMode:
keyPress = cv2.waitKey(1)
if keyPress == ord("q"):
break
# Cleanup
cameraReader.Stop()
camera.release()
cv2.destroyAllWindows()
|
<commit_before><commit_msg>Test file for multiproc experiments.<commit_after>
|
import cv2
import CameraReaderAsync
# Initialize
debugMode = True
camera = cv2.VideoCapture(0)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cameraReader = CameraReaderAsync.CameraReaderAsync(camera)
# Main Loop
framesToProcess = 60 * 5
while debugMode or framesToProcess >= 0:
raw = cameraReader.Read()
if raw is not None:
framesToProcess -= 1
if debugMode:
cv2.imshow("raw", raw)
if debugMode:
keyPress = cv2.waitKey(1)
if keyPress == ord("q"):
break
# Cleanup
cameraReader.Stop()
camera.release()
cv2.destroyAllWindows()
|
Test file for multiproc experiments.import cv2
import CameraReaderAsync
# Initialize
debugMode = True
camera = cv2.VideoCapture(0)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cameraReader = CameraReaderAsync.CameraReaderAsync(camera)
# Main Loop
framesToProcess = 60 * 5
while debugMode or framesToProcess >= 0:
raw = cameraReader.Read()
if raw is not None:
framesToProcess -= 1
if debugMode:
cv2.imshow("raw", raw)
if debugMode:
keyPress = cv2.waitKey(1)
if keyPress == ord("q"):
break
# Cleanup
cameraReader.Stop()
camera.release()
cv2.destroyAllWindows()
|
<commit_before><commit_msg>Test file for multiproc experiments.<commit_after>import cv2
import CameraReaderAsync
# Initialize
debugMode = True
camera = cv2.VideoCapture(0)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 640)
camera.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 480)
cameraReader = CameraReaderAsync.CameraReaderAsync(camera)
# Main Loop
framesToProcess = 60 * 5
while debugMode or framesToProcess >= 0:
raw = cameraReader.Read()
if raw is not None:
framesToProcess -= 1
if debugMode:
cv2.imshow("raw", raw)
if debugMode:
keyPress = cv2.waitKey(1)
if keyPress == ord("q"):
break
# Cleanup
cameraReader.Stop()
camera.release()
cv2.destroyAllWindows()
|
|
6698dcb505fe93bf49f85cebfb5afeffe296118d
|
src/pyvision/data/ml/ml.py
|
src/pyvision/data/ml/ml.py
|
'''
Created on Jan 14, 2010
@author: bolme
'''
import pyvision as pv
import numpy as np
import csv
import os.path
from pyvision.vector.SVM import SVM
from pyvision.vector.LDA import trainLDA
IRIS_PATH = os.path.join(pv.__path__[0],'data','ml','iris.csv')
reader = csv.reader(open(IRIS_PATH, "rb"))
data = []
labels = []
reader.next()
for row in reader:
data_point = map(float, row[1:5])
label = row[5]
data.append(data_point)
labels.append(label)
iris_data = np.array(data)
iris_labels = np.array(labels)
iris_training = np.arange(0,150,2)
iris_testing = iris_training+1
def testSVM():
svm = SVM()
for i in iris_training:
svm.addTraining(labels[i],iris_data[i,:])
svm.train(verbose = 5)
success = 0.0
total = 0.0
for i in iris_testing:
c = svm.predict(iris_data[i,:])
#print c, labels[i]
if c == labels[i]:
success += 1
total += 1
print "SVM Rate:",success/total
def testLDA():
training = iris_data[iris_training,:]
labels = iris_labels[iris_training]
w,v = trainLDA(training,labels)
print -3.5634683*w[:,0],-2.6365924*w[:,1],v/v.sum()
testLDA()
|
Load and test the iris data
|
Load and test the iris data
git-svn-id: f502b99eb1c95a567c3db4e21cd7828f3e46dcfd@123 2a83181e-c148-0410-bf7c-9f1693d9d8f0
|
Python
|
bsd-3-clause
|
svohara/pyvision,svohara/pyvision,tigerking/pyvision,svohara/pyvision,hitdong/pyvision,mikeseven/pyvision,hitdong/pyvision,hitdong/pyvision,mikeseven/pyvision,tigerking/pyvision,tigerking/pyvision,mikeseven/pyvision
|
Load and test the iris data
git-svn-id: f502b99eb1c95a567c3db4e21cd7828f3e46dcfd@123 2a83181e-c148-0410-bf7c-9f1693d9d8f0
|
'''
Created on Jan 14, 2010
@author: bolme
'''
import pyvision as pv
import numpy as np
import csv
import os.path
from pyvision.vector.SVM import SVM
from pyvision.vector.LDA import trainLDA
IRIS_PATH = os.path.join(pv.__path__[0],'data','ml','iris.csv')
reader = csv.reader(open(IRIS_PATH, "rb"))
data = []
labels = []
reader.next()
for row in reader:
data_point = map(float, row[1:5])
label = row[5]
data.append(data_point)
labels.append(label)
iris_data = np.array(data)
iris_labels = np.array(labels)
iris_training = np.arange(0,150,2)
iris_testing = iris_training+1
def testSVM():
svm = SVM()
for i in iris_training:
svm.addTraining(labels[i],iris_data[i,:])
svm.train(verbose = 5)
success = 0.0
total = 0.0
for i in iris_testing:
c = svm.predict(iris_data[i,:])
#print c, labels[i]
if c == labels[i]:
success += 1
total += 1
print "SVM Rate:",success/total
def testLDA():
training = iris_data[iris_training,:]
labels = iris_labels[iris_training]
w,v = trainLDA(training,labels)
print -3.5634683*w[:,0],-2.6365924*w[:,1],v/v.sum()
testLDA()
|
<commit_before><commit_msg>Load and test the iris data
git-svn-id: f502b99eb1c95a567c3db4e21cd7828f3e46dcfd@123 2a83181e-c148-0410-bf7c-9f1693d9d8f0<commit_after>
|
'''
Created on Jan 14, 2010
@author: bolme
'''
import pyvision as pv
import numpy as np
import csv
import os.path
from pyvision.vector.SVM import SVM
from pyvision.vector.LDA import trainLDA
IRIS_PATH = os.path.join(pv.__path__[0],'data','ml','iris.csv')
reader = csv.reader(open(IRIS_PATH, "rb"))
data = []
labels = []
reader.next()
for row in reader:
data_point = map(float, row[1:5])
label = row[5]
data.append(data_point)
labels.append(label)
iris_data = np.array(data)
iris_labels = np.array(labels)
iris_training = np.arange(0,150,2)
iris_testing = iris_training+1
def testSVM():
svm = SVM()
for i in iris_training:
svm.addTraining(labels[i],iris_data[i,:])
svm.train(verbose = 5)
success = 0.0
total = 0.0
for i in iris_testing:
c = svm.predict(iris_data[i,:])
#print c, labels[i]
if c == labels[i]:
success += 1
total += 1
print "SVM Rate:",success/total
def testLDA():
training = iris_data[iris_training,:]
labels = iris_labels[iris_training]
w,v = trainLDA(training,labels)
print -3.5634683*w[:,0],-2.6365924*w[:,1],v/v.sum()
testLDA()
|
Load and test the iris data
git-svn-id: f502b99eb1c95a567c3db4e21cd7828f3e46dcfd@123 2a83181e-c148-0410-bf7c-9f1693d9d8f0'''
Created on Jan 14, 2010
@author: bolme
'''
import pyvision as pv
import numpy as np
import csv
import os.path
from pyvision.vector.SVM import SVM
from pyvision.vector.LDA import trainLDA
IRIS_PATH = os.path.join(pv.__path__[0],'data','ml','iris.csv')
reader = csv.reader(open(IRIS_PATH, "rb"))
data = []
labels = []
reader.next()
for row in reader:
data_point = map(float, row[1:5])
label = row[5]
data.append(data_point)
labels.append(label)
iris_data = np.array(data)
iris_labels = np.array(labels)
iris_training = np.arange(0,150,2)
iris_testing = iris_training+1
def testSVM():
svm = SVM()
for i in iris_training:
svm.addTraining(labels[i],iris_data[i,:])
svm.train(verbose = 5)
success = 0.0
total = 0.0
for i in iris_testing:
c = svm.predict(iris_data[i,:])
#print c, labels[i]
if c == labels[i]:
success += 1
total += 1
print "SVM Rate:",success/total
def testLDA():
training = iris_data[iris_training,:]
labels = iris_labels[iris_training]
w,v = trainLDA(training,labels)
print -3.5634683*w[:,0],-2.6365924*w[:,1],v/v.sum()
testLDA()
|
<commit_before><commit_msg>Load and test the iris data
git-svn-id: f502b99eb1c95a567c3db4e21cd7828f3e46dcfd@123 2a83181e-c148-0410-bf7c-9f1693d9d8f0<commit_after>'''
Created on Jan 14, 2010
@author: bolme
'''
import pyvision as pv
import numpy as np
import csv
import os.path
from pyvision.vector.SVM import SVM
from pyvision.vector.LDA import trainLDA
IRIS_PATH = os.path.join(pv.__path__[0],'data','ml','iris.csv')
reader = csv.reader(open(IRIS_PATH, "rb"))
data = []
labels = []
reader.next()
for row in reader:
data_point = map(float, row[1:5])
label = row[5]
data.append(data_point)
labels.append(label)
iris_data = np.array(data)
iris_labels = np.array(labels)
iris_training = np.arange(0,150,2)
iris_testing = iris_training+1
def testSVM():
svm = SVM()
for i in iris_training:
svm.addTraining(labels[i],iris_data[i,:])
svm.train(verbose = 5)
success = 0.0
total = 0.0
for i in iris_testing:
c = svm.predict(iris_data[i,:])
#print c, labels[i]
if c == labels[i]:
success += 1
total += 1
print "SVM Rate:",success/total
def testLDA():
training = iris_data[iris_training,:]
labels = iris_labels[iris_training]
w,v = trainLDA(training,labels)
print -3.5634683*w[:,0],-2.6365924*w[:,1],v/v.sum()
testLDA()
|
|
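The even/odd index split above is deterministic; a hypothetical shuffled variant (not in the commit) could use numpy alone:

# Hypothetical alternative to the fixed even/odd train/test split.
idx = np.random.permutation(150)
iris_training, iris_testing = idx[:75], idx[75:]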
947890834673ffb29943f8f35843f82de2cc34c5
|
scripts/memory_check.py
|
scripts/memory_check.py
|
"""
This is a convenience script to test the speed and memory usage of Jedi with
large libraries.
Each library is preloaded by jedi, recording the time and memory consumed by
each operation.
You can provide additional libraries via command line arguments.
Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import psutil
import jedi
def used_memory():
"""Return the total MB of System Memory in use."""
return psutil.virtual_memory().used / 2**20
def profile_preload(mod):
"""Preload a module into Jedi, recording time and memory used."""
base = used_memory()
t0 = time.time()
jedi.preload_module(mod)
elapsed = time.time() - t0
used = used_memory() - base
return elapsed, used
def main(mods):
"""Preload the modules, and print the time and memory used."""
t0 = time.time()
baseline = used_memory()
print('Time (s) | Mem (MB) | Package')
print('------------------------------')
for mod in mods:
elapsed, used = profile_preload(mod)
if used > 0:
print('%8.1f | %8d | %s' % (elapsed, used, mod))
print('------------------------------')
elapsed = time.time() - t0
used = used_memory() - baseline
print('%8.1f | %8d | %s' % (elapsed, used, 'Total'))
if __name__ == '__main__':
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
mods += sys.argv[1:]
main(mods)
|
Add a module to test the memory usage with large libraries
|
Add a module to test the memory usage with large libraries
|
Python
|
mit
|
WoLpH/jedi,mfussenegger/jedi,mfussenegger/jedi,flurischt/jedi,jonashaag/jedi,flurischt/jedi,WoLpH/jedi,jonashaag/jedi,tjwei/jedi,tjwei/jedi,dwillmer/jedi,dwillmer/jedi
|
Add a module to test the memory usage with large libraries
|
"""
This is a convenience script to test the speed and memory usage of Jedi with
large libraries.
Each library is preloaded by jedi, recording the time and memory consumed by
each operation.
You can provide additional libraries via command line arguments.
Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import psutil
import jedi
def used_memory():
"""Return the total MB of System Memory in use."""
return psutil.virtual_memory().used / 2**20
def profile_preload(mod):
"""Preload a module into Jedi, recording time and memory used."""
base = used_memory()
t0 = time.time()
jedi.preload_module(mod)
elapsed = time.time() - t0
used = used_memory() - base
return elapsed, used
def main(mods):
"""Preload the modules, and print the time and memory used."""
t0 = time.time()
baseline = used_memory()
print('Time (s) | Mem (MB) | Package')
print('------------------------------')
for mod in mods:
elapsed, used = profile_preload(mod)
if used > 0:
print('%8.1f | %8d | %s' % (elapsed, used, mod))
print('------------------------------')
elapsed = time.time() - t0
used = used_memory() - baseline
print('%8.1f | %8d | %s' % (elapsed, used, 'Total'))
if __name__ == '__main__':
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
mods += sys.argv[1:]
main(mods)
|
<commit_before><commit_msg>Add a module to test the memory usage with large libraries<commit_after>
|
"""
This is a convenience script to test the speed and memory usage of Jedi with
large libraries.
Each library is preloaded by jedi, recording the time and memory consumed by
each operation.
You can provide additional libraries via command line arguments.
Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import psutil
import jedi
def used_memory():
"""Return the total MB of System Memory in use."""
return psutil.virtual_memory().used / 2**20
def profile_preload(mod):
"""Preload a module into Jedi, recording time and memory used."""
base = used_memory()
t0 = time.time()
jedi.preload_module(mod)
elapsed = time.time() - t0
used = used_memory() - base
return elapsed, used
def main(mods):
"""Preload the modules, and print the time and memory used."""
t0 = time.time()
baseline = used_memory()
print('Time (s) | Mem (MB) | Package')
print('------------------------------')
for mod in mods:
elapsed, used = profile_preload(mod)
if used > 0:
print('%8.1f | %8d | %s' % (elapsed, used, mod))
print('------------------------------')
elapsed = time.time() - t0
used = used_memory() - baseline
print('%8.1f | %8d | %s' % (elapsed, used, 'Total'))
if __name__ == '__main__':
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
mods += sys.argv[1:]
main(mods)
|
Add a module to test the memory usage with large libraries"""
This is a convenience script to test the speed and memory usage of Jedi with
large libraries.
Each library is preloaded by jedi, recording the time and memory consumed by
each operation.
You can provide additional libraries via command line arguments.
Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import psutil
import jedi
def used_memory():
"""Return the total MB of System Memory in use."""
return psutil.virtual_memory().used / 2**20
def profile_preload(mod):
"""Preload a module into Jedi, recording time and memory used."""
base = used_memory()
t0 = time.time()
jedi.preload_module(mod)
elapsed = time.time() - t0
used = used_memory() - base
return elapsed, used
def main(mods):
"""Preload the modules, and print the time and memory used."""
t0 = time.time()
baseline = used_memory()
print('Time (s) | Mem (MB) | Package')
print('------------------------------')
for mod in mods:
elapsed, used = profile_preload(mod)
if used > 0:
print('%8.1f | %8d | %s' % (elapsed, used, mod))
print('------------------------------')
elapsed = time.time() - t0
used = used_memory() - baseline
print('%8.1f | %8d | %s' % (elapsed, used, 'Total'))
if __name__ == '__main__':
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
mods += sys.argv[1:]
main(mods)
|
<commit_before><commit_msg>Add a module to test the memory usage with large libraries<commit_after>"""
This is a convenience script to test the speed and memory usage of Jedi with
large libraries.
Each library is preloaded by jedi, recording the time and memory consumed by
each operation.
You can provide additional libraries via command line arguments.
Note: This requires the psutil library, available on PyPI.
"""
import time
import sys
import psutil
import jedi
def used_memory():
"""Return the total MB of System Memory in use."""
return psutil.virtual_memory().used / 2**20
def profile_preload(mod):
"""Preload a module into Jedi, recording time and memory used."""
base = used_memory()
t0 = time.time()
jedi.preload_module(mod)
elapsed = time.time() - t0
used = used_memory() - base
return elapsed, used
def main(mods):
"""Preload the modules, and print the time and memory used."""
t0 = time.time()
baseline = used_memory()
print('Time (s) | Mem (MB) | Package')
print('------------------------------')
for mod in mods:
elapsed, used = profile_preload(mod)
if used > 0:
print('%8.1f | %8d | %s' % (elapsed, used, mod))
print('------------------------------')
elapsed = time.time() - t0
used = used_memory() - baseline
print('%8.1f | %8d | %s' % (elapsed, used, 'Total'))
if __name__ == '__main__':
mods = ['re', 'numpy', 'scipy', 'scipy.sparse', 'scipy.stats',
'wx', 'decimal', 'PyQt4.QtGui', 'PySide.QtGui', 'Tkinter']
mods += sys.argv[1:]
main(mods)
|
|
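The script above reports megabytes by dividing psutil's byte counts by 2**20 and differencing before and after each preload; the same measurement can be sketched around a single import (psutil assumed installed, and note that system-wide memory readings are inherently noisy):
import time
import psutil
before_mb = psutil.virtual_memory().used / 2**20
t0 = time.time()
import json  # stand-in for any heavyweight package
print('%.1f s, %d MB' % (time.time() - t0,
                         psutil.virtual_memory().used / 2**20 - before_mb))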
40fdc4724b9b4bd8471fbd5b44e28ed5d4d3aa43
|
zerver/migrations/0182_set_initial_value_is_private_flag.py
|
zerver/migrations/0182_set_initial_value_is_private_flag.py
|
# -*- coding: utf-8 -*-
import sys
from django.db import migrations
from django.db.backends.postgresql_psycopg2.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
from django.db.models import F
def set_initial_value_of_is_private_flag(
apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
UserMessage = apps.get_model("zerver", "UserMessage")
Message = apps.get_model("zerver", "Message")
if not Message.objects.exists():
return
i = 0
# Total is only used for the progress bar
total = Message.objects.filter(recipient__type__in=[1, 3]).count()
processed = 0
print("\nStart setting initial value for is_private flag...")
sys.stdout.flush()
while True:
range_end = i + 10000
# Can't use [Recipient.PERSONAL, Recipient.HUDDLE] in migration files
message_ids = list(Message.objects.filter(recipient__type__in=[1, 3],
id__gt=i,
id__lte=range_end).values_list("id", flat=True).order_by("id"))
count = UserMessage.objects.filter(message_id__in=message_ids).update(flags=F('flags').bitor(UserMessage.flags.is_private))
if count == 0 and range_end >= Message.objects.last().id:
break
i = range_end
processed += len(message_ids)
percent = round((processed / total) * 100, 2)
print("Processed %s/%s %s%%" % (processed, total, percent))
sys.stdout.flush()
class Migration(migrations.Migration):
atomic = False
dependencies = [
('zerver', '0181_userprofile_change_emojiset'),
]
operations = [
migrations.RunPython(set_initial_value_of_is_private_flag,
reverse_code=migrations.RunPython.noop),
]
|
Set initial value for is_private flags.
|
migrations: Set initial value for is_private flags.
This initializes the is_private flag to have the correct value for
historical messages.
|
Python
|
apache-2.0
|
dhcrzf/zulip,timabbott/zulip,punchagan/zulip,zulip/zulip,kou/zulip,shubhamdhama/zulip,zulip/zulip,punchagan/zulip,tommyip/zulip,rishig/zulip,tommyip/zulip,jackrzhang/zulip,synicalsyntax/zulip,rishig/zulip,synicalsyntax/zulip,brainwane/zulip,brainwane/zulip,eeshangarg/zulip,kou/zulip,andersk/zulip,synicalsyntax/zulip,rishig/zulip,eeshangarg/zulip,rishig/zulip,andersk/zulip,synicalsyntax/zulip,brainwane/zulip,hackerkid/zulip,andersk/zulip,synicalsyntax/zulip,rht/zulip,tommyip/zulip,zulip/zulip,jackrzhang/zulip,eeshangarg/zulip,brainwane/zulip,synicalsyntax/zulip,andersk/zulip,tommyip/zulip,hackerkid/zulip,zulip/zulip,dhcrzf/zulip,showell/zulip,brainwane/zulip,kou/zulip,hackerkid/zulip,tommyip/zulip,showell/zulip,punchagan/zulip,showell/zulip,dhcrzf/zulip,brainwane/zulip,kou/zulip,jackrzhang/zulip,shubhamdhama/zulip,brainwane/zulip,hackerkid/zulip,showell/zulip,showell/zulip,timabbott/zulip,shubhamdhama/zulip,tommyip/zulip,jackrzhang/zulip,timabbott/zulip,tommyip/zulip,timabbott/zulip,synicalsyntax/zulip,kou/zulip,rishig/zulip,andersk/zulip,showell/zulip,rht/zulip,hackerkid/zulip,eeshangarg/zulip,zulip/zulip,zulip/zulip,kou/zulip,hackerkid/zulip,shubhamdhama/zulip,punchagan/zulip,shubhamdhama/zulip,rht/zulip,jackrzhang/zulip,timabbott/zulip,rht/zulip,timabbott/zulip,andersk/zulip,punchagan/zulip,showell/zulip,rht/zulip,kou/zulip,shubhamdhama/zulip,shubhamdhama/zulip,eeshangarg/zulip,dhcrzf/zulip,rishig/zulip,dhcrzf/zulip,rht/zulip,punchagan/zulip,andersk/zulip,dhcrzf/zulip,rishig/zulip,eeshangarg/zulip,punchagan/zulip,hackerkid/zulip,rht/zulip,jackrzhang/zulip,dhcrzf/zulip,eeshangarg/zulip,timabbott/zulip,jackrzhang/zulip,zulip/zulip
|
migrations: Set initial value for is_private flags.
This initializes the is_private flag to have the correct value for
historical messages.
|
# -*- coding: utf-8 -*-
import sys
from django.db import migrations
from django.db.backends.postgresql_psycopg2.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
from django.db.models import F
def set_initial_value_of_is_private_flag(
apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
UserMessage = apps.get_model("zerver", "UserMessage")
Message = apps.get_model("zerver", "Message")
if not Message.objects.exists():
return
i = 0
# Total is only used for the progress bar
total = Message.objects.filter(recipient__type__in=[1, 3]).count()
processed = 0
print("\nStart setting initial value for is_private flag...")
sys.stdout.flush()
while True:
range_end = i + 10000
# Can't use [Recipient.PERSONAL, Recipient.HUDDLE] in migration files
message_ids = list(Message.objects.filter(recipient__type__in=[1, 3],
id__gt=i,
id__lte=range_end).values_list("id", flat=True).order_by("id"))
count = UserMessage.objects.filter(message_id__in=message_ids).update(flags=F('flags').bitor(UserMessage.flags.is_private))
if count == 0 and range_end >= Message.objects.last().id:
break
i = range_end
processed += len(message_ids)
percent = round((processed / total) * 100, 2)
print("Processed %s/%s %s%%" % (processed, total, percent))
sys.stdout.flush()
class Migration(migrations.Migration):
atomic = False
dependencies = [
('zerver', '0181_userprofile_change_emojiset'),
]
operations = [
migrations.RunPython(set_initial_value_of_is_private_flag,
reverse_code=migrations.RunPython.noop),
]
|
<commit_before><commit_msg>migrations: Set initial value for is_private flags.
This initializes the is_private flag to have the correct value for
historical messages.<commit_after>
|
# -*- coding: utf-8 -*-
import sys
from django.db import migrations
from django.db.backends.postgresql_psycopg2.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
from django.db.models import F
def set_initial_value_of_is_private_flag(
apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
UserMessage = apps.get_model("zerver", "UserMessage")
Message = apps.get_model("zerver", "Message")
if not Message.objects.exists():
return
i = 0
# Total is only used for the progress bar
total = Message.objects.filter(recipient__type__in=[1, 3]).count()
processed = 0
print("\nStart setting initial value for is_private flag...")
sys.stdout.flush()
while True:
range_end = i + 10000
# Can't use [Recipient.PERSONAL, Recipient.HUDDLE] in migration files
message_ids = list(Message.objects.filter(recipient__type__in=[1, 3],
id__gt=i,
id__lte=range_end).values_list("id", flat=True).order_by("id"))
count = UserMessage.objects.filter(message_id__in=message_ids).update(flags=F('flags').bitor(UserMessage.flags.is_private))
if count == 0 and range_end >= Message.objects.last().id:
break
i = range_end
processed += len(message_ids)
percent = round((processed / total) * 100, 2)
print("Processed %s/%s %s%%" % (processed, total, percent))
sys.stdout.flush()
class Migration(migrations.Migration):
atomic = False
dependencies = [
('zerver', '0181_userprofile_change_emojiset'),
]
operations = [
migrations.RunPython(set_initial_value_of_is_private_flag,
reverse_code=migrations.RunPython.noop),
]
|
migrations: Set initial value for is_private flags.
This initializes the is_private flag to have the correct value for
historical messages.# -*- coding: utf-8 -*-
import sys
from django.db import migrations
from django.db.backends.postgresql_psycopg2.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
from django.db.models import F
def set_initial_value_of_is_private_flag(
apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
UserMessage = apps.get_model("zerver", "UserMessage")
Message = apps.get_model("zerver", "Message")
if not Message.objects.exists():
return
i = 0
# Total is only used for the progress bar
total = Message.objects.filter(recipient__type__in=[1, 3]).count()
processed = 0
print("\nStart setting initial value for is_private flag...")
sys.stdout.flush()
while True:
range_end = i + 10000
# Can't use [Recipient.PERSONAL, Recipient.HUDDLE] in migration files
message_ids = list(Message.objects.filter(recipient__type__in=[1, 3],
id__gt=i,
id__lte=range_end).values_list("id", flat=True).order_by("id"))
count = UserMessage.objects.filter(message_id__in=message_ids).update(flags=F('flags').bitor(UserMessage.flags.is_private))
if count == 0 and range_end >= Message.objects.last().id:
break
i = range_end
processed += len(message_ids)
percent = round((processed / total) * 100, 2)
print("Processed %s/%s %s%%" % (processed, total, percent))
sys.stdout.flush()
class Migration(migrations.Migration):
atomic = False
dependencies = [
('zerver', '0181_userprofile_change_emojiset'),
]
operations = [
migrations.RunPython(set_initial_value_of_is_private_flag,
reverse_code=migrations.RunPython.noop),
]
|
<commit_before><commit_msg>migrations: Set initial value for is_private flags.
This initializes the is_private flag to have the correct value for
historical messages.<commit_after># -*- coding: utf-8 -*-
import sys
from django.db import migrations
from django.db.backends.postgresql_psycopg2.schema import DatabaseSchemaEditor
from django.db.migrations.state import StateApps
from django.db.models import F
def set_initial_value_of_is_private_flag(
apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
UserMessage = apps.get_model("zerver", "UserMessage")
Message = apps.get_model("zerver", "Message")
if not Message.objects.exists():
return
i = 0
# Total is only used for the progress bar
total = Message.objects.filter(recipient__type__in=[1, 3]).count()
processed = 0
print("\nStart setting initial value for is_private flag...")
sys.stdout.flush()
while True:
range_end = i + 10000
# Can't use [Recipient.PERSONAL, Recipient.HUDDLE] in migration files
message_ids = list(Message.objects.filter(recipient__type__in=[1, 3],
id__gt=i,
id__lte=range_end).values_list("id", flat=True).order_by("id"))
count = UserMessage.objects.filter(message_id__in=message_ids).update(flags=F('flags').bitor(UserMessage.flags.is_private))
if count == 0 and range_end >= Message.objects.last().id:
break
i = range_end
processed += len(message_ids)
percent = round((processed / total) * 100, 2)
print("Processed %s/%s %s%%" % (processed, total, percent))
sys.stdout.flush()
class Migration(migrations.Migration):
atomic = False
dependencies = [
('zerver', '0181_userprofile_change_emojiset'),
]
operations = [
migrations.RunPython(set_initial_value_of_is_private_flag,
reverse_code=migrations.RunPython.noop),
]
|
|
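The loop above sets a flag bit on millions of rows without pulling them into Python: each batch is selected by id range, then updated in SQL through F('flags').bitor(...). A stripped-down sketch of the same pattern, where Msg and Flag are hypothetical stand-ins for the real models and the bit value is illustrative:
from django.db.models import F
# Msg, Flag: hypothetical Django models with id / message_id, flags fields
BATCH = 10000
IS_PRIVATE = 1 << 4  # illustrative bit, not Zulip's actual value
last_id = 0
while True:
    ids = list(Msg.objects.filter(id__gt=last_id).order_by('id')
               .values_list('id', flat=True)[:BATCH])
    if not ids:
        break
    Flag.objects.filter(message_id__in=ids).update(
        flags=F('flags').bitor(IS_PRIVATE))  # bitwise OR runs in SQL
    last_id = ids[-1]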
dbf4d6f0e8b12f5eca5d7ff3e4d0a609d675fccf
|
sierpinski/DXFTurtle.py
|
sierpinski/DXFTurtle.py
|
from turtle import Vec2D
class Turtle(object):
def __init__(self):
self.position = Vec2D(0, 0)
self.direction = Vec2D(1, 0)
self.isDrawing = True
def penup(self):
self.isDrawing = False
def pendown(self):
self.isDrawing = True
def right(self, angle):
        self.direction = self.direction.rotate(-angle)
    def left(self, angle):
        self.direction = self.direction.rotate(angle)
def forward(self, distance):
initialPosition = self.position
self.position += self.direction * distance
if self.isDrawing:
pass
def done():
print """999
DXF create from Python DXFTurtle
0
SECTION
2
HEADER
9
$ACADVER
1
AC1006
9
$INSBASE
10
0.0
20
0.0
30
0.0
9
$EXTMIN
10
0.0
20
0.0
9
$EXTMAX
10
1000.0
20
1000.0
0
ENDSEC"""
|
Add python turtle to DXF converter.
|
Add python turtle to DXF converter.
|
Python
|
mit
|
loic-fejoz/loic-fejoz-fabmoments,loic-fejoz/loic-fejoz-fabmoments,loic-fejoz/loic-fejoz-fabmoments
|
Add python turtle to DXF converter.
|
from turtle import Vec2D
class Turtle(object):
def __init__(self):
self.position = Vec2D(0, 0)
self.direction = Vec2D(1, 0)
self.isDrawing = True
def penup(self):
self.isDrawing = False
def pendown(self):
self.isDrawing = True
def right(self, angle):
        self.direction = self.direction.rotate(-angle)
    def left(self, angle):
        self.direction = self.direction.rotate(angle)
def forward(self, distance):
initialPosition = self.position
self.position += self.direction * distance
if self.isDrawing:
pass
def done():
print """999
DXF create from Python DXFTurtle
0
SECTION
2
HEADER
9
$ACADVER
1
AC1006
9
$INSBASE
10
0.0
20
0.0
30
0.0
9
$EXTMIN
10
0.0
20
0.0
9
$EXTMAX
10
1000.0
20
1000.0
0
ENDSEC"""
|
<commit_before><commit_msg>Add python turtle to DXF converter.<commit_after>
|
from turtle import Vec2D
class Turtle(object):
def __init__(self):
self.position = Vec2D(0, 0)
self.direction = Vec2D(1, 0)
self.isDrawing = True
def penup(self):
self.isDrawing = False
def pendown(self):
self.isDrawing = True
def right(self, angle):
        self.direction = self.direction.rotate(-angle)
    def left(self, angle):
        self.direction = self.direction.rotate(angle)
def forward(self, distance):
initialPosition = self.position
self.position += self.direction * distance
if self.isDrawing:
pass
def done():
print """999
DXF create from Python DXFTurtle
0
SECTION
2
HEADER
9
$ACADVER
1
AC1006
9
$INSBASE
10
0.0
20
0.0
30
0.0
9
$EXTMIN
10
0.0
20
0.0
9
$EXTMAX
10
1000.0
20
1000.0
0
ENDSEC"""
|
Add python turtle to DXF converter.from turtle import Vec2D
class Turtle(object):
def __init__(self):
self.position = Vec2D(0, 0)
self.direction = Vec2D(1, 0)
self.isDrawing = True
def penup(self):
self.isDrawing = False
def pendown(self):
self.isDrawing = True
def right(self, angle):
        self.direction = self.direction.rotate(-angle)
    def left(self, angle):
        self.direction = self.direction.rotate(angle)
def forward(self, distance):
initialPosition = self.position
self.position += self.direction * distance
if self.isDrawing:
pass
def done():
print """999
DXF create from Python DXFTurtle
0
SECTION
2
HEADER
9
$ACADVER
1
AC1006
9
$INSBASE
10
0.0
20
0.0
30
0.0
9
$EXTMIN
10
0.0
20
0.0
9
$EXTMAX
10
1000.0
20
1000.0
0
ENDSEC"""
|
<commit_before><commit_msg>Add python turtle to DXF converter.<commit_after>from turtle import Vec2D
class Turtle(object):
def __init__(self):
self.position = Vec2D(0, 0)
self.direction = Vec2D(1, 0)
self.isDrawing = True
def penup(self):
self.isDrawing = False
def pendown(self):
self.isDrawing = True
def right(self, angle):
        self.direction = self.direction.rotate(-angle)
    def left(self, angle):
        self.direction = self.direction.rotate(angle)
def forward(self, distance):
initialPosition = self.position
self.position += self.direction * distance
if self.isDrawing:
pass
def done():
print """999
DXF create from Python DXFTurtle
0
SECTION
2
HEADER
9
$ACADVER
1
AC1006
9
$INSBASE
10
0.0
20
0.0
30
0.0
9
$EXTMIN
10
0.0
20
0.0
9
$EXTMAX
10
1000.0
20
1000.0
0
ENDSEC"""
|
|
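One detail the class above depends on: turtle.Vec2D is an immutable tuple subclass, so rotate() returns a rotated copy rather than mutating in place, which is why right()/left() rebind self.direction. A quick check:
from turtle import Vec2D
v = Vec2D(1, 0)
v.rotate(90)        # returns a new vector; v is unchanged
assert v == Vec2D(1, 0)
v = v.rotate(90)    # rebinding captures the rotation
print(v)            # approximately (0.00, 1.00)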
dd3bc4102f647ae0b7c5edab8afb69bd88d6445f
|
4.py
|
4.py
|
# http://www.pythonchallenge.com/pc/def/linkedlist.php
import requests, re
nothing = 12345 # Initial nothing value
url = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%d"
req = requests.get(url % nothing)
print req.text
|
Set up requests to get the body of text
|
Set up requests to get the body of text
|
Python
|
mit
|
yarabarla/python-challenge
|
Set up requests to get the body of text
|
# http://www.pythonchallenge.com/pc/def/linkedlist.php
import requests, re
nothing = 12345 # Initial nothing value
url = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%d"
req = requests.get(url % nothing)
print req.text
|
<commit_before><commit_msg>Set up requests to get the body of text<commit_after>
|
# http://www.pythonchallenge.com/pc/def/linkedlist.php
import requests, re
nothing = 12345 # Initial nothing value
url = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%d"
req = requests.get(url % nothing)
print req.text
|
Set up requests to get the body of text# http://www.pythonchallenge.com/pc/def/linkedlist.php
import requests, re
nothing = 12345 # Initial nothing value
url = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%d"
req = requests.get(url % nothing)
print req.text
|
<commit_before><commit_msg>Set up requests to get the body of text<commit_after># http://www.pythonchallenge.com/pc/def/linkedlist.php
import requests, re
nothing = 12345 # Initial nothing value
url = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%d"
req = requests.get(url % nothing)
print req.text
|
|
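The snippet above fetches only the first link; the challenge chains "nothing" values, so one plausible continuation (not part of the commit, and assuming the pages keep the "next nothing is N" phrasing) follows the chain until no number is found:
import re
import requests
url = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=%d"
nothing = 12345
for _ in range(400):  # the chain is a few hundred links long
    body = requests.get(url % nothing).text
    match = re.search(r'next nothing is (\d+)', body)
    if not match:
        print(body)   # the last page carries the answer or a hint
        break
    nothing = int(match.group(1))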
eb9af73993adb3295116f959e89591a3fa722cfd
|
tests/functional/cli/test_cli.py
|
tests/functional/cli/test_cli.py
|
import json
import os
from click.testing import CliRunner
from chalice import cli
def assert_chalice_app_structure_created(dirname):
app_contents = os.listdir(os.path.join(os.getcwd(), dirname))
assert 'app.py' in app_contents
assert 'requirements.txt' in app_contents
assert '.chalice' in app_contents
assert '.gitignore' in app_contents
def test_create_new_project_creates_app():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
# The 'new-project' command creates a directory based on
# the project name
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_create_project_with_prompted_app_name():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, input='testproject')
assert result.exit_code == 0
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_error_raised_if_dir_already_exists():
runner = CliRunner()
with runner.isolated_filesystem():
os.mkdir('testproject')
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 1
assert 'Directory already exists: testproject' in result.output
def test_can_load_project_config_after_project_creation():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
config = cli.load_project_config('testproject')
assert config == {'app_name': 'testproject', 'stage': 'dev'}
def test_default_new_project_adds_index_route():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
app = cli.load_chalice_app('testproject')
assert '/' in app.routes
def test_gen_policy_command_creates_policy():
runner = CliRunner()
with runner.isolated_filesystem():
runner.invoke(cli.new_project, ['testproject'])
os.chdir('testproject')
result = runner.invoke(cli.cli, ['gen-policy'], obj={})
assert result.exit_code == 0
# The output should be valid JSON.
parsed_policy = json.loads(result.output)
# We don't want to validate the specific parts of the policy
# (that's tested elsewhere), but we'll check to make sure
# it looks like a policy document.
assert 'Version' in parsed_policy
assert 'Statement' in parsed_policy
|
Add start of tests for CLI package
|
Add start of tests for CLI package
|
Python
|
apache-2.0
|
freaker2k7/chalice,awslabs/chalice
|
Add start of tests for CLI package
|
import json
import os
from click.testing import CliRunner
from chalice import cli
def assert_chalice_app_structure_created(dirname):
app_contents = os.listdir(os.path.join(os.getcwd(), dirname))
assert 'app.py' in app_contents
assert 'requirements.txt' in app_contents
assert '.chalice' in app_contents
assert '.gitignore' in app_contents
def test_create_new_project_creates_app():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
# The 'new-project' command creates a directory based on
# the project name
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_create_project_with_prompted_app_name():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, input='testproject')
assert result.exit_code == 0
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_error_raised_if_dir_already_exists():
runner = CliRunner()
with runner.isolated_filesystem():
os.mkdir('testproject')
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 1
assert 'Directory already exists: testproject' in result.output
def test_can_load_project_config_after_project_creation():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
config = cli.load_project_config('testproject')
assert config == {'app_name': 'testproject', 'stage': 'dev'}
def test_default_new_project_adds_index_route():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
app = cli.load_chalice_app('testproject')
assert '/' in app.routes
def test_gen_policy_command_creates_policy():
runner = CliRunner()
with runner.isolated_filesystem():
runner.invoke(cli.new_project, ['testproject'])
os.chdir('testproject')
result = runner.invoke(cli.cli, ['gen-policy'], obj={})
assert result.exit_code == 0
# The output should be valid JSON.
parsed_policy = json.loads(result.output)
# We don't want to validate the specific parts of the policy
# (that's tested elsewhere), but we'll check to make sure
# it looks like a policy document.
assert 'Version' in parsed_policy
assert 'Statement' in parsed_policy
|
<commit_before><commit_msg>Add start of tests for CLI package<commit_after>
|
import json
import os
from click.testing import CliRunner
from chalice import cli
def assert_chalice_app_structure_created(dirname):
app_contents = os.listdir(os.path.join(os.getcwd(), dirname))
assert 'app.py' in app_contents
assert 'requirements.txt' in app_contents
assert '.chalice' in app_contents
assert '.gitignore' in app_contents
def test_create_new_project_creates_app():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
# The 'new-project' command creates a directory based on
# the project name
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_create_project_with_prompted_app_name():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, input='testproject')
assert result.exit_code == 0
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_error_raised_if_dir_already_exists():
runner = CliRunner()
with runner.isolated_filesystem():
os.mkdir('testproject')
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 1
assert 'Directory already exists: testproject' in result.output
def test_can_load_project_config_after_project_creation():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
config = cli.load_project_config('testproject')
assert config == {'app_name': 'testproject', 'stage': 'dev'}
def test_default_new_project_adds_index_route():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
app = cli.load_chalice_app('testproject')
assert '/' in app.routes
def test_gen_policy_command_creates_policy():
runner = CliRunner()
with runner.isolated_filesystem():
runner.invoke(cli.new_project, ['testproject'])
os.chdir('testproject')
result = runner.invoke(cli.cli, ['gen-policy'], obj={})
assert result.exit_code == 0
# The output should be valid JSON.
parsed_policy = json.loads(result.output)
# We don't want to validate the specific parts of the policy
# (that's tested elsewhere), but we'll check to make sure
# it looks like a policy document.
assert 'Version' in parsed_policy
assert 'Statement' in parsed_policy
|
Add start of tests for CLI packageimport json
import os
from click.testing import CliRunner
from chalice import cli
def assert_chalice_app_structure_created(dirname):
app_contents = os.listdir(os.path.join(os.getcwd(), dirname))
assert 'app.py' in app_contents
assert 'requirements.txt' in app_contents
assert '.chalice' in app_contents
assert '.gitignore' in app_contents
def test_create_new_project_creates_app():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
# The 'new-project' command creates a directory based on
# the project name
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_create_project_with_prompted_app_name():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, input='testproject')
assert result.exit_code == 0
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_error_raised_if_dir_already_exists():
runner = CliRunner()
with runner.isolated_filesystem():
os.mkdir('testproject')
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 1
assert 'Directory already exists: testproject' in result.output
def test_can_load_project_config_after_project_creation():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
config = cli.load_project_config('testproject')
assert config == {'app_name': 'testproject', 'stage': 'dev'}
def test_default_new_project_adds_index_route():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
app = cli.load_chalice_app('testproject')
assert '/' in app.routes
def test_gen_policy_command_creates_policy():
runner = CliRunner()
with runner.isolated_filesystem():
runner.invoke(cli.new_project, ['testproject'])
os.chdir('testproject')
result = runner.invoke(cli.cli, ['gen-policy'], obj={})
assert result.exit_code == 0
# The output should be valid JSON.
parsed_policy = json.loads(result.output)
# We don't want to validate the specific parts of the policy
# (that's tested elsewhere), but we'll check to make sure
# it looks like a policy document.
assert 'Version' in parsed_policy
assert 'Statement' in parsed_policy
|
<commit_before><commit_msg>Add start of tests for CLI package<commit_after>import json
import os
from click.testing import CliRunner
from chalice import cli
def assert_chalice_app_structure_created(dirname):
app_contents = os.listdir(os.path.join(os.getcwd(), dirname))
assert 'app.py' in app_contents
assert 'requirements.txt' in app_contents
assert '.chalice' in app_contents
assert '.gitignore' in app_contents
def test_create_new_project_creates_app():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
# The 'new-project' command creates a directory based on
# the project name
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_create_project_with_prompted_app_name():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, input='testproject')
assert result.exit_code == 0
assert os.listdir(os.getcwd()) == ['testproject']
assert_chalice_app_structure_created(dirname='testproject')
def test_error_raised_if_dir_already_exists():
runner = CliRunner()
with runner.isolated_filesystem():
os.mkdir('testproject')
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 1
assert 'Directory already exists: testproject' in result.output
def test_can_load_project_config_after_project_creation():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
config = cli.load_project_config('testproject')
assert config == {'app_name': 'testproject', 'stage': 'dev'}
def test_default_new_project_adds_index_route():
runner = CliRunner()
with runner.isolated_filesystem():
result = runner.invoke(cli.new_project, ['testproject'])
assert result.exit_code == 0
app = cli.load_chalice_app('testproject')
assert '/' in app.routes
def test_gen_policy_command_creates_policy():
runner = CliRunner()
with runner.isolated_filesystem():
runner.invoke(cli.new_project, ['testproject'])
os.chdir('testproject')
result = runner.invoke(cli.cli, ['gen-policy'], obj={})
assert result.exit_code == 0
# The output should be valid JSON.
parsed_policy = json.loads(result.output)
# We don't want to validate the specific parts of the policy
# (that's tested elsewhere), but we'll check to make sure
# it looks like a policy document.
assert 'Version' in parsed_policy
assert 'Statement' in parsed_policy
|
|
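Every test above leans on the same two click.testing facilities: CliRunner.invoke() to call a command in-process and isolated_filesystem() to run it inside a throwaway directory. A minimal self-contained example of the pattern, using a hypothetical command rather than chalice's:
import os
import click
from click.testing import CliRunner

@click.command()
@click.argument('name')
def init(name):
    os.mkdir(name)

runner = CliRunner()
with runner.isolated_filesystem():  # chdir into a fresh temp dir
    result = runner.invoke(init, ['demo'])
    assert result.exit_code == 0
    assert os.listdir(os.getcwd()) == ['demo']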
190775d5ea82364ad1baf2471cd453032e9c60de
|
nova/db/sqlalchemy/migrate_repo/versions/094_update_postgresql_sequence_names.py
|
nova/db/sqlalchemy/migrate_repo/versions/094_update_postgresql_sequence_names.py
|
# Copyright (c) 2012 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
# NOTE(dprince): Need to rename the leftover zones stuff and quota_new
# stuff from Essex for PostgreSQL.
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE zones_id_seq RENAME TO cells_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT zones_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT cells_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_new_id_seq RENAME TO quotas_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_new_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE cells_id_seq RENAME TO zones_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT cells_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT zones_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_id_seq RENAME TO quotas_new_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_new_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
|
Update PostgreSQL sequence names for zones/quotas.
|
Update PostgreSQL sequence names for zones/quotas.
Fixes LP Bug #993667.
Fixes LP Bug #993669.
Change-Id: Ifcc33929ced617916bd6613fc941257494f4a99b
|
Python
|
apache-2.0
|
savi-dev/nova,affo/nova,Brocade-OpenSource/OpenStack-DNRM-Nova,bigswitch/nova,cloudbase/nova-virtualbox,alexandrucoman/vbox-nova-driver,JianyuWang/nova,barnsnake351/nova,usc-isi/nova,sebrandon1/nova,tangfeixiong/nova,vladikr/nova_drafts,cyx1231st/nova,dawnpower/nova,mandeepdhami/nova,akash1808/nova,thomasem/nova,josephsuh/extra-specs,apporc/nova,joker946/nova,CEG-FYP-OpenStack/scheduler,qwefi/nova,JianyuWang/nova,usc-isi/extra-specs,ted-gould/nova,kimjaejoong/nova,CiscoSystems/nova,blueboxgroup/nova,gooddata/openstack-nova,NoBodyCam/TftpPxeBootBareMetal,mmnelemane/nova,kimjaejoong/nova,eonpatapon/nova,whitepages/nova,adelina-t/nova,yatinkumbhare/openstack-nova,shahar-stratoscale/nova,klmitch/nova,maheshp/novatest,edulramirez/nova,shail2810/nova,Metaswitch/calico-nova,cloudbase/nova-virtualbox,cloudbase/nova,Yuriy-Leonov/nova,mikalstill/nova,barnsnake351/nova,NeCTAR-RC/nova,OpenAcademy-OpenStack/nova-scheduler,saleemjaveds/https-github.com-openstack-nova,NewpTone/stacklab-nova,j-carpentier/nova,aristanetworks/arista-ovs-nova,josephsuh/extra-specs,blueboxgroup/nova,usc-isi/nova,maoy/zknova,jianghuaw/nova,JioCloud/nova,alaski/nova,TwinkleChawla/nova,rickerc/nova_audit,citrix-openstack-build/nova,shahar-stratoscale/nova,maelnor/nova,angdraug/nova,mikalstill/nova,yosshy/nova,savi-dev/nova,rajalokan/nova,zhimin711/nova,CEG-FYP-OpenStack/scheduler,iuliat/nova,isyippee/nova,ewindisch/nova,TwinkleChawla/nova,alaski/nova,bgxavier/nova,NoBodyCam/TftpPxeBootBareMetal,orbitfp7/nova,mgagne/nova,openstack/nova,fnordahl/nova,varunarya10/nova_test_latest,devendermishrajio/nova_test_latest,openstack/nova,citrix-openstack-build/nova,CloudServer/nova,noironetworks/nova,rahulunair/nova,adelina-t/nova,CCI-MOC/nova,raildo/nova,shootstar/novatest,BeyondTheClouds/nova,tealover/nova,Juniper/nova,alvarolopez/nova,hanlind/nova,Francis-Liu/animated-broccoli,plumgrid/plumgrid-nova,psiwczak/openstack,berrange/nova,watonyweng/nova,gooddata/openstack-nova,tudorvio/nova,aristanetworks/arista-ovs-nova,varunarya10/nova_test_latest,SUSE-Cloud/nova,Yusuke1987/openstack_template,phenoxim/nova,JioCloud/nova,Juniper/nova,jeffrey4l/nova,virtualopensystems/nova,paulmathews/nova,tudorvio/nova,yosshy/nova,NewpTone/stacklab-nova,jianghuaw/nova,BeyondTheClouds/nova,fnordahl/nova,tangfeixiong/nova,sebrandon1/nova,JioCloud/nova_test_latest,rajalokan/nova,bgxavier/nova,cloudbase/nova,belmiromoreira/nova,leilihh/nova,akash1808/nova_test_latest,vmturbo/nova,sridevikoushik31/nova,cloudbau/nova,gspilio/nova,DirectXMan12/nova-hacking,yrobla/nova,apporc/nova,cloudbau/nova,Tehsmash/nova,ted-gould/nova,akash1808/nova,mandeepdhami/nova,petrutlucian94/nova,vmturbo/nova,zaina/nova,fajoy/nova,dims/nova,jianghuaw/nova,houshengbo/nova_vmware_compute_driver,sridevikoushik31/openstack,hanlind/nova,sridevikoushik31/openstack,Tehsmash/nova,klmitch/nova,maoy/zknova,sacharya/nova,DirectXMan12/nova-hacking,j-carpentier/nova,rrader/nova-docker-plugin,Brocade-OpenSource/OpenStack-DNRM-Nova,paulmathews/nova,eharney/nova,psiwczak/openstack,vladikr/nova_drafts,mahak/nova,eonpatapon/nova,virtualopensystems/nova,sacharya/nova,TieWei/nova,devoid/nova,luogangyi/bcec-nova,OpenAcademy-OpenStack/nova-scheduler,redhat-openstack/nova,Triv90/Nova,ruslanloman/nova,spring-week-topos/nova-week,savi-dev/nova,CCI-MOC/nova,NewpTone/stacklab-nova,tealover/nova,takeshineshiro/nova,devoid/nova,mahak/nova,Juniper/nova,cernops/nova,rahulunair/nova,Stavitsky/nova,Metaswitch/calico-nova,devendermishrajio/nova,fajoy/nova,cernops/nova,yrobla/nova,ewindisch/nova,cloudbase/nova,devendermishr
ajio/nova_test_latest,MountainWei/nova,joker946/nova,dawnpower/nova,yrobla/nova,alvarolopez/nova,klmitch/nova,felixma/nova,plumgrid/plumgrid-nova,LoHChina/nova,orbitfp7/nova,rahulunair/nova,TieWei/nova,zhimin711/nova,zzicewind/nova,rrader/nova-docker-plugin,eayunstack/nova,akash1808/nova_test_latest,aristanetworks/arista-ovs-nova,zzicewind/nova,projectcalico/calico-nova,scripnichenko/nova,paulmathews/nova,houshengbo/nova_vmware_compute_driver,cyx1231st/nova,tianweizhang/nova,saleemjaveds/https-github.com-openstack-nova,bclau/nova,maheshp/novatest,rajalokan/nova,MountainWei/nova,sridevikoushik31/nova,gspilio/nova,BeyondTheClouds/nova,mgagne/nova,noironetworks/nova,leilihh/nova,ruslanloman/nova,gspilio/nova,Yuriy-Leonov/nova,klmitch/nova,NeCTAR-RC/nova,viggates/nova,zaina/nova,qwefi/nova,DirectXMan12/nova-hacking,bigswitch/nova,imsplitbit/nova,Francis-Liu/animated-broccoli,thomasem/nova,phenoxim/nova,hanlind/nova,ntt-sic/nova,dstroppa/openstack-smartos-nova-grizzly,rajalokan/nova,watonyweng/nova,ntt-sic/nova,projectcalico/calico-nova,tianweizhang/nova,bclau/nova,mahak/nova,iuliat/nova,eayunstack/nova,fajoy/nova,houshengbo/nova_vmware_compute_driver,badock/nova,badock/nova,viggates/nova,redhat-openstack/nova,Juniper/nova,Stavitsky/nova,SUSE-Cloud/nova,sridevikoushik31/nova,luogangyi/bcec-nova,CloudServer/nova,silenceli/nova,leilihh/novaha,CiscoSystems/nova,petrutlucian94/nova_dev,dstroppa/openstack-smartos-nova-grizzly,usc-isi/extra-specs,double12gzh/nova,gooddata/openstack-nova,tanglei528/nova,vmturbo/nova,isyippee/nova,rickerc/nova_audit,shail2810/nova,NoBodyCam/TftpPxeBootBareMetal,silenceli/nova,imsplitbit/nova,usc-isi/extra-specs,alexandrucoman/vbox-nova-driver,whitepages/nova,mikalstill/nova,cernops/nova,gooddata/openstack-nova,nikesh-mahalka/nova,sebrandon1/nova,felixma/nova,sridevikoushik31/openstack,petrutlucian94/nova,usc-isi/nova,devendermishrajio/nova,double12gzh/nova,scripnichenko/nova,dims/nova,vmturbo/nova,JioCloud/nova_test_latest,jianghuaw/nova,angdraug/nova,maoy/zknova,Triv90/Nova,affo/nova,josephsuh/extra-specs,takeshineshiro/nova,belmiromoreira/nova,psiwczak/openstack,leilihh/novaha,eharney/nova,shootstar/novatest,tanglei528/nova,mmnelemane/nova,spring-week-topos/nova-week,nikesh-mahalka/nova,maelnor/nova,yatinkumbhare/openstack-nova,LoHChina/nova,openstack/nova,berrange/nova,Triv90/Nova,raildo/nova,Yusuke1987/openstack_template,maheshp/novatest,petrutlucian94/nova_dev,edulramirez/nova,dstroppa/openstack-smartos-nova-grizzly,sridevikoushik31/nova,jeffrey4l/nova
|
Update PostgreSQL sequence names for zones/quotas.
Fixes LP Bug #993667.
Fixes LP Bug #993669.
Change-Id: Ifcc33929ced617916bd6613fc941257494f4a99b
|
# Copyright (c) 2012 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
# NOTE(dprince): Need to rename the leftover zones stuff and quota_new
# stuff from Essex for PostgreSQL.
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE zones_id_seq RENAME TO cells_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT zones_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT cells_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_new_id_seq RENAME TO quotas_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_new_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE cells_id_seq RENAME TO zones_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT cells_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT zones_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_id_seq RENAME TO quotas_new_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_new_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
|
<commit_before><commit_msg>Update PostgreSQL sequence names for zones/quotas.
Fixes LP Bug #993667.
Fixes LP Bug #993669.
Change-Id: Ifcc33929ced617916bd6613fc941257494f4a99b<commit_after>
|
# Copyright (c) 2012 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
# NOTE(dprince): Need to rename the leftover zones stuff and quota_new
# stuff from Essex for PostgreSQL.
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE zones_id_seq RENAME TO cells_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT zones_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT cells_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_new_id_seq RENAME TO quotas_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_new_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE cells_id_seq RENAME TO zones_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT cells_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT zones_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_id_seq RENAME TO quotas_new_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_new_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
|
Update PostgreSQL sequence names for zones/quotas.
Fixes LP Bug #993667.
Fixes LP Bug #993669.
Change-Id: Ifcc33929ced617916bd6613fc941257494f4a99b# Copyright (c) 2012 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
# NOTE(dprince): Need to rename the leftover zones stuff and quota_new
# stuff from Essex for PostgreSQL.
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE zones_id_seq RENAME TO cells_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT zones_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT cells_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_new_id_seq RENAME TO quotas_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_new_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE cells_id_seq RENAME TO zones_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT cells_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT zones_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_id_seq RENAME TO quotas_new_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_new_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
|
<commit_before><commit_msg>Update PostgreSQL sequence names for zones/quotas.
Fixes LP Bug #993667.
Fixes LP Bug #993669.
Change-Id: Ifcc33929ced617916bd6613fc941257494f4a99b<commit_after># Copyright (c) 2012 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy import MetaData
def upgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
# NOTE(dprince): Need to rename the leftover zones stuff and quota_new
# stuff from Essex for PostgreSQL.
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE zones_id_seq RENAME TO cells_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT zones_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT cells_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_new_id_seq RENAME TO quotas_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_new_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
def downgrade(migrate_engine):
meta = MetaData()
meta.bind = migrate_engine
if migrate_engine.name == "postgresql":
sql = """ALTER TABLE cells_id_seq RENAME TO zones_id_seq;
ALTER TABLE ONLY cells DROP CONSTRAINT cells_pkey;
ALTER TABLE ONLY cells ADD CONSTRAINT zones_pkey
PRIMARY KEY (id);
ALTER TABLE quotas_id_seq RENAME TO quotas_new_id_seq;
ALTER TABLE ONLY quotas DROP CONSTRAINT quotas_pkey;
ALTER TABLE ONLY quotas ADD CONSTRAINT quotas_new_pkey
PRIMARY KEY (id);"""
migrate_engine.execute(sql)
|
|
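Raw SQL guarded by migrate_engine.name, as above, is the usual escape hatch when a change (here, renaming PostgreSQL sequences) has no portable schema operation; the shape reduces to a sketch like this, with illustrative table and sequence names:
from sqlalchemy import MetaData

def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine
    if migrate_engine.name == "postgresql":
        # dialect-specific DDL; other backends skip this block
        migrate_engine.execute(
            "ALTER TABLE old_thing_id_seq RENAME TO thing_id_seq;")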
ad409c2d3ca715865f0f94dbd6b8384b2fca8323
|
dbaas/workflow/steps/util/region_migration/remove_old_acl.py
|
dbaas/workflow/steps/util/region_migration/remove_old_acl.py
|
# -*- coding: utf-8 -*-
import logging
from util import full_stack
from workflow.steps.util.base import BaseStep
from workflow.exceptions.error_codes import DBAAS_0020
from dbaas_aclapi.tasks import destroy_acl_for
LOG = logging.getLogger(__name__)
class RemoveOldAcl(BaseStep):
def __unicode__(self):
return "Deleting old acls..."
def do(self, workflow_dict):
try:
source_instances = workflow_dict['source_instances']
source_secondary_ips = workflow_dict['source_secondary_ips']
database = workflow_dict['database']
for source_instance in source_instances:
destroy_acl_for(database=database, ip=source_instance.address)
for source_secondary_ip in source_secondary_ips:
destroy_acl_for(database=database, ip=source_secondary_ip.ip)
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
def undo(self, workflow_dict):
LOG.info("Running undo...")
try:
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
|
Add step to delete old acl
|
Add step to delete old acl
|
Python
|
bsd-3-clause
|
globocom/database-as-a-service,globocom/database-as-a-service,globocom/database-as-a-service,globocom/database-as-a-service
|
Add step to delete old acl
|
# -*- coding: utf-8 -*-
import logging
from util import full_stack
from workflow.steps.util.base import BaseStep
from workflow.exceptions.error_codes import DBAAS_0020
from dbaas_aclapi.tasks import destroy_acl_for
LOG = logging.getLogger(__name__)
class RemoveOldAcl(BaseStep):
def __unicode__(self):
return "Deleting old acls..."
def do(self, workflow_dict):
try:
source_instances = workflow_dict['source_instances']
source_secondary_ips = workflow_dict['source_secondary_ips']
database = workflow_dict['database']
for source_instance in source_instances:
destroy_acl_for(database=database, ip=source_instance.address)
for source_secondary_ip in source_secondary_ips:
destroy_acl_for(database=database, ip=source_secondary_ip.ip)
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
def undo(self, workflow_dict):
LOG.info("Running undo...")
try:
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
|
<commit_before><commit_msg>Add step to delete old acl<commit_after>
|
# -*- coding: utf-8 -*-
import logging
from util import full_stack
from workflow.steps.util.base import BaseStep
from workflow.exceptions.error_codes import DBAAS_0020
from dbaas_aclapi.tasks import destroy_acl_for
LOG = logging.getLogger(__name__)
class RemoveOldAcl(BaseStep):
def __unicode__(self):
return "Deleting old acls..."
def do(self, workflow_dict):
try:
source_instances = workflow_dict['source_instances']
source_secondary_ips = workflow_dict['source_secondary_ips']
database = workflow_dict['database']
for source_instance in source_instances:
destroy_acl_for(database=database, ip=source_instance.address)
for source_secondary_ip in source_secondary_ips:
destroy_acl_for(database=database, ip=source_secondary_ip.ip)
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
def undo(self, workflow_dict):
LOG.info("Running undo...")
try:
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
|
Add step to delete old acl# -*- coding: utf-8 -*-
import logging
from util import full_stack
from workflow.steps.util.base import BaseStep
from workflow.exceptions.error_codes import DBAAS_0020
from dbaas_aclapi.tasks import destroy_acl_for
LOG = logging.getLogger(__name__)
class RemoveOldAcl(BaseStep):
def __unicode__(self):
return "Deleting old acls..."
def do(self, workflow_dict):
try:
source_instances = workflow_dict['source_instances']
source_secondary_ips = workflow_dict['source_secondary_ips']
database = workflow_dict['database']
for source_instance in source_instances:
destroy_acl_for(database=database, ip=source_instance.address)
for source_secondary_ip in source_secondary_ips:
destroy_acl_for(database=database, ip=source_secondary_ip.ip)
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
def undo(self, workflow_dict):
LOG.info("Running undo...")
try:
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
|
<commit_before><commit_msg>Add step to delete old acl<commit_after># -*- coding: utf-8 -*-
import logging
from util import full_stack
from workflow.steps.util.base import BaseStep
from workflow.exceptions.error_codes import DBAAS_0020
from dbaas_aclapi.tasks import destroy_acl_for
LOG = logging.getLogger(__name__)
class RemoveOldAcl(BaseStep):
def __unicode__(self):
return "Deleting old acls..."
def do(self, workflow_dict):
try:
source_instances = workflow_dict['source_instances']
source_secondary_ips = workflow_dict['source_secondary_ips']
database = workflow_dict['database']
for source_instance in source_instances:
destroy_acl_for(database=database, ip=source_instance.address)
for source_secondary_ip in source_secondary_ips:
destroy_acl_for(database=database, ip=source_secondary_ip.ip)
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
def undo(self, workflow_dict):
LOG.info("Running undo...")
try:
return True
except Exception:
traceback = full_stack()
workflow_dict['exceptions']['error_codes'].append(DBAAS_0020)
workflow_dict['exceptions']['traceback'].append(traceback)
return False
|
|
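Steps in this workflow all share one contract: do() and undo() return a boolean and record failures into workflow_dict instead of raising, so the engine can decide whether to roll back. A stripped-down illustration of that contract (a plain object standing in for the project's BaseStep):
class NoopStep(object):
    def do(self, workflow_dict):
        try:
            # real work goes here
            return True
        except Exception:
            workflow_dict['exceptions']['traceback'].append('step failed')
            return False
    def undo(self, workflow_dict):
        return True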
f7da89f1a2a24414778b9b53df77cdac3285a4a7
|
API/chat/migrations/0001_squashed_0002_auto_20150707_1647.py
|
API/chat/migrations/0001_squashed_0002_auto_20150707_1647.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
replaces = [(b'chat', '0001_squashed_0008_auto_20150702_1437'), (b'chat', '0002_auto_20150707_1647')]
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
|
Revert "Revert "[HOTFIX] Remove replaces line on 0001_squashed""
|
Revert "Revert "[HOTFIX] Remove replaces line on 0001_squashed""
|
Python
|
mit
|
VitSalis/ting,dionyziz/ting,mbalamat/ting,odyvarv/ting-1,VitSalis/ting,sirodoht/ting,odyvarv/ting-1,sirodoht/ting,VitSalis/ting,sirodoht/ting,gtklocker/ting,dionyziz/ting,odyvarv/ting-1,odyvarv/ting-1,mbalamat/ting,mbalamat/ting,gtklocker/ting,gtklocker/ting,gtklocker/ting,sirodoht/ting,dionyziz/ting,dionyziz/ting,mbalamat/ting,VitSalis/ting
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
replaces = [(b'chat', '0001_squashed_0008_auto_20150702_1437'), (b'chat', '0002_auto_20150707_1647')]
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
Revert "Revert "[HOTFIX] Remove replaces line on 0001_squashed""
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
|
<commit_before># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
replaces = [(b'chat', '0001_squashed_0008_auto_20150702_1437'), (b'chat', '0002_auto_20150707_1647')]
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
<commit_msg>Revert "Revert "[HOTFIX] Remove replaces line on 0001_squashed""<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
replaces = [(b'chat', '0001_squashed_0008_auto_20150702_1437'), (b'chat', '0002_auto_20150707_1647')]
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
Revert "Revert "[HOTFIX] Remove replaces line on 0001_squashed""# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
|
<commit_before># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
replaces = [(b'chat', '0001_squashed_0008_auto_20150702_1437'), (b'chat', '0002_auto_20150707_1647')]
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
<commit_msg>Revert "Revert "[HOTFIX] Remove replaces line on 0001_squashed""<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='Channel',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=20)),
],
),
migrations.CreateModel(
name='Message',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('text', models.TextField(max_length=2000)),
('datetime', models.DateTimeField()),
('channel', models.ForeignKey(to='chat.Channel')),
('username', models.CharField(max_length=20)),
],
),
]
|
9122b16fdf023639725634d6c2b9b758f5adebd2
|
src/python/tests/storage/test_payload_builder.py
|
src/python/tests/storage/test_payload_builder.py
|
# -*- coding: utf-8 -*-
# FOGLAMP_BEGIN
# See: http://foglamp.readthedocs.io/
# FOGLAMP_END
import pytest
import json
from foglamp.storage.payload_builder import PayloadBuilder
__author__ = "Vaibhav Singhal"
__copyright__ = "Copyright (c) 2017 OSIsoft, LLC"
__license__ = "Apache 2.0"
__version__ = "${VERSION}"
class TestPayloadBuilder:
@pytest.mark.parametrize("test_input, expected", [
("name", {"columns": "name"}),
("name,id", {"columns": "name,id"})
])
def test_select_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.SELECT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("test", {"table": "test"}),
("test, test2", {"table": "test, test2"})
])
def test_from_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.FROM(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', '=', 'test'], {'where': {'column': 'name', 'condition': '=', 'value': 'test'}})
])
def test_where_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.WHERE(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(3, {'limit': 3}),
(3.5, {}),
('invalid', {})
])
def test_limit_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.LIMIT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', 'asc'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'desc'], {'sort': {'column': 'name', 'direction': 'desc'}}),
(['name'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'invalid'], {})
])
def test_order_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.ORDER_BY(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("name", {'group': 'name'}),
("name,id", {'group': 'name,id'})
])
def test_group_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.GROUP_BY(test_input).payload()
assert expected == json.loads(res)
|
Test added for Payload builder
|
Test added for Payload builder
|
Python
|
apache-2.0
|
foglamp/FogLAMP,foglamp/FogLAMP,foglamp/FogLAMP,foglamp/FogLAMP
|
Test added for Payload builder
|
# -*- coding: utf-8 -*-
# FOGLAMP_BEGIN
# See: http://foglamp.readthedocs.io/
# FOGLAMP_END
import pytest
import json
from foglamp.storage.payload_builder import PayloadBuilder
__author__ = "Vaibhav Singhal"
__copyright__ = "Copyright (c) 2017 OSIsoft, LLC"
__license__ = "Apache 2.0"
__version__ = "${VERSION}"
class TestPayloadBuilder:
@pytest.mark.parametrize("test_input, expected", [
("name", {"columns": "name"}),
("name,id", {"columns": "name,id"})
])
def test_select_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.SELECT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("test", {"table": "test"}),
("test, test2", {"table": "test, test2"})
])
def test_from_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.FROM(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', '=', 'test'], {'where': {'column': 'name', 'condition': '=', 'value': 'test'}})
])
def test_where_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.WHERE(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(3, {'limit': 3}),
(3.5, {}),
('invalid', {})
])
def test_limit_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.LIMIT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', 'asc'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'desc'], {'sort': {'column': 'name', 'direction': 'desc'}}),
(['name'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'invalid'], {})
])
def test_order_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.ORDER_BY(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("name", {'group': 'name'}),
("name,id", {'group': 'name,id'})
])
def test_group_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.GROUP_BY(test_input).payload()
assert expected == json.loads(res)
|
<commit_before><commit_msg>Test added for Payload builder<commit_after>
|
# -*- coding: utf-8 -*-
# FOGLAMP_BEGIN
# See: http://foglamp.readthedocs.io/
# FOGLAMP_END
import pytest
import json
from foglamp.storage.payload_builder import PayloadBuilder
__author__ = "Vaibhav Singhal"
__copyright__ = "Copyright (c) 2017 OSIsoft, LLC"
__license__ = "Apache 2.0"
__version__ = "${VERSION}"
class TestPayloadBuilder:
@pytest.mark.parametrize("test_input, expected", [
("name", {"columns": "name"}),
("name,id", {"columns": "name,id"})
])
def test_select_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.SELECT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("test", {"table": "test"}),
("test, test2", {"table": "test, test2"})
])
def test_from_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.FROM(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', '=', 'test'], {'where': {'column': 'name', 'condition': '=', 'value': 'test'}})
])
def test_where_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.WHERE(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(3, {'limit': 3}),
(3.5, {}),
('invalid', {})
])
def test_limit_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.LIMIT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', 'asc'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'desc'], {'sort': {'column': 'name', 'direction': 'desc'}}),
(['name'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'invalid'], {})
])
def test_order_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.ORDER_BY(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("name", {'group': 'name'}),
("name,id", {'group': 'name,id'})
])
def test_group_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.GROUP_BY(test_input).payload()
assert expected == json.loads(res)
|
Test added for Payload builder# -*- coding: utf-8 -*-
# FOGLAMP_BEGIN
# See: http://foglamp.readthedocs.io/
# FOGLAMP_END
import pytest
import json
from foglamp.storage.payload_builder import PayloadBuilder
__author__ = "Vaibhav Singhal"
__copyright__ = "Copyright (c) 2017 OSIsoft, LLC"
__license__ = "Apache 2.0"
__version__ = "${VERSION}"
class TestPayloadBuilder:
@pytest.mark.parametrize("test_input, expected", [
("name", {"columns": "name"}),
("name,id", {"columns": "name,id"})
])
def test_select_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.SELECT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("test", {"table": "test"}),
("test, test2", {"table": "test, test2"})
])
def test_from_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.FROM(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', '=', 'test'], {'where': {'column': 'name', 'condition': '=', 'value': 'test'}})
])
def test_where_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.WHERE(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(3, {'limit': 3}),
(3.5, {}),
('invalid', {})
])
def test_limit_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.LIMIT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', 'asc'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'desc'], {'sort': {'column': 'name', 'direction': 'desc'}}),
(['name'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'invalid'], {})
])
def test_order_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.ORDER_BY(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("name", {'group': 'name'}),
("name,id", {'group': 'name,id'})
])
def test_group_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.GROUP_BY(test_input).payload()
assert expected == json.loads(res)
|
<commit_before><commit_msg>Test added for Payload builder<commit_after># -*- coding: utf-8 -*-
# FOGLAMP_BEGIN
# See: http://foglamp.readthedocs.io/
# FOGLAMP_END
import pytest
import json
from foglamp.storage.payload_builder import PayloadBuilder
__author__ = "Vaibhav Singhal"
__copyright__ = "Copyright (c) 2017 OSIsoft, LLC"
__license__ = "Apache 2.0"
__version__ = "${VERSION}"
class TestPayloadBuilder:
@pytest.mark.parametrize("test_input, expected", [
("name", {"columns": "name"}),
("name,id", {"columns": "name,id"})
])
def test_select_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.SELECT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("test", {"table": "test"}),
("test, test2", {"table": "test, test2"})
])
def test_from_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.FROM(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', '=', 'test'], {'where': {'column': 'name', 'condition': '=', 'value': 'test'}})
])
def test_where_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.WHERE(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(3, {'limit': 3}),
(3.5, {}),
('invalid', {})
])
def test_limit_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.LIMIT(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
(['name', 'asc'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'desc'], {'sort': {'column': 'name', 'direction': 'desc'}}),
(['name'], {'sort': {'column': 'name', 'direction': 'asc'}}),
(['name', 'invalid'], {})
])
def test_order_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.ORDER_BY(test_input).payload()
assert expected == json.loads(res)
@pytest.mark.parametrize("test_input, expected", [
("name", {'group': 'name'}),
("name,id", {'group': 'name,id'})
])
def test_group_by_payload(self, test_input, expected):
pb = PayloadBuilder()
res = pb.GROUP_BY(test_input).payload()
assert expected == json.loads(res)
|
|
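Aside: the tests above exercise each PayloadBuilder clause in isolation, but the builder is naturally used as a fluent chain. A minimal sketch of that chained style, assuming the same foglamp package is importable and that clauses compose exactly as the individual assertions suggest:

import json
from foglamp.storage.payload_builder import PayloadBuilder

# Hypothetical chained usage; each clause below is asserted separately in
# the parametrized tests, so the combined payload is an inference, not a
# documented contract.
payload = (PayloadBuilder()
           .SELECT("name,id")
           .FROM("test")
           .WHERE(['name', '=', 'test'])
           .LIMIT(3)
           .payload())
print(json.loads(payload))
# Expected under these assumptions:
# {'columns': 'name,id', 'table': 'test',
#  'where': {'column': 'name', 'condition': '=', 'value': 'test'}, 'limit': 3}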
c44c2db96e9901a32077a8922e52f1d1219be2cb
|
ceph_deploy/tests/parser/test_config.py
|
ceph_deploy/tests/parser/test_config.py
|
import pytest
from ceph_deploy.cli import get_parser
SUBCMDS_WITH_ARGS = ['push', 'pull']
class TestParserConfig(object):
def setup(self):
self.parser = get_parser()
def test_config_help(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config --help'.split())
out, err = capsys.readouterr()
assert 'usage: ceph-deploy config' in out
assert 'positional arguments:' in out
assert 'optional arguments:' in out
@pytest.mark.parametrize('cmd', SUBCMDS_WITH_ARGS)
def test_config_subcommands_with_args(self, cmd):
self.parser.parse_args(['config'] + ['%s' % cmd] + ['host1'])
def test_config_invalid_subcommand(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config bork'.split())
out, err = capsys.readouterr()
assert 'invalid choice' in err
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_push_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config push'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_push_one_host(self):
args = self.parser.parse_args('config push host1'.split())
assert args.client == ['host1']
def test_config_push_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config push'.split() + hostnames)
assert args.client == hostnames
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_pull_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config pull'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_pull_one_host(self):
args = self.parser.parse_args('config pull host1'.split())
assert args.client == ['host1']
def test_config_pull_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config pull'.split() + hostnames)
assert args.client == hostnames
|
Add argparse tests for ceph-deploy config
|
[RM-11742] Add argparse tests for ceph-deploy config
Signed-off-by: Travis Rhoden <e5e44d6dbac12e32e01c3bb8b67940d8b42e225b@redhat.com>
|
Python
|
mit
|
zhouyuan/ceph-deploy,codenrhoden/ceph-deploy,isyippee/ceph-deploy,ghxandsky/ceph-deploy,branto1/ceph-deploy,imzhulei/ceph-deploy,codenrhoden/ceph-deploy,ceph/ceph-deploy,trhoden/ceph-deploy,Vicente-Cheng/ceph-deploy,osynge/ceph-deploy,SUSE/ceph-deploy,SUSE/ceph-deploy,shenhequnying/ceph-deploy,shenhequnying/ceph-deploy,osynge/ceph-deploy,branto1/ceph-deploy,isyippee/ceph-deploy,imzhulei/ceph-deploy,trhoden/ceph-deploy,ghxandsky/ceph-deploy,ceph/ceph-deploy,zhouyuan/ceph-deploy,SUSE/ceph-deploy-to-be-deleted,SUSE/ceph-deploy-to-be-deleted,Vicente-Cheng/ceph-deploy
|
[RM-11742] Add argparse tests for ceph-deploy config
Signed-off-by: Travis Rhoden <e5e44d6dbac12e32e01c3bb8b67940d8b42e225b@redhat.com>
|
import pytest
from ceph_deploy.cli import get_parser
SUBCMDS_WITH_ARGS = ['push', 'pull']
class TestParserConfig(object):
def setup(self):
self.parser = get_parser()
def test_config_help(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config --help'.split())
out, err = capsys.readouterr()
assert 'usage: ceph-deploy config' in out
assert 'positional arguments:' in out
assert 'optional arguments:' in out
@pytest.mark.parametrize('cmd', SUBCMDS_WITH_ARGS)
def test_config_subcommands_with_args(self, cmd):
self.parser.parse_args(['config'] + ['%s' % cmd] + ['host1'])
def test_config_invalid_subcommand(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config bork'.split())
out, err = capsys.readouterr()
assert 'invalid choice' in err
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_push_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config push'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_push_one_host(self):
args = self.parser.parse_args('config push host1'.split())
assert args.client == ['host1']
def test_config_push_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config push'.split() + hostnames)
assert args.client == hostnames
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_pull_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config pull'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_pull_one_host(self):
args = self.parser.parse_args('config pull host1'.split())
assert args.client == ['host1']
def test_config_pull_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config pull'.split() + hostnames)
assert args.client == hostnames
|
<commit_before><commit_msg>[RM-11742] Add argparse tests for ceph-deploy config
Signed-off-by: Travis Rhoden <e5e44d6dbac12e32e01c3bb8b67940d8b42e225b@redhat.com><commit_after>
|
import pytest
from ceph_deploy.cli import get_parser
SUBCMDS_WITH_ARGS = ['push', 'pull']
class TestParserConfig(object):
def setup(self):
self.parser = get_parser()
def test_config_help(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config --help'.split())
out, err = capsys.readouterr()
assert 'usage: ceph-deploy config' in out
assert 'positional arguments:' in out
assert 'optional arguments:' in out
@pytest.mark.parametrize('cmd', SUBCMDS_WITH_ARGS)
def test_config_subcommands_with_args(self, cmd):
self.parser.parse_args(['config'] + ['%s' % cmd] + ['host1'])
def test_config_invalid_subcommand(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config bork'.split())
out, err = capsys.readouterr()
assert 'invalid choice' in err
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_push_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config push'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_push_one_host(self):
args = self.parser.parse_args('config push host1'.split())
assert args.client == ['host1']
def test_config_push_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config push'.split() + hostnames)
assert args.client == hostnames
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_pull_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config pull'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_pull_one_host(self):
args = self.parser.parse_args('config pull host1'.split())
assert args.client == ['host1']
def test_config_pull_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config pull'.split() + hostnames)
assert args.client == hostnames
|
[RM-11742] Add argparse tests for ceph-deploy config
Signed-off-by: Travis Rhoden <e5e44d6dbac12e32e01c3bb8b67940d8b42e225b@redhat.com>import pytest
from ceph_deploy.cli import get_parser
SUBCMDS_WITH_ARGS = ['push', 'pull']
class TestParserConfig(object):
def setup(self):
self.parser = get_parser()
def test_config_help(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config --help'.split())
out, err = capsys.readouterr()
assert 'usage: ceph-deploy config' in out
assert 'positional arguments:' in out
assert 'optional arguments:' in out
@pytest.mark.parametrize('cmd', SUBCMDS_WITH_ARGS)
def test_config_subcommands_with_args(self, cmd):
self.parser.parse_args(['config'] + ['%s' % cmd] + ['host1'])
def test_config_invalid_subcommand(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config bork'.split())
out, err = capsys.readouterr()
assert 'invalid choice' in err
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_push_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config push'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_push_one_host(self):
args = self.parser.parse_args('config push host1'.split())
assert args.client == ['host1']
def test_config_push_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config push'.split() + hostnames)
assert args.client == hostnames
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_pull_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config pull'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_pull_one_host(self):
args = self.parser.parse_args('config pull host1'.split())
assert args.client == ['host1']
def test_config_pull_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config pull'.split() + hostnames)
assert args.client == hostnames
|
<commit_before><commit_msg>[RM-11742] Add argparse tests for ceph-deploy config
Signed-off-by: Travis Rhoden <e5e44d6dbac12e32e01c3bb8b67940d8b42e225b@redhat.com><commit_after>import pytest
from ceph_deploy.cli import get_parser
SUBCMDS_WITH_ARGS = ['push', 'pull']
class TestParserConfig(object):
def setup(self):
self.parser = get_parser()
def test_config_help(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config --help'.split())
out, err = capsys.readouterr()
assert 'usage: ceph-deploy config' in out
assert 'positional arguments:' in out
assert 'optional arguments:' in out
@pytest.mark.parametrize('cmd', SUBCMDS_WITH_ARGS)
def test_config_subcommands_with_args(self, cmd):
self.parser.parse_args(['config'] + ['%s' % cmd] + ['host1'])
def test_config_invalid_subcommand(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config bork'.split())
out, err = capsys.readouterr()
assert 'invalid choice' in err
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_push_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config push'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_push_one_host(self):
args = self.parser.parse_args('config push host1'.split())
assert args.client == ['host1']
def test_config_push_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config push'.split() + hostnames)
assert args.client == hostnames
@pytest.mark.skipif(reason="http://tracker.ceph.com/issues/12150")
def test_config_pull_host_required(self, capsys):
with pytest.raises(SystemExit):
self.parser.parse_args('config pull'.split())
out, err = capsys.readouterr()
assert "error: too few arguments" in err
def test_config_pull_one_host(self):
args = self.parser.parse_args('config pull host1'.split())
assert args.client == ['host1']
def test_config_pull_multiple_hosts(self):
hostnames = ['host1', 'host2', 'host3']
args = self.parser.parse_args('config pull'.split() + hostnames)
assert args.client == hostnames
|
|
bbbe1010b7aaee6dcec9f2abb3f39a5f7ddee903
|
lintcode/Easy/085_Insert_Node_in_a_Binary_Search_Tree.py
|
lintcode/Easy/085_Insert_Node_in_a_Binary_Search_Tree.py
|
"""
Definition of TreeNode:
class TreeNode:
def __init__(self, val):
self.val = val
self.left, self.right = None, None
"""
class Solution:
"""
@param root: The root of the binary search tree.
@param node: insert this node into the binary search tree.
@return: The root of the new binary search tree.
"""
def insertNode(self, root, node):
# write your code here
# Iterative
if (root is None):
return node
parent = None
target = root
dir = None
while (target):
parent = target
if (target.val < node.val):
target = target.right
dir = 'right'
else:
target = target.left
dir = 'left'
if (target is None):
if (dir == 'left'):
parent.left = node
else:
parent.right = node
break
        return root
# Recursive
# if (root is None):
# return node
# if (root.val > node.val):
# root.left = self.insertNode(root.left, node)
# else:
# root.right = self.insertNode(root.right, node)
# return root
|
Add solution to lintcode question 85
|
Add solution to lintcode question 85
|
Python
|
mit
|
Rhadow/leetcode,Rhadow/leetcode,Rhadow/leetcode,Rhadow/leetcode
|
Add solution to lintcode question 85
|
"""
Definition of TreeNode:
class TreeNode:
def __init__(self, val):
self.val = val
self.left, self.right = None, None
"""
class Solution:
"""
@param root: The root of the binary search tree.
@param node: insert this node into the binary search tree.
@return: The root of the new binary search tree.
"""
def insertNode(self, root, node):
# write your code here
# Iterative
if (root is None):
return node
parent = None
target = root
dir = None
while (target):
parent = target
if (target.val < node.val):
target = target.right
dir = 'right'
else:
target = target.left
dir = 'left'
if (target is None):
if (dir == 'left'):
parent.left = node
else:
parent.right = node
break
        return root
# Recursive
# if (root is None):
# return node
# if (root.val > node.val):
# root.left = self.insertNode(root.left, node)
# else:
# root.right = self.insertNode(root.right, node)
# return root
|
<commit_before><commit_msg>Add solution to lintcode question 85<commit_after>
|
"""
Definition of TreeNode:
class TreeNode:
def __init__(self, val):
self.val = val
self.left, self.right = None, None
"""
class Solution:
"""
@param root: The root of the binary search tree.
@param node: insert this node into the binary search tree.
@return: The root of the new binary search tree.
"""
def insertNode(self, root, node):
# write your code here
# Iterative
if (root is None):
return node
parent = None
target = root
dir = None
while (target):
parent = target
if (target.val < node.val):
target = target.right
dir = 'right'
else:
target = target.left
dir = 'left'
if (target is None):
if (dir == 'left'):
parent.left = node
else:
parent.right = node
break
        return root
# Recursive
# if (root is None):
# return node
# if (root.val > node.val):
# root.left = self.insertNode(root.left, node)
# else:
# root.right = self.insertNode(root.right, node)
# return root
|
Add solution to lintcode question 85"""
Definition of TreeNode:
class TreeNode:
def __init__(self, val):
self.val = val
self.left, self.right = None, None
"""
class Solution:
"""
@param root: The root of the binary search tree.
@param node: insert this node into the binary search tree.
@return: The root of the new binary search tree.
"""
def insertNode(self, root, node):
# write your code here
# Iterative
if (root is None):
return node
parent = None
target = root
dir = None
while (target):
parent = target
if (target.val < node.val):
target = target.right
dir = 'right'
else:
target = target.left
dir = 'left'
if (target is None):
if (dir == 'left'):
parent.left = node
else:
parent.right = node
break
        return root
# Recursive
# if (root is None):
# return node
# if (root.val > node.val):
# root.left = self.insertNode(root.left, node)
# else:
# root.right = self.insertNode(root.right, node)
# return root
|
<commit_before><commit_msg>Add solution to lintcode question 85<commit_after>"""
Definition of TreeNode:
class TreeNode:
def __init__(self, val):
self.val = val
self.left, self.right = None, None
"""
class Solution:
"""
@param root: The root of the binary search tree.
@param node: insert this node into the binary search tree.
@return: The root of the new binary search tree.
"""
def insertNode(self, root, node):
# write your code here
# Iterative
if (root is None):
return node
parent = None
target = root
dir = None
while (target):
parent = target
if (target.val < node.val):
target = target.right
dir = 'right'
else:
target = target.left
dir = 'left'
if (target is None):
if (dir == 'left'):
parent.left = node
else:
parent.right = node
break
        return root
# Recursive
# if (root is None):
# return node
# if (root.val > node.val):
# root.left = self.insertNode(root.left, node)
# else:
# root.right = self.insertNode(root.right, node)
# return root
|
|
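Aside: a quick, self-contained exercise of the iterative insertNode above. TreeNode is reconstructed from the record's docstring and the Solution class is assumed to be in scope, so treat this purely as an illustrative sketch:

# TreeNode restated from the docstring in the record above (hypothetical).
class TreeNode:
    def __init__(self, val):
        self.val = val
        self.left, self.right = None, None

def inorder(node):
    # In-order traversal of a BST yields its values in sorted order.
    return inorder(node.left) + [node.val] + inorder(node.right) if node else []

root = TreeNode(2)
sol = Solution()  # the Solution class defined in the record above
for v in (1, 4, 3):
    root = sol.insertNode(root, TreeNode(v))
print(inorder(root))  # [1, 2, 3, 4]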
3964db81c5b646102514d0be380cfd7cde3f2fa8
|
toast/tod/pointing_math.py
|
toast/tod/pointing_math.py
|
# Copyright (c) 2015 by the parties listed in the AUTHORS file.
# All rights reserved. Use of this source code is governed by
# a BSD-style license that can be found in the LICENSE file.
import unittest
import numpy as np
import healpy as hp
import quaternionarray as qa
|
Add empty file stub for pointing math operations.
|
Add empty file stub for pointing math operations.
|
Python
|
bsd-2-clause
|
tskisner/pytoast,tskisner/pytoast
|
Add empty file stub for pointing math operations.
|
# Copyright (c) 2015 by the parties listed in the AUTHORS file.
# All rights reserved. Use of this source code is governed by
# a BSD-style license that can be found in the LICENSE file.
import unittest
import numpy as np
import healpy as hp
import quaternionarray as qa
|
<commit_before><commit_msg>Add empty file stub for pointing math operations.<commit_after>
|
# Copyright (c) 2015 by the parties listed in the AUTHORS file.
# All rights reserved. Use of this source code is governed by
# a BSD-style license that can be found in the LICENSE file.
import unittest
import numpy as np
import healpy as hp
import quaternionarray as qa
|
Add empty file stub for pointing math operations.# Copyright (c) 2015 by the parties listed in the AUTHORS file.
# All rights reserved. Use of this source code is governed by
# a BSD-style license that can be found in the LICENSE file.
import unittest
import numpy as np
import healpy as hp
import quaternionarray as qa
|
<commit_before><commit_msg>Add empty file stub for pointing math operations.<commit_after># Copyright (c) 2015 by the parties listed in the AUTHORS file.
# All rights reserved. Use of this source code is governed by
# a BSD-style license that can be found in the LICENSE file.
import unittest
import numpy as np
import healpy as hp
import quaternionarray as qa
|
|
d9c4ebf7bea31b6479d73d5c24909337582705f3
|
tests/unit/test_pep517_config.py
|
tests/unit/test_pep517_config.py
|
import pytest
from pip._internal.commands import create_command
@pytest.mark.parametrize(
("command", "expected"),
[
("install", True),
("wheel", True),
("freeze", False),
],
)
def test_supports_config(command: str, expected: bool) -> None:
c = create_command(command)
options, _ = c.parse_args([])
assert hasattr(options, "config_settings") == expected
def test_set_config_value_true() -> None:
i = create_command("install")
# Invalid argument exits with an error
with pytest.raises(SystemExit):
options, _ = i.parse_args(["xxx", "--config-settings", "x"])
def test_set_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x=hello"])
assert options.config_settings == {"x": "hello"}
def test_set_config_empty_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x="])
assert options.config_settings == {"x": ""}
def test_replace_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(
["xxx", "--config-settings", "x=hello", "--config-settings", "x=world"]
)
assert options.config_settings == {"x": "world"}
|
Add tests for basic --config-settings parsing
|
Add tests for basic --config-settings parsing
|
Python
|
mit
|
pradyunsg/pip,pypa/pip,pradyunsg/pip,pypa/pip,pfmoore/pip,sbidoul/pip,pfmoore/pip,sbidoul/pip
|
Add tests for basic --config-settings parsing
|
import pytest
from pip._internal.commands import create_command
@pytest.mark.parametrize(
("command", "expected"),
[
("install", True),
("wheel", True),
("freeze", False),
],
)
def test_supports_config(command: str, expected: bool) -> None:
c = create_command(command)
options, _ = c.parse_args([])
assert hasattr(options, "config_settings") == expected
def test_set_config_value_true() -> None:
i = create_command("install")
# Invalid argument exits with an error
with pytest.raises(SystemExit):
options, _ = i.parse_args(["xxx", "--config-settings", "x"])
def test_set_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x=hello"])
assert options.config_settings == {"x": "hello"}
def test_set_config_empty_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x="])
assert options.config_settings == {"x": ""}
def test_replace_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(
["xxx", "--config-settings", "x=hello", "--config-settings", "x=world"]
)
assert options.config_settings == {"x": "world"}
|
<commit_before><commit_msg>Add tests for basic --config-settings parsing<commit_after>
|
import pytest
from pip._internal.commands import create_command
@pytest.mark.parametrize(
("command", "expected"),
[
("install", True),
("wheel", True),
("freeze", False),
],
)
def test_supports_config(command: str, expected: bool) -> None:
c = create_command(command)
options, _ = c.parse_args([])
assert hasattr(options, "config_settings") == expected
def test_set_config_value_true() -> None:
i = create_command("install")
# Invalid argument exits with an error
with pytest.raises(SystemExit):
options, _ = i.parse_args(["xxx", "--config-settings", "x"])
def test_set_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x=hello"])
assert options.config_settings == {"x": "hello"}
def test_set_config_empty_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x="])
assert options.config_settings == {"x": ""}
def test_replace_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(
["xxx", "--config-settings", "x=hello", "--config-settings", "x=world"]
)
assert options.config_settings == {"x": "world"}
|
Add tests for basic --config-settings parsingimport pytest
from pip._internal.commands import create_command
@pytest.mark.parametrize(
("command", "expected"),
[
("install", True),
("wheel", True),
("freeze", False),
],
)
def test_supports_config(command: str, expected: bool) -> None:
c = create_command(command)
options, _ = c.parse_args([])
assert hasattr(options, "config_settings") == expected
def test_set_config_value_true() -> None:
i = create_command("install")
# Invalid argument exits with an error
with pytest.raises(SystemExit):
options, _ = i.parse_args(["xxx", "--config-settings", "x"])
def test_set_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x=hello"])
assert options.config_settings == {"x": "hello"}
def test_set_config_empty_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x="])
assert options.config_settings == {"x": ""}
def test_replace_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(
["xxx", "--config-settings", "x=hello", "--config-settings", "x=world"]
)
assert options.config_settings == {"x": "world"}
|
<commit_before><commit_msg>Add tests for basic --config-settings parsing<commit_after>import pytest
from pip._internal.commands import create_command
@pytest.mark.parametrize(
("command", "expected"),
[
("install", True),
("wheel", True),
("freeze", False),
],
)
def test_supports_config(command: str, expected: bool) -> None:
c = create_command(command)
options, _ = c.parse_args([])
assert hasattr(options, "config_settings") == expected
def test_set_config_value_true() -> None:
i = create_command("install")
# Invalid argument exits with an error
with pytest.raises(SystemExit):
options, _ = i.parse_args(["xxx", "--config-settings", "x"])
def test_set_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x=hello"])
assert options.config_settings == {"x": "hello"}
def test_set_config_empty_value() -> None:
i = create_command("install")
options, _ = i.parse_args(["xxx", "--config-settings", "x="])
assert options.config_settings == {"x": ""}
def test_replace_config_value() -> None:
i = create_command("install")
options, _ = i.parse_args(
["xxx", "--config-settings", "x=hello", "--config-settings", "x=world"]
)
assert options.config_settings == {"x": "world"}
|
|
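Aside: the option under test feeds the PEP 517 config_settings mapping that pip hands to build backends. A short sketch of the parsed result, reusing create_command the same way the tests do; the combined two-key outcome is inferred from the separate assertions above, not asserted directly:

from pip._internal.commands import create_command

cmd = create_command("install")
options, args = cmd.parse_args(
    ["xxx", "--config-settings", "x=hello", "--config-settings", "y="]
)
print(options.config_settings)  # {'x': 'hello', 'y': ''} per the tests above
print(args)                     # ['xxx'], the requirement left over after parsing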
4e8d972a49d47eaede707796a3d1b439bb7dbccc
|
tests/ChromskyForm/Inverse/OneRuleWithMultipleNonterminalsTest.py
|
tests/ChromskyForm/Inverse/OneRuleWithMultipleNonterminalsTest.py
|
#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 28.08.2017 14:37
:Licence GNUv3
Part of grammpy-transforms
"""
from unittest import TestCase, main
from grammpy import *
from grammpy_transforms import ContextFree, InverseContextFree
from pyparsers import cyk
class S(Nonterminal): pass
class A(Nonterminal): pass
class B(Nonterminal): pass
class C(Nonterminal): pass
class RuleSABC(Rule):
rule = ([S], [A, B, C])
class RuleA0(Rule):
rule = ([A], [0])
class RuleB1(Rule):
rule = ([B], [1])
class RuleC2(Rule):
rule = ([C], [2])
class OneRuleWithMultipleNonterminalsTest(TestCase):
def test_transform(self):
g = Grammar(nonterminals=[S, A, B, C],
rules=[RuleSABC, RuleA0, RuleB1, RuleC2])
com = ContextFree.transform_to_chomsky_normal_form(g)
pars = cyk(com, [0, 1, 2])
trans = InverseContextFree.transform_from_chomsky_normal_form(pars)
self.assertIsInstance(trans, S)
self.assertIsInstance(trans.to_rule, RuleSABC)
a = trans.to_rule.to_symbols[0]
b = trans.to_rule.to_symbols[1]
c = trans.to_rule.to_symbols[2]
self.assertIsInstance(a, A)
self.assertIs(a.to_rule, RuleA0)
self.assertIsInstance(a.to_rule.to_symbols[0], Terminal)
self.assertEqual(a.to_rule.to_symbols[0].s, 0)
self.assertIsInstance(b, B)
self.assertIs(b.to_rule, RuleB1)
self.assertIsInstance(b.to_rule.to_symbols[0], Terminal)
self.assertEqual(b.to_rule.to_symbols[0].s, 1)
self.assertIsInstance(c, C)
self.assertIs(c.to_rule, RuleC2)
self.assertIsInstance(c.to_rule.to_symbols[0], Terminal)
self.assertEqual(c.to_rule.to_symbols[0].s, 2)
if __name__ == '__main__':
main()
|
Add test of inverse transformation of chomsky form
|
Add test of inverse transformation of chomsky form
|
Python
|
mit
|
PatrikValkovic/grammpy
|
Add test of inverse transformation of chomsky form
|
#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 28.08.2017 14:37
:Licence GNUv3
Part of grammpy-transforms
"""
from unittest import TestCase, main
from grammpy import *
from grammpy_transforms import ContextFree, InverseContextFree
from pyparsers import cyk
class S(Nonterminal): pass
class A(Nonterminal): pass
class B(Nonterminal): pass
class C(Nonterminal): pass
class RuleSABC(Rule):
rule = ([S], [A, B, C])
class RuleA0(Rule):
rule = ([A], [0])
class RuleB1(Rule):
rule = ([B], [1])
class RuleC2(Rule):
rule = ([C], [2])
class OneRuleWithMultipleNonterminalsTest(TestCase):
def test_transform(self):
g = Grammar(nonterminals=[S, A, B, C],
rules=[RuleSABC, RuleA0, RuleB1, RuleC2])
com = ContextFree.transform_to_chomsky_normal_form(g)
pars = cyk(com, [0, 1, 2])
trans = InverseContextFree.transform_from_chomsky_normal_form(pars)
self.assertIsInstance(trans, S)
self.assertIsInstance(trans.to_rule, RuleSABC)
a = trans.to_rule.to_symbols[0]
b = trans.to_rule.to_symbols[1]
c = trans.to_rule.to_symbols[2]
self.assertIsInstance(a, A)
self.assertIs(a.to_rule, RuleA0)
self.assertIsInstance(a.to_rule.to_symbols[0], Terminal)
self.assertEqual(a.to_rule.to_symbols[0].s, 0)
self.assertIsInstance(b, B)
self.assertIs(b.to_rule, RuleB1)
self.assertIsInstance(b.to_rule.to_symbols[0], Terminal)
self.assertEqual(b.to_rule.to_symbols[0].s, 1)
self.assertIsInstance(c, C)
self.assertIs(c.to_rule, RuleC2)
self.assertIsInstance(c.to_rule.to_symbols[0], Terminal)
self.assertEqual(c.to_rule.to_symbols[0].s, 2)
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add test of inverse transformation of chomsky form<commit_after>
|
#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 28.08.2017 14:37
:Licence GNUv3
Part of grammpy-transforms
"""
from unittest import TestCase, main
from grammpy import *
from grammpy_transforms import ContextFree, InverseContextFree
from pyparsers import cyk
class S(Nonterminal): pass
class A(Nonterminal): pass
class B(Nonterminal): pass
class C(Nonterminal): pass
class RuleSABC(Rule):
rule = ([S], [A, B, C])
class RuleA0(Rule):
rule = ([A], [0])
class RuleB1(Rule):
rule = ([B], [1])
class RuleC2(Rule):
rule = ([C], [2])
class OneRuleWithMultipleNonterminalsTest(TestCase):
def test_transform(self):
g = Grammar(nonterminals=[S, A, B, C],
rules=[RuleSABC, RuleA0, RuleB1, RuleC2])
com = ContextFree.transform_to_chomsky_normal_form(g)
pars = cyk(com, [0, 1, 2])
trans = InverseContextFree.transform_from_chomsky_normal_form(pars)
self.assertIsInstance(trans, S)
self.assertIsInstance(trans.to_rule, RuleSABC)
a = trans.to_rule.to_symbols[0]
b = trans.to_rule.to_symbols[1]
c = trans.to_rule.to_symbols[2]
self.assertIsInstance(a, A)
self.assertIs(a.to_rule, RuleA0)
self.assertIsInstance(a.to_rule.to_symbols[0], Terminal)
self.assertEqual(a.to_rule.to_symbols[0].s, 0)
self.assertIsInstance(b, B)
self.assertIs(b.to_rule, RuleB1)
self.assertIsInstance(b.to_rule.to_symbols[0], Terminal)
self.assertEqual(b.to_rule.to_symbols[0].s, 1)
self.assertIsInstance(c, C)
self.assertIs(c.to_rule, RuleC2)
self.assertIsInstance(c.to_rule.to_symbols[0], Terminal)
self.assertEqual(c.to_rule.to_symbols[0].s, 2)
if __name__ == '__main__':
main()
|
Add test of inverse transformation of chomsky form#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 28.08.2017 14:37
:Licence GNUv3
Part of grammpy-transforms
"""
from unittest import TestCase, main
from grammpy import *
from grammpy_transforms import ContextFree, InverseContextFree
from pyparsers import cyk
class S(Nonterminal): pass
class A(Nonterminal): pass
class B(Nonterminal): pass
class C(Nonterminal): pass
class RuleSABC(Rule):
rule = ([S], [A, B, C])
class RuleA0(Rule):
rule = ([A], [0])
class RuleB1(Rule):
rule = ([B], [1])
class RuleC2(Rule):
rule = ([C], [2])
class OneRuleWithMultipleNonterminalsTest(TestCase):
def test_transform(self):
g = Grammar(nonterminals=[S, A, B, C],
rules=[RuleSABC, RuleA0, RuleB1, RuleC2])
com = ContextFree.transform_to_chomsky_normal_form(g)
pars = cyk(com, [0, 1, 2])
trans = InverseContextFree.transform_from_chomsky_normal_form(pars)
self.assertIsInstance(trans, S)
self.assertIsInstance(trans.to_rule, RuleSABC)
a = trans.to_rule.to_symbols[0]
b = trans.to_rule.to_symbols[1]
c = trans.to_rule.to_symbols[2]
self.assertIsInstance(a, A)
self.assertIs(a.to_rule, RuleA0)
self.assertIsInstance(a.to_rule.to_symbols[0], Terminal)
self.assertEqual(a.to_rule.to_symbols[0].s, 0)
self.assertIsInstance(b, B)
self.assertIs(b.to_rule, RuleB1)
self.assertIsInstance(b.to_rule.to_symbols[0], Terminal)
self.assertEqual(b.to_rule.to_symbols[0].s, 1)
self.assertIsInstance(c, C)
self.assertIs(c.to_rule, RuleC2)
self.assertIsInstance(c.to_rule.to_symbols[0], Terminal)
self.assertEqual(c.to_rule.to_symbols[0].s, 2)
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add test of inverse transformation of chomsky form<commit_after>#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 28.08.2017 14:37
:Licence GNUv3
Part of grammpy-transforms
"""
from unittest import TestCase, main
from grammpy import *
from grammpy_transforms import ContextFree, InverseContextFree
from pyparsers import cyk
class S(Nonterminal): pass
class A(Nonterminal): pass
class B(Nonterminal): pass
class C(Nonterminal): pass
class RuleSABC(Rule):
rule = ([S], [A, B, C])
class RuleA0(Rule):
rule = ([A], [0])
class RuleB1(Rule):
rule = ([B], [1])
class RuleC2(Rule):
rule = ([C], [2])
class OneRuleWithMultipleNonterminalsTest(TestCase):
def test_transform(self):
g = Grammar(nonterminals=[S, A, B, C],
rules=[RuleSABC, RuleA0, RuleB1, RuleC2])
com = ContextFree.transform_to_chomsky_normal_form(g)
pars = cyk(com, [0, 1, 2])
trans = InverseContextFree.transform_from_chomsky_normal_form(pars)
self.assertIsInstance(trans, S)
self.assertIsInstance(trans.to_rule, RuleSABC)
a = trans.to_rule.to_symbols[0]
b = trans.to_rule.to_symbols[1]
c = trans.to_rule.to_symbols[2]
self.assertIsInstance(a, A)
self.assertIs(a.to_rule, RuleA0)
self.assertIsInstance(a.to_rule.to_symbols[0], Terminal)
self.assertEqual(a.to_rule.to_symbols[0].s, 0)
self.assertIsInstance(b, B)
self.assertIs(b.to_rule, RuleB1)
self.assertIsInstance(b.to_rule.to_symbols[0], Terminal)
self.assertEqual(b.to_rule.to_symbols[0].s, 1)
self.assertIsInstance(c, C)
self.assertIs(c.to_rule, RuleC2)
self.assertIsInstance(c.to_rule.to_symbols[0], Terminal)
self.assertEqual(c.to_rule.to_symbols[0].s, 2)
if __name__ == '__main__':
main()
|
|
778dc4d9096e5dbfcf84f2b5b12417158986dda6
|
tests/rules_tests/grammarManipulation_tests/IterationTest.py
|
tests/rules_tests/grammarManipulation_tests/IterationTest.py
|
#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 23.06.2017 16:39
:Licence GNUv3
Part of grammpy
"""
from unittest import main, TestCase
from grammpy import Grammar, Nonterminal, Rule
from ..grammar import *
class IterationTest(TestCase):
pass
if __name__ == '__main__':
main()
|
Add file for rule iteration tests
|
Add file for rule iteration tests
|
Python
|
mit
|
PatrikValkovic/grammpy
|
Add file for rule iteration tests
|
#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 23.06.2017 16:39
:Licence GNUv3
Part of grammpy
"""
from unittest import main, TestCase
from grammpy import Grammar, Nonterminal, Rule
from ..grammar import *
class IterationTest(TestCase):
pass
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add file for rule iteration tests<commit_after>
|
#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 23.06.2017 16:39
:Licence GNUv3
Part of grammpy
"""
from unittest import main, TestCase
from grammpy import Grammar, Nonterminal, Rule
from ..grammar import *
class IterationTest(TestCase):
pass
if __name__ == '__main__':
main()
|
Add file for rule iteration tests#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 23.06.2017 16:39
:Licence GNUv3
Part of grammpy
"""
from unittest import main, TestCase
from grammpy import Grammar, Nonterminal, Rule
from ..grammar import *
class IterationTest(TestCase):
pass
if __name__ == '__main__':
main()
|
<commit_before><commit_msg>Add file for rule iteration tests<commit_after>#!/usr/bin/env python
"""
:Author Patrik Valkovic
:Created 23.06.2017 16:39
:Licence GNUv3
Part of grammpy
"""
from unittest import main, TestCase
from grammpy import Grammar, Nonterminal, Rule
from ..grammar import *
class IterationTest(TestCase):
pass
if __name__ == '__main__':
main()
|
|
a9ca199bd53591b5c7f0ecd40ceb97b41479208e
|
tests/test_iframe.py
|
tests/test_iframe.py
|
# -*- coding: utf-8 -*-
""""
Folium Element Module class IFrame
----------------------
"""
import branca.element as elem
from selenium import webdriver
import base64
def test_create_empty_iframe():
iframe = elem.IFrame()
iframe.render()
def test_create_iframe():
    iframe = elem.IFrame(html="<p>test content</p>", width=60, height=45)
iframe.render()
def test_rendering_utf8_iframe():
iframe = elem.IFrame(html="<p>Cerrahpaşa Tıp Fakültesi</p>")
driver = webdriver.PhantomJS()
driver.get("data:text/html," + iframe.render())
    driver.switch_to.frame(0)
assert "Cerrahpaşa Tıp Fakültesi" in driver.page_source
|
Add test of IFrame element, creation and rendering
|
Add test of IFrame element, creation and rendering
|
Python
|
mit
|
python-visualization/branca,nanodan/branca,nanodan/branca,python-visualization/branca
|
Add test of IFrame element, creation and rendering
|
# -*- coding: utf-8 -*-
""""
Folium Element Module class IFrame
----------------------
"""
import branca.element as elem
from selenium import webdriver
import base64
def test_create_empty_iframe():
iframe = elem.IFrame()
iframe.render()
def test_create_iframe():
    iframe = elem.IFrame(html="<p>test content</p>", width=60, height=45)
iframe.render()
def test_rendering_utf8_iframe():
iframe = elem.IFrame(html="<p>Cerrahpaşa Tıp Fakültesi</p>")
driver = webdriver.PhantomJS()
driver.get("data:text/html," + iframe.render())
    driver.switch_to.frame(0)
assert "Cerrahpaşa Tıp Fakültesi" in driver.page_source
|
<commit_before><commit_msg>Add test of IFrame element, creation and rendering<commit_after>
|
# -*- coding: utf-8 -*-
""""
Folium Element Module class IFrame
----------------------
"""
import branca.element as elem
from selenium import webdriver
import base64
def test_create_empty_iframe():
iframe = elem.IFrame()
iframe.render()
def test_create_iframe():
    iframe = elem.IFrame(html="<p>test content</p>", width=60, height=45)
iframe.render()
def test_rendering_utf8_iframe():
iframe = elem.IFrame(html="<p>Cerrahpaşa Tıp Fakültesi</p>")
driver = webdriver.PhantomJS()
driver.get("data:text/html," + iframe.render())
    driver.switch_to.frame(0)
assert "Cerrahpaşa Tıp Fakültesi" in driver.page_source
|
Add test of IFrame element, creation and rendering# -*- coding: utf-8 -*-
""""
Folium Element Module class IFrame
----------------------
"""
import branca.element as elem
from selenium import webdriver
import base64
def test_create_empty_iframe():
iframe = elem.IFrame()
iframe.render()
def test_create_iframe():
iframe = elem.IFrame(html="<p>test content<p>",width=60,height=45)
iframe.render()
def test_rendering_utf8_iframe():
iframe = elem.IFrame(html="<p>Cerrahpaşa Tıp Fakültesi</p>")
driver = webdriver.PhantomJS()
driver.get("data:text/html," + iframe.render())
driver.switch_to.frame(0);
assert "Cerrahpaşa Tıp Fakültesi" in driver.page_source
|
<commit_before><commit_msg>Add test of IFrame element, creation and rendering<commit_after># -*- coding: utf-8 -*-
""""
Folium Element Module class IFrame
----------------------
"""
import branca.element as elem
from selenium import webdriver
import base64
def test_create_empty_iframe():
iframe = elem.IFrame()
iframe.render()
def test_create_iframe():
iframe = elem.IFrame(html="<p>test content<p>",width=60,height=45)
iframe.render()
def test_rendering_utf8_iframe():
iframe = elem.IFrame(html="<p>Cerrahpaşa Tıp Fakültesi</p>")
driver = webdriver.PhantomJS()
driver.get("data:text/html," + iframe.render())
driver.switch_to.frame(0)
assert "Cerrahpaşa Tıp Fakültesi" in driver.page_source
|
|
70c28aeee77b49129a7121c2d7eaf1f1fbb3b3ac
|
manipulate.py
|
manipulate.py
|
#!/usr/bin/python -i
import fore.action as action
from echonest.remix.audio import LocalAudioFile
print("You can load MP3 files with LocalAudioFile()")
print("and manipulate them using the functions in action")
print("To save a modified file: save(f, fn)")
def save(f, fn):
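# Render f's audio data out to the file fn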
action.audition_render([f.data], fn)
|
Add script to ease manual testing and manipulation
|
Add script to ease manual testing and manipulation
|
Python
|
artistic-2.0
|
Rosuav/appension,Rosuav/appension,MikeiLL/appension,Rosuav/appension,MikeiLL/appension,Rosuav/appension,MikeiLL/appension,MikeiLL/appension
|
Add script to ease manual testing and manipulation
|
#!/usr/bin/python -i
import fore.action as action
from echonest.remix.audio import LocalAudioFile
print("You can load MP3 files with LocalAudioFile()")
print("and manipulate them using the functions in action")
print("To save a modified file: save(f, fn)")
def save(f, fn):
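# Render f's audio data out to the file fn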
action.audition_render([f.data], fn)
|
<commit_before><commit_msg>Add script to ease manual testing and manipulation<commit_after>
|
#!/usr/bin/python -i
import fore.action as action
from echonest.remix.audio import LocalAudioFile
print("You can load MP3 files with LocalAudioFile()")
print("and manipulate them using the functions in action")
print("To save a modified file: save(f, fn)")
def save(f, fn):
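# Render f's audio data out to the file fn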
action.audition_render([f.data], fn)
|
Add script to ease manual testing and manipulation#!/usr/bin/python -i
import fore.action as action
from echonest.remix.audio import LocalAudioFile
print("You can load MP3 files with LocalAudioFile()")
print("and manipulate them using the functions in action")
print("To save a modified file: save(f, fn)")
def save(f, fn):
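# Render f's audio data out to the file fn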
action.audition_render([f.data], fn)
|
<commit_before><commit_msg>Add script to ease manual testing and manipulation<commit_after>#!/usr/bin/python -i
import fore.action as action
from echonest.remix.audio import LocalAudioFile
print("You can load MP3 files with LocalAudioFile()")
print("and manipulate them using the functions in action")
print("To save a modified file: save(f, fn)")
def save(f, fn):
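# Render f's audio data out to the file fn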
action.audition_render([f.data], fn)
|
|
4e9b275cba14dff091d0a10be34b07cdd67bff9e
|
examples/edge_test.py
|
examples/edge_test.py
|
"""
This test is only for Microsoft Edge (Chromium)!
"""
from seleniumbase import BaseCase
class EdgeTestClass(BaseCase):
def test_edge(self):
if self.browser != "edge":
print("\n This test is only for Microsoft Edge (Chromium)!")
print(" (Run with: '--browser=edge')")
self.skipTest("This test is only for Microsoft Edge (Chromium)! ")
self.open("edge://settings/help")
self.assert_element('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div')
self.highlight('div[role="main"] div div div + div > div')
self.highlight('img[alt="Edge logo"]')
self.highlight('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div > div + div')
self.highlight('div[role="main"] div div div + div > div + div + div')
|
Add test for EdgeDriver and Edge (Chromium) Browser
|
Add test for EdgeDriver and Edge (Chromium) Browser
|
Python
|
mit
|
mdmintz/SeleniumBase,mdmintz/SeleniumBase,seleniumbase/SeleniumBase,seleniumbase/SeleniumBase,seleniumbase/SeleniumBase,mdmintz/SeleniumBase,seleniumbase/SeleniumBase,mdmintz/SeleniumBase
|
Add test for EdgeDriver and Edge (Chromium) Browser
|
"""
This test is only for Microsoft Edge (Chromium)!
"""
from seleniumbase import BaseCase
class EdgeTestClass(BaseCase):
def test_edge(self):
if self.browser != "edge":
print("\n This test is only for Microsoft Edge (Chromium)!")
print(" (Run with: '--browser=edge')")
self.skipTest("This test is only for Microsoft Edge (Chromium)! ")
self.open("edge://settings/help")
self.assert_element('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div')
self.highlight('div[role="main"] div div div + div > div')
self.highlight('img[alt="Edge logo"]')
self.highlight('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div > div + div')
self.highlight('div[role="main"] div div div + div > div + div + div')
|
<commit_before><commit_msg>Add test for EdgeDriver and Edge (Chromium) Browser<commit_after>
|
"""
This test is only for Microsoft Edge (Chromium)!
"""
from seleniumbase import BaseCase
class EdgeTestClass(BaseCase):
def test_edge(self):
if self.browser != "edge":
print("\n This test is only for Microsoft Edge (Chromium)!")
print(" (Run with: '--browser=edge')")
self.skipTest("This test is only for Microsoft Edge (Chromium)! ")
self.open("edge://settings/help")
self.assert_element('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div')
self.highlight('div[role="main"] div div div + div > div')
self.highlight('img[alt="Edge logo"]')
self.highlight('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div > div + div')
self.highlight('div[role="main"] div div div + div > div + div + div')
|
Add test for EdgeDriver and Edge (Chromium) Browser"""
This test is only for Microsoft Edge (Chromium)!
"""
from seleniumbase import BaseCase
class EdgeTestClass(BaseCase):
def test_edge(self):
if self.browser != "edge":
print("\n This test is only for Microsoft Edge (Chromium)!")
print(" (Run with: '--browser=edge')")
self.skipTest("This test is only for Microsoft Edge (Chromium)! ")
self.open("edge://settings/help")
self.assert_element('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div')
self.highlight('div[role="main"] div div div + div > div')
self.highlight('img[alt="Edge logo"]')
self.highlight('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div > div + div')
self.highlight('div[role="main"] div div div + div > div + div + div')
|
<commit_before><commit_msg>Add test for EdgeDriver and Edge (Chromium) Browser<commit_after>"""
This test is only for Microsoft Edge (Chromium)!
"""
from seleniumbase import BaseCase
class EdgeTestClass(BaseCase):
def test_edge(self):
if self.browser != "edge":
print("\n This test is only for Microsoft Edge (Chromium)!")
print(" (Run with: '--browser=edge')")
self.skipTest("This test is only for Microsoft Edge (Chromium)! ")
self.open("edge://settings/help")
self.assert_element('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div')
self.highlight('div[role="main"] div div div + div > div')
self.highlight('img[alt="Edge logo"]')
self.highlight('img[alt="Edge logo"] + span')
self.highlight('div[role="main"] div div div + div > div + div')
self.highlight('div[role="main"] div div div + div > div + div + div')
|
|
732206b8dc5fc8184ed5765ddb689e21da89ee26
|
include/pdg.py
|
include/pdg.py
|
#!/usr/bin/env python3
import urllib.request
import sqlite3
### PDG file containing particle info
pdgurl = 'http://pdg.lbl.gov/2012/mcdata/mass_width_2012.mcd'
# format
# 1 contains either "M" or "W" indicating mass or width
# 2 - 9 \ Monte Carlo particle numbers as described in the "Review of
# 10 - 17 | Particle Physics". Charge states appear, as appropriate,
# 18 - 25 | from left-to-right in the order -, 0, +, ++.
# 26 - 33 /
# 34 blank
# 35 - 49 central value of the mass or width (double precision)
# 50 blank
# 51 - 58 positive error
# 59 blank
# 60 - 67 negative error
# 68 blank
# 69 - 89 particle name left-justified in the field and
# charge states right-justified in the field.
# This field is for ease of visual examination of the file and
# should not be taken as a standardized presentation of
# particle names.
def particles():
with urllib.request.urlopen(pdgurl) as f:
for l in f:
# mass lines only
if l[0:1] == b'M':
# extract ID(s)
ID = []
for i in [2,10,18,26]:
j = l[i:i+7]
if j.strip():
ID.append(int(j))
# mass
mass = float(l[35:49])
# name and charge(s)
name,charge = l[68:89].decode().split()
charge = charge.split(',')
### unnecessary...
## convert charge(s) to numbers
#charge = [{'-': -1,
# '-1/3': float(-1/3),
# '0': 0,
# '+2/3': float(2/3),
# '+': 1,
# '++': 2}[x] for x in charge.split(',')]
# treat each ID/charge as a separate particle
for k in range(len(ID)):
yield ID[k],mass,name,charge[k]
def main():
conn = sqlite3.connect('particles.db')
c = conn.cursor()
c.execute('drop table if exists particles')
c.execute('create table particles (id INT,mass DOUBLE,name VARCHAR(30),charge VARCHAR(5))')
c.executemany('insert into particles (id,mass,name,charge) values (?,?,?,?)', particles())
conn.commit()
c.close()
if __name__ == "__main__":
main()
|
Add script to fetch PDG particle info and store in sqlite.
|
Add script to fetch PDG particle info and store in sqlite.
|
Python
|
mit
|
jbernhard/ebe-analysis
|
Add script to fetch PDG particle info and store in sqlite.
|
#!/usr/bin/env python3
import urllib.request
import sqlite3
### PDG file containing particle info
pdgurl = 'http://pdg.lbl.gov/2012/mcdata/mass_width_2012.mcd'
# format
# 1 contains either "M" or "W" indicating mass or width
# 2 - 9 \ Monte Carlo particle numbers as described in the "Review of
# 10 - 17 | Particle Physics". Charge states appear, as appropriate,
# 18 - 25 | from left-to-right in the order -, 0, +, ++.
# 26 - 33 /
# 34 blank
# 35 - 49 central value of the mass or width (double precision)
# 50 blank
# 51 - 58 positive error
# 59 blank
# 60 - 67 negative error
# 68 blank
# 69 - 89 particle name left-justified in the field and
# charge states right-justified in the field.
# This field is for ease of visual examination of the file and
# should not be taken as a standardized presentation of
# particle names.
def particles():
with urllib.request.urlopen(pdgurl) as f:
for l in f:
# mass lines only
if l[0:1] == b'M':
# extract ID(s)
ID = []
for i in [2,10,18,26]:
j = l[i:i+7]
if j.strip():
ID.append(int(j))
# mass
mass = float(l[35:49])
# name and charge(s)
name,charge = l[68:89].decode().split()
charge = charge.split(',')
### unnecessary...
## convert charge(s) to numbers
#charge = [{'-': -1,
# '-1/3': float(-1/3),
# '0': 0,
# '+2/3': float(2/3),
# '+': 1,
# '++': 2}[x] for x in charge.split(',')]
# treat each ID/charge as a separate particle
for k in range(len(ID)):
yield ID[k],mass,name,charge[k]
def main():
conn = sqlite3.connect('particles.db')
c = conn.cursor()
c.execute('drop table if exists particles')
c.execute('create table particles (id INT,mass DOUBLE,name VARCHAR(30),charge VARCHAR(5))')
c.executemany('insert into particles (id,mass,name,charge) values (?,?,?,?)', particles())
conn.commit()
c.close()
if __name__ == "__main__":
main()
|
<commit_before><commit_msg>Add script to fetch PDG particle info and store in sqlite.<commit_after>
|
#!/usr/bin/env python3
import urllib.request
import sqlite3
### PDG file containing particle info
pdgurl = 'http://pdg.lbl.gov/2012/mcdata/mass_width_2012.mcd'
# format
# 1 contains either "M" or "W" indicating mass or width
# 2 - 9 \ Monte Carlo particle numbers as described in the "Review of
# 10 - 17 | Particle Physics". Charge states appear, as appropriate,
# 18 - 25 | from left-to-right in the order -, 0, +, ++.
# 26 - 33 /
# 34 blank
# 35 - 49 central value of the mass or width (double precision)
# 50 blank
# 51 - 58 positive error
# 59 blank
# 60 - 67 negative error
# 68 blank
# 69 - 89 particle name left-justified in the field and
# charge states right-justified in the field.
# This field is for ease of visual examination of the file and
# should not be taken as a standardized presentation of
# particle names.
def particles():
with urllib.request.urlopen(pdgurl) as f:
for l in f:
# mass lines only
if l[0:1] == b'M':
# extract ID(s)
ID = []
for i in [2,10,18,26]:
j = l[i:i+7]
if j.strip():
ID.append(int(j))
# mass
mass = float(l[35:49])
# name and charge(s)
name,charge = l[68:89].decode().split()
charge = charge.split(',')
### unnecessary...
## convert charge(s) to numbers
#charge = [{'-': -1,
# '-1/3': float(-1/3),
# '0': 0,
# '+2/3': float(2/3),
# '+': 1,
# '++': 2}[x] for x in charge.split(',')]
# treat each ID/charge as a separate particle
for k in range(len(ID)):
yield ID[k],mass,name,charge[k]
def main():
conn = sqlite3.connect('particles.db')
c = conn.cursor()
c.execute('drop table if exists particles')
c.execute('create table particles (id INT,mass DOUBLE,name VARCHAR(30),charge VARCHAR(5))')
c.executemany('insert into particles (id,mass,name,charge) values (?,?,?,?)', particles())
conn.commit()
c.close()
if __name__ == "__main__":
main()
|
Add script to fetch PDG particle info and store in sqlite.#!/usr/bin/env python3
import urllib.request
import sqlite3
### PDG file containing particle info
pdgurl = 'http://pdg.lbl.gov/2012/mcdata/mass_width_2012.mcd'
# format
# 1 contains either "M" or "W" indicating mass or width
# 2 - 9 \ Monte Carlo particle numbers as described in the "Review of
# 10 - 17 | Particle Physics". Charge states appear, as appropriate,
# 18 - 25 | from left-to-right in the order -, 0, +, ++.
# 26 - 33 /
# 34 blank
# 35 - 49 central value of the mass or width (double precision)
# 50 blank
# 51 - 58 positive error
# 59 blank
# 60 - 67 negative error
# 68 blank
# 69 - 89 particle name left-justified in the field and
# charge states right-justified in the field.
# This field is for ease of visual examination of the file and
# should not be taken as a standardized presentation of
# particle names.
def particles():
with urllib.request.urlopen(pdgurl) as f:
for l in f:
# mass lines only
if l[0:1] == b'M':
# extract ID(s)
ID = []
for i in [2,10,18,26]:
j = l[i:i+7]
if j.strip():
ID.append(int(j))
# mass
mass = float(l[35:49])
# name and charge(s)
name,charge = l[68:89].decode().split()
charge = charge.split(',')
### unnecessary...
## convert charge(s) to numbers
#charge = [{'-': -1,
# '-1/3': float(-1/3),
# '0': 0,
# '+2/3': float(2/3),
# '+': 1,
# '++': 2}[x] for x in charge.split(',')]
# treat each ID/charge as a separate particle
for k in range(len(ID)):
yield ID[k],mass,name,charge[k]
def main():
conn = sqlite3.connect('particles.db')
c = conn.cursor()
c.execute('drop table if exists particles')
c.execute('create table particles (id INT,mass DOUBLE,name VARCHAR(30),charge VARCHAR(5))')
c.executemany('insert into particles (id,mass,name,charge) values (?,?,?,?)', particles())
conn.commit()
c.close()
if __name__ == "__main__":
main()
|
<commit_before><commit_msg>Add script to fetch PDG particle info and store in sqlite.<commit_after>#!/usr/bin/env python3
import urllib.request
import sqlite3
### PDG file containing particle info
pdgurl = 'http://pdg.lbl.gov/2012/mcdata/mass_width_2012.mcd'
# format
# 1 contains either "M" or "W" indicating mass or width
# 2 - 9 \ Monte Carlo particle numbers as described in the "Review of
# 10 - 17 | Particle Physics". Charge states appear, as appropriate,
# 18 - 25 | from left-to-right in the order -, 0, +, ++.
# 26 - 33 /
# 34 blank
# 35 - 49 central value of the mass or width (double precision)
# 50 blank
# 51 - 58 positive error
# 59 blank
# 60 - 67 negative error
# 68 blank
# 69 - 89 particle name left-justified in the field and
# charge states right-justified in the field.
# This field is for ease of visual examination of the file and
# should not be taken as a standardized presentation of
# particle names.
def particles():
with urllib.request.urlopen(pdgurl) as f:
for l in f:
# mass lines only
if l[0:1] == b'M':
# extract ID(s)
ID = []
for i in [2,10,18,26]:
j = l[i:i+7]
if j.strip():
ID.append(int(j))
# mass
mass = float(l[35:49])
# name and charge(s)
name,charge = l[68:89].decode().split()
charge = charge.split(',')
### unnecessary...
## convert charge(s) to numbers
#charge = [{'-': -1,
# '-1/3': float(-1/3),
# '0': 0,
# '+2/3': float(2/3),
# '+': 1,
# '++': 2}[x] for x in charge.split(',')]
# treat each ID/charge as a separate particle
for k in range(len(ID)):
yield ID[k],mass,name,charge[k]
def main():
conn = sqlite3.connect('particles.db')
c = conn.cursor()
c.execute('drop table if exists particles')
c.execute('create table particles (id INT,mass DOUBLE,name VARCHAR(30),charge VARCHAR(5))')
c.executemany('insert into particles (id,mass,name,charge) values (?,?,?,?)', particles())
conn.commit()
c.close()
if __name__ == "__main__":
main()
|
|
2cbc977e71acded15cc66bea9e36f02ce3934eb8
|
real_estate_agency/resale/migrations/0004_ordering_by_id.py
|
real_estate_agency/resale/migrations/0004_ordering_by_id.py
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-09-11 21:35
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('resale', '0003_change_coords'),
]
operations = [
migrations.AlterModelOptions(
name='resaleapartment',
options={'ordering': ('-id',), 'permissions': (('can_add_change_delete_all_resale', 'Имеет доступ к чужим данным по вторичке'),), 'verbose_name': 'объект вторичка', 'verbose_name_plural': 'объекты вторички'},
),
]
|
Add skipped migration in resale app connected with ordering.
|
Add skipped migration in resale app connected with ordering.
|
Python
|
mit
|
Dybov/real_estate_agency,Dybov/real_estate_agency,Dybov/real_estate_agency
|
Add skipped migration in resale app connected with ordering.
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-09-11 21:35
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('resale', '0003_change_coords'),
]
operations = [
migrations.AlterModelOptions(
name='resaleapartment',
options={'ordering': ('-id',), 'permissions': (('can_add_change_delete_all_resale', 'Имеет доступ к чужим данным по вторичке'),), 'verbose_name': 'объект вторичка', 'verbose_name_plural': 'объекты вторички'},
),
]
|
<commit_before><commit_msg>Add skipped migration in resale app connected with ordering.<commit_after>
|
# -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-09-11 21:35
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('resale', '0003_change_coords'),
]
operations = [
migrations.AlterModelOptions(
name='resaleapartment',
options={'ordering': ('-id',), 'permissions': (('can_add_change_delete_all_resale', 'Имеет доступ к чужим данным по вторичке'),), 'verbose_name': 'объект вторичка', 'verbose_name_plural': 'объекты вторички'},
),
]
|
Add skipped migration in resale app connected with ordering.# -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-09-11 21:35
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('resale', '0003_change_coords'),
]
operations = [
migrations.AlterModelOptions(
name='resaleapartment',
options={'ordering': ('-id',), 'permissions': (('can_add_change_delete_all_resale', 'Имеет доступ к чужим данным по вторичке'),), 'verbose_name': 'объект вторичка', 'verbose_name_plural': 'объекты вторички'},
),
]
|
<commit_before><commit_msg>Add skipped migration in resale app connected with ordering.<commit_after># -*- coding: utf-8 -*-
# Generated by Django 1.11 on 2018-09-11 21:35
from __future__ import unicode_literals
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('resale', '0003_change_coords'),
]
operations = [
migrations.AlterModelOptions(
name='resaleapartment',
options={'ordering': ('-id',), 'permissions': (('can_add_change_delete_all_resale', 'Имеет доступ к чужим данным по вторичке'),), 'verbose_name': 'объект вторичка', 'verbose_name_plural': 'объекты вторички'},
),
]
|
|
082787e5b2bd7f1c6dd98d05a1893ddecc5967d7
|
monasca_common/tests/test_rest.py
|
monasca_common/tests/test_rest.py
|
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import unittest
from monasca_common.rest.exceptions import DataConversionException
from monasca_common.rest.exceptions import UnreadableContentError
from monasca_common.rest.exceptions import UnsupportedContentTypeException
from monasca_common.rest import utils
class TestRestUtils(unittest.TestCase):
def setUp(self):
self.mock_json_patcher = mock.patch('monasca_common.rest.utils.json')
self.mock_json = self.mock_json_patcher.start()
def tearDown(self):
self.mock_json_patcher.stop()
def test_read_body_with_success(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
utils.read_body(payload)
self.mock_json.loads.assert_called_once_with(payload.read.return_value)
def test_read_body_empty_content_in_payload(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
payload.read.return_value = None
self.assertIsNone(utils.read_body(payload))
def test_read_body_json_loads_exception(self):
self.mock_json.loads.side_effect = Exception
payload = mock.Mock()
self.assertRaises(DataConversionException, utils.read_body, payload)
def test_read_body_unsupported_content_type(self):
unsupported_content_type = mock.Mock()
self.assertRaises(
UnsupportedContentTypeException, utils.read_body, None,
unsupported_content_type)
def test_read_body_unreadable_content_error(self):
unreadable_content = mock.Mock()
unreadable_content.read.side_effect = Exception
self.assertRaises(
UnreadableContentError, utils.read_body, unreadable_content)
def test_as_json_success(self):
data = mock.Mock()
dumped_json = utils.as_json(data)
self.assertEqual(dumped_json, self.mock_json.dumps.return_value)
def test_as_json_with_exception(self):
data = mock.Mock()
self.mock_json.dumps.side_effect = Exception
self.assertRaises(DataConversionException, utils.as_json, data)
|
Add unit tests for common rest.utils module
|
Add unit tests for common rest.utils module
Add unit tests for common json conversions and exception handling.
Change-Id: I71ab992bb09114f4fd9c9752820ab57b4c51577a
|
Python
|
apache-2.0
|
openstack/monasca-common,openstack/monasca-common,stackforge/monasca-common,stackforge/monasca-common,stackforge/monasca-common,openstack/monasca-common
|
Add unit tests for common rest.utils module
Add unit tests for common json conversions and exception handling.
Change-Id: I71ab992bb09114f4fd9c9752820ab57b4c51577a
|
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import unittest
from monasca_common.rest.exceptions import DataConversionException
from monasca_common.rest.exceptions import UnreadableContentError
from monasca_common.rest.exceptions import UnsupportedContentTypeException
from monasca_common.rest import utils
class TestRestUtils(unittest.TestCase):
def setUp(self):
self.mock_json_patcher = mock.patch('monasca_common.rest.utils.json')
self.mock_json = self.mock_json_patcher.start()
def tearDown(self):
self.mock_json_patcher.stop()
def test_read_body_with_success(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
utils.read_body(payload)
self.mock_json.loads.assert_called_once_with(payload.read.return_value)
def test_read_body_empty_content_in_payload(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
payload.read.return_value = None
self.assertIsNone(utils.read_body(payload))
def test_read_body_json_loads_exception(self):
self.mock_json.loads.side_effect = Exception
payload = mock.Mock()
self.assertRaises(DataConversionException, utils.read_body, payload)
def test_read_body_unsupported_content_type(self):
unsupported_content_type = mock.Mock()
self.assertRaises(
UnsupportedContentTypeException, utils.read_body, None,
unsupported_content_type)
def test_read_body_unreadable_content_error(self):
unreadable_content = mock.Mock()
unreadable_content.read.side_effect = Exception
self.assertRaises(
UnreadableContentError, utils.read_body, unreadable_content)
def test_as_json_success(self):
data = mock.Mock()
dumped_json = utils.as_json(data)
self.assertEqual(dumped_json, self.mock_json.dumps.return_value)
def test_as_json_with_exception(self):
data = mock.Mock()
self.mock_json.dumps.side_effect = Exception
self.assertRaises(DataConversionException, utils.as_json, data)
|
<commit_before><commit_msg>Add unit tests for common rest.utils module
Add unit tests for common json conversions and exception handling.
Change-Id: I71ab992bb09114f4fd9c9752820ab57b4c51577a<commit_after>
|
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import unittest
from monasca_common.rest.exceptions import DataConversionException
from monasca_common.rest.exceptions import UnreadableContentError
from monasca_common.rest.exceptions import UnsupportedContentTypeException
from monasca_common.rest import utils
class TestRestUtils(unittest.TestCase):
def setUp(self):
self.mock_json_patcher = mock.patch('monasca_common.rest.utils.json')
self.mock_json = self.mock_json_patcher.start()
def tearDown(self):
self.mock_json_patcher.stop()
def test_read_body_with_success(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
utils.read_body(payload)
self.mock_json.loads.assert_called_once_with(payload.read.return_value)
def test_read_body_empty_content_in_payload(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
payload.read.return_value = None
self.assertIsNone(utils.read_body(payload))
def test_read_body_json_loads_exception(self):
self.mock_json.loads.side_effect = Exception
payload = mock.Mock()
self.assertRaises(DataConversionException, utils.read_body, payload)
def test_read_body_unsupported_content_type(self):
unsupported_content_type = mock.Mock()
self.assertRaises(
UnsupportedContentTypeException, utils.read_body, None,
unsupported_content_type)
def test_read_body_unreadable_content_error(self):
unreadable_content = mock.Mock()
unreadable_content.read.side_effect = Exception
self.assertRaises(
UnreadableContentError, utils.read_body, unreadable_content)
def test_as_json_success(self):
data = mock.Mock()
dumped_json = utils.as_json(data)
self.assertEqual(dumped_json, self.mock_json.dumps.return_value)
def test_as_json_with_exception(self):
data = mock.Mock()
self.mock_json.dumps.side_effect = Exception
self.assertRaises(DataConversionException, utils.as_json, data)
|
Add unit tests for common rest.utils module
Add unit tests for common json conversions and exception handling.
Change-Id: I71ab992bb09114f4fd9c9752820ab57b4c51577a# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import unittest
from monasca_common.rest.exceptions import DataConversionException
from monasca_common.rest.exceptions import UnreadableContentError
from monasca_common.rest.exceptions import UnsupportedContentTypeException
from monasca_common.rest import utils
class TestRestUtils(unittest.TestCase):
def setUp(self):
self.mock_json_patcher = mock.patch('monasca_common.rest.utils.json')
self.mock_json = self.mock_json_patcher.start()
def tearDown(self):
self.mock_json_patcher.stop()
def test_read_body_with_success(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
utils.read_body(payload)
self.mock_json.loads.assert_called_once_with(payload.read.return_value)
def test_read_body_empty_content_in_payload(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
payload.read.return_value = None
self.assertIsNone(utils.read_body(payload))
def test_read_body_json_loads_exception(self):
self.mock_json.loads.side_effect = Exception
payload = mock.Mock()
self.assertRaises(DataConversionException, utils.read_body, payload)
def test_read_body_unsupported_content_type(self):
unsupported_content_type = mock.Mock()
self.assertRaises(
UnsupportedContentTypeException, utils.read_body, None,
unsupported_content_type)
def test_read_body_unreadable_content_error(self):
unreadable_content = mock.Mock()
unreadable_content.read.side_effect = Exception
self.assertRaises(
UnreadableContentError, utils.read_body, unreadable_content)
def test_as_json_success(self):
data = mock.Mock()
dumped_json = utils.as_json(data)
self.assertEqual(dumped_json, self.mock_json.dumps.return_value)
def test_as_json_with_exception(self):
data = mock.Mock()
self.mock_json.dumps.side_effect = Exception
self.assertRaises(DataConversionException, utils.as_json, data)
|
<commit_before><commit_msg>Add unit tests for common rest.utils module
Add unit tests for common json conversions and exception handling.
Change-Id: I71ab992bb09114f4fd9c9752820ab57b4c51577a<commit_after># Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import unittest
from monasca_common.rest.exceptions import DataConversionException
from monasca_common.rest.exceptions import UnreadableContentError
from monasca_common.rest.exceptions import UnsupportedContentTypeException
from monasca_common.rest import utils
class TestRestUtils(unittest.TestCase):
def setUp(self):
self.mock_json_patcher = mock.patch('monasca_common.rest.utils.json')
self.mock_json = self.mock_json_patcher.start()
def tearDown(self):
self.mock_json_patcher.stop()
def test_read_body_with_success(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
utils.read_body(payload)
self.mock_json.loads.assert_called_once_with(payload.read.return_value)
def test_read_body_empty_content_in_payload(self):
self.mock_json.loads.return_value = ""
payload = mock.Mock()
payload.read.return_value = None
self.assertIsNone(utils.read_body(payload))
def test_read_body_json_loads_exception(self):
self.mock_json.loads.side_effect = Exception
payload = mock.Mock()
self.assertRaises(DataConversionException, utils.read_body, payload)
def test_read_body_unsupported_content_type(self):
unsupported_content_type = mock.Mock()
self.assertRaises(
UnsupportedContentTypeException, utils.read_body, None,
unsupported_content_type)
def test_read_body_unreadable_content_error(self):
unreadable_content = mock.Mock()
unreadable_content.read.side_effect = Exception
self.assertRaises(
UnreadableContentError, utils.read_body, unreadable_content)
def test_as_json_success(self):
data = mock.Mock()
dumped_json = utils.as_json(data)
self.assertEqual(dumped_json, self.mock_json.dumps.return_value)
def test_as_json_with_exception(self):
data = mock.Mock()
self.mock_json.dumps.side_effect = Exception
self.assertRaises(DataConversionException, utils.as_json, data)
|
|
3aef269fc27a54bfb1171c1a8876f8684db8920c
|
test_dns.py
|
test_dns.py
|
import os
import unittest
ENV_KEY = 'DREAMHOST_API_KEY'
PUBLIC_DREAMHOST_KEY = '6SHU5P2HLDAYECUM'
# Set the public DreamHost API key before importing the client, since it will
# try to get the value of os.environ[ENV_KEY] immediately.
previous_api_key = os.environ.get(ENV_KEY, None)
os.environ[ENV_KEY] = PUBLIC_DREAMHOST_KEY
import dns
# Once the import is complete, we can put the old key back, if necessary.
if previous_api_key:
os.environ[ENV_KEY] = previous_api_key
class DNSTest(unittest.TestCase):
def test_list(self):
self.assertIsNotNone(dns.list())
def test_get(self):
expected = {
"zone": "groo.com",
"value": "173.236.152.210",
"account_id": "1",
"record": "groo.com",
"comment": "",
"editable": "0",
"type": "A",
}
self.assertEqual(dns.get("groo.com"), expected)
|
Add initial unit tests for read-only functions
|
Add initial unit tests for read-only functions
|
Python
|
mit
|
eallrich/dreamdns
|
Add initial unit tests for read-only functions
|
import os
import unittest
ENV_KEY = 'DREAMHOST_API_KEY'
PUBLIC_DREAMHOST_KEY = '6SHU5P2HLDAYECUM'
# Set the public DreamHost API key before importing the client, since it will
# try to get the value of os.environ[ENV_KEY] immediately.
previous_api_key = os.environ.get(ENV_KEY, None)
os.environ[ENV_KEY] = PUBLIC_DREAMHOST_KEY
import dns
# Once the import is complete, we can put the old key back, if necessary.
if previous_api_key:
os.environ[ENV_KEY] = previous_api_key
class DNSTest(unittest.TestCase):
def test_list(self):
self.assertIsNotNone(dns.list())
def test_get(self):
expected = {
"zone": "groo.com",
"value": "173.236.152.210",
"account_id": "1",
"record": "groo.com",
"comment": "",
"editable": "0",
"type": "A",
}
self.assertEqual(dns.get("groo.com"), expected)
|
<commit_before><commit_msg>Add initial unit tests for read-only functions<commit_after>
|
import os
import unittest
ENV_KEY = 'DREAMHOST_API_KEY'
PUBLIC_DREAMHOST_KEY = '6SHU5P2HLDAYECUM'
# Set the public DreamHost API key before importing the client, since it will
# try to get the value of os.environ[ENV_KEY] immediately.
previous_api_key = os.environ.get(ENV_KEY, None)
os.environ[ENV_KEY] = PUBLIC_DREAMHOST_KEY
import dns
# Once the import is complete, we can put the old key back, if necessary.
if previous_api_key:
os.environ[ENV_KEY] = previous_api_key
class DNSTest(unittest.TestCase):
def test_list(self):
self.assertIsNotNone(dns.list())
def test_get(self):
expected = {
"zone": "groo.com",
"value": "173.236.152.210",
"account_id": "1",
"record": "groo.com",
"comment": "",
"editable": "0",
"type": "A",
}
self.assertEqual(dns.get("groo.com"), expected)
|
Add initial unit tests for read-only functionsimport os
import unittest
ENV_KEY = 'DREAMHOST_API_KEY'
PUBLIC_DREAMHOST_KEY = '6SHU5P2HLDAYECUM'
# Set the public DreamHost API key before importing the client, since it will
# try to get the value of os.environ[ENV_KEY] immediately.
previous_api_key = os.environ.get(ENV_KEY, None)
os.environ[ENV_KEY] = PUBLIC_DREAMHOST_KEY
import dns
# Once the import is complete, we can put the old key back, if necessary.
if previous_api_key:
os.environ[ENV_KEY] = previous_api_key
class DNSTest(unittest.TestCase):
def test_list(self):
self.assertIsNotNone(dns.list())
def test_get(self):
expected = {
"zone": "groo.com",
"value": "173.236.152.210",
"account_id": "1",
"record": "groo.com",
"comment": "",
"editable": "0",
"type": "A",
}
self.assertEqual(dns.get("groo.com"), expected)
|
<commit_before><commit_msg>Add initial unit tests for read-only functions<commit_after>import os
import unittest
ENV_KEY = 'DREAMHOST_API_KEY'
PUBLIC_DREAMHOST_KEY = '6SHU5P2HLDAYECUM'
# Set the public DreamHost API key before importing the client, since it will
# try to get the value of os.environ[ENV_KEY] immediately.
previous_api_key = os.environ.get(ENV_KEY, None)
os.environ[ENV_KEY] = PUBLIC_DREAMHOST_KEY
import dns
# Once the import is complete, we can put the old key back, if necessary.
if previous_api_key:
os.environ[ENV_KEY] = previous_api_key
class DNSTest(unittest.TestCase):
def test_list(self):
self.assertIsNotNone(dns.list())
def test_get(self):
expected = {
"zone": "groo.com",
"value": "173.236.152.210",
"account_id": "1",
"record": "groo.com",
"comment": "",
"editable": "0",
"type": "A",
}
self.assertEqual(dns.get("groo.com"), expected)
|
|
762d995071403d277061fcfb4917636e47e8a194
|
froide/document/migrations/0022_auto_20200117_2101.py
|
froide/document/migrations/0022_auto_20200117_2101.py
|
# Generated by Django 2.2.4 on 2020-01-17 20:01
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('filingcabinet', '0016_auto_20200108_1228'),
('document', '0021_document_allow_annotation'),
]
operations = [
migrations.AddField(
model_name='document',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
migrations.AddField(
model_name='documentcollection',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
]
|
Add missing migration to document app
|
Add missing migration to document app
|
Python
|
mit
|
stefanw/froide,fin/froide,fin/froide,fin/froide,stefanw/froide,stefanw/froide,stefanw/froide,stefanw/froide,fin/froide
|
Add missing migration to document app
|
# Generated by Django 2.2.4 on 2020-01-17 20:01
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('filingcabinet', '0016_auto_20200108_1228'),
('document', '0021_document_allow_annotation'),
]
operations = [
migrations.AddField(
model_name='document',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
migrations.AddField(
model_name='documentcollection',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
]
|
<commit_before><commit_msg>Add missing migration to document app<commit_after>
|
# Generated by Django 2.2.4 on 2020-01-17 20:01
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('filingcabinet', '0016_auto_20200108_1228'),
('document', '0021_document_allow_annotation'),
]
operations = [
migrations.AddField(
model_name='document',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
migrations.AddField(
model_name='documentcollection',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
]
|
Add missing migration to document app# Generated by Django 2.2.4 on 2020-01-17 20:01
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('filingcabinet', '0016_auto_20200108_1228'),
('document', '0021_document_allow_annotation'),
]
operations = [
migrations.AddField(
model_name='document',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
migrations.AddField(
model_name='documentcollection',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
]
|
<commit_before><commit_msg>Add missing migration to document app<commit_after># Generated by Django 2.2.4 on 2020-01-17 20:01
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('filingcabinet', '0016_auto_20200108_1228'),
('document', '0021_document_allow_annotation'),
]
operations = [
migrations.AddField(
model_name='document',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
migrations.AddField(
model_name='documentcollection',
name='portal',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, to='filingcabinet.DocumentPortal'),
),
]
|
|
0afef99538ddcf09127a242b03a13986eb3a590e
|
django/website/contacts/migrations/0005_auto_20160621_1456.py
|
django/website/contacts/migrations/0005_auto_20160621_1456.py
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('contacts', '0004_auto_20160421_1645'),
]
operations = [
migrations.AlterField(
model_name='userpreferences',
name='user',
field=models.OneToOneField(related_name='_preferences', to=settings.AUTH_USER_MODEL),
),
]
|
Add migration to update related field name
|
Add migration to update related field name
|
Python
|
agpl-3.0
|
aptivate/kashana,aptivate/kashana,daniell/kashana,aptivate/alfie,aptivate/kashana,aptivate/alfie,daniell/kashana,daniell/kashana,daniell/kashana,aptivate/kashana,aptivate/alfie,aptivate/alfie
|
Add migration to update related field name
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('contacts', '0004_auto_20160421_1645'),
]
operations = [
migrations.AlterField(
model_name='userpreferences',
name='user',
field=models.OneToOneField(related_name='_preferences', to=settings.AUTH_USER_MODEL),
),
]
|
<commit_before><commit_msg>Add migration to update related field name<commit_after>
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('contacts', '0004_auto_20160421_1645'),
]
operations = [
migrations.AlterField(
model_name='userpreferences',
name='user',
field=models.OneToOneField(related_name='_preferences', to=settings.AUTH_USER_MODEL),
),
]
|
Add migration to update related field name# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('contacts', '0004_auto_20160421_1645'),
]
operations = [
migrations.AlterField(
model_name='userpreferences',
name='user',
field=models.OneToOneField(related_name='_preferences', to=settings.AUTH_USER_MODEL),
),
]
|
<commit_before><commit_msg>Add migration to update related field name<commit_after># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
('contacts', '0004_auto_20160421_1645'),
]
operations = [
migrations.AlterField(
model_name='userpreferences',
name='user',
field=models.OneToOneField(related_name='_preferences', to=settings.AUTH_USER_MODEL),
),
]
|
|
e32e52e576b142d62df4733386a5840c2f2c13ad
|
python/src/linkedListCycle/testLinkedListCycle.py
|
python/src/linkedListCycle/testLinkedListCycle.py
|
import unittest
from linkedListCycle import Solution, ListNode
class TestLinkedListCycle(unittest.TestCase):
def setUp(self):
self.solution = Solution()
def testNoNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(None), False)
class TestLinkedListCycleWithList(unittest.TestCase):
def setUp(self):
self.solution = Solution()
self.head = ListNode(42)
def testSingleNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
self.assertEqual(self.solution.hasCycle(self.head), False)
def testThreeNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesInATwoCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node2.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testThreeNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node3.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = node2
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAFourCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
if __name__ == '__main__':
unittest.main()
|
Add 8 test cases for linkedListCycle problem.
|
Add 8 test cases for linkedListCycle problem.
|
Python
|
mit
|
TheGhostHuCodes/leetCode
|
Add 8 test cases for linkedListCycle problem.
|
import unittest
from linkedListCycle import Solution, ListNode
class TestLinkedListCycle(unittest.TestCase):
def setUp(self):
self.solution = Solution()
def testNoNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(None), False)
class TestLinkedListCycleWithList(unittest.TestCase):
def setUp(self):
self.solution = Solution()
self.head = ListNode(42)
def testSingleNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
self.assertEqual(self.solution.hasCycle(self.head), False)
def testThreeNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesInATwoCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node2.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testThreeNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node3.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = node2
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAFourCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add 8 test cases for linkedListCycle problem.<commit_after>
|
import unittest
from linkedListCycle import Solution, ListNode
class TestLinkedListCycle(unittest.TestCase):
def setUp(self):
self.solution = Solution()
def testNoNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(None), False)
class TestLinkedListCycleWithList(unittest.TestCase):
def setUp(self):
self.solution = Solution()
self.head = ListNode(42)
def testSingleNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
self.assertEqual(self.solution.hasCycle(self.head), False)
def testThreeNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesInATwoCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node2.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testThreeNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node3.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = node2
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAFourCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
if __name__ == '__main__':
unittest.main()
|
Add 8 test cases for linkedListCycle problem.import unittest
from linkedListCycle import Solution, ListNode
class TestLinkedListCycle(unittest.TestCase):
def setUp(self):
self.solution = Solution()
def testNoNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(None), False)
class TestLinkedListCycleWithList(unittest.TestCase):
def setUp(self):
self.solution = Solution()
self.head = ListNode(42)
def testSingleNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
self.assertEqual(self.solution.hasCycle(self.head), False)
def testThreeNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesInATwoCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node2.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testThreeNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node3.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = node2
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAFourCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
if __name__ == '__main__':
unittest.main()
|
<commit_before><commit_msg>Add 8 test cases for linkedListCycle problem.<commit_after>import unittest
from linkedListCycle import Solution, ListNode
class TestLinkedListCycle(unittest.TestCase):
def setUp(self):
self.solution = Solution()
def testNoNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(None), False)
class TestLinkedListCycleWithList(unittest.TestCase):
def setUp(self):
self.solution = Solution()
self.head = ListNode(42)
def testSingleNodeReturnsFalse(self):
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
self.assertEqual(self.solution.hasCycle(self.head), False)
def testThreeNodesReturnsFalse(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
self.assertEqual(self.solution.hasCycle(self.head), False)
def testTwoNodesInATwoCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node2.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testThreeNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node3.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAThreeCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = node2
self.assertEqual(self.solution.hasCycle(self.head), True)
def testFourNodesInAFourCycleReturnsTrue(self):
node2 = ListNode(43)
self.head.next = node2
node3 = ListNode(44)
node2.next = node3
node4 = ListNode(45)
node3.next = node4
node4.next = self.head
self.assertEqual(self.solution.hasCycle(self.head), True)
if __name__ == '__main__':
unittest.main()
|
|
c3d6e3f38096bec6ef692e33165a6d25ccf06bd2
|
paper_to_git/commands/update_command.py
|
paper_to_git/commands/update_command.py
|
"""
"""
from paper_to_git.commands.base import BaseCommand
__all__ = [
'UpdateCommand',
]
class UpdateCommand(BaseCommand):
"""Pull the list of Paper docs and update the database."""
name = 'update'
def add(self, parser, command_parser):
self.parser = parser
def process(self, args):
print("Pulling the list of paper docs...")
|
Add a sample update command
|
Add a sample update command
|
Python
|
apache-2.0
|
maxking/paper-to-git,maxking/paper-to-git
|
Add a sample update command
|
"""
"""
from paper_to_git.commands.base import BaseCommand
__all__ = [
'UpdateCommand',
]
class UpdateCommand(BaseCommand):
"""Pull the list of Paper docs and update the database."""
name = 'update'
def add(self, parser, command_parser):
self.parser = parser
def process(self, args):
print("Pulling the list of paper docs...")
|
<commit_before><commit_msg>Add a sample update command<commit_after>
|
"""
"""
from paper_to_git.commands.base import BaseCommand
__all__ = [
'UpdateCommand',
]
class UpdateCommand(BaseCommand):
"""Pull the list of Paper docs and update the database."""
name = 'update'
def add(self, parser, command_parser):
self.parser = parser
def process(self, args):
print("Pulling the list of paper docs...")
|
Add a sample update command"""
"""
from paper_to_git.commands.base import BaseCommand
__all__ = [
'UpdateCommand',
]
class UpdateCommand(BaseCommand):
"""Pull the list of Paper docs and update the database."""
name = 'update'
def add(self, parser, command_parser):
self.parser = parser
def process(self, args):
print("Pulling the list of paper docs...")
|
<commit_before><commit_msg>Add a sample update command<commit_after>"""
"""
from paper_to_git.commands.base import BaseCommand
__all__ = [
'UpdateCommand',
]
class UpdateCommand(BaseCommand):
"""Pull the list of Paper docs and update the database."""
name = 'update'
def add(self, parser, command_parser):
self.parser = parser
def process(self, args):
print("Pulling the list of paper docs...")
|
|
9ef9ce2857bf97b2aa7309608ae69b5710f1bf7e
|
src/integrate_tool.py
|
src/integrate_tool.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from bioblend import galaxy
from bioblend import toolshed
if __name__ == '__main__':
gi_url = "http://172.21.23.6:8080/"
ts_url = "http://172.21.23.6:9009/"
name = "qiime"
owner = "iuc"
tool_panel_section_id = "qiime_rRNA_taxonomic_assignation"
gi = galaxy.GalaxyInstance(url=gi_url, key='8a099e97b0a83c73ead9f5b0fe19f4be')
ts = toolshed.ToolShedInstance(url=ts_url)
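# Pick the newest installable revision, then install it (with tool dependencies) into the chosen panel section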
changeset_revision = str(ts.repositories.get_ordered_installable_revisions(name,
owner)[-1])
gi.toolShed.install_repository_revision(ts_url, name, owner, changeset_revision,
install_tool_dependencies=True, install_repository_dependencies=False,
tool_panel_section_id=tool_panel_section_id)
|
Add a script to easily integrate a tool using toolshed
|
Add a script to easily integrate a tool using toolshed
|
Python
|
apache-2.0
|
ASaiM/framework,ASaiM/framework
|
Add a script to easily integrate a tool using toolshed
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from bioblend import galaxy
from bioblend import toolshed
if __name__ == '__main__':
gi_url = "http://172.21.23.6:8080/"
ts_url = "http://172.21.23.6:9009/"
name = "qiime"
owner = "iuc"
tool_panel_section_id = "qiime_rRNA_taxonomic_assignation"
gi = galaxy.GalaxyInstance(url=gi_url, key='8a099e97b0a83c73ead9f5b0fe19f4be')
ts = toolshed.ToolShedInstance(url=ts_url)
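# Pick the newest installable revision, then install it (with tool dependencies) into the chosen panel section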
changeset_revision = str(ts.repositories.get_ordered_installable_revisions(name,
owner)[-1])
gi.toolShed.install_repository_revision(ts_url, name, owner, changeset_revision,
install_tool_dependencies=True, install_repository_dependencies=False,
tool_panel_section_id=tool_panel_section_id)
|
<commit_before><commit_msg>Add a script to easily integrate a tool using toolshed<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from bioblend import galaxy
from bioblend import toolshed
if __name__ == '__main__':
gi_url = "http://172.21.23.6:8080/"
ts_url = "http://172.21.23.6:9009/"
name = "qiime"
owner = "iuc"
tool_panel_section_id = "qiime_rRNA_taxonomic_assignation"
gi = galaxy.GalaxyInstance(url=gi_url, key='8a099e97b0a83c73ead9f5b0fe19f4be')
ts = toolshed.ToolShedInstance(url=ts_url)
changeset_revision = str(ts.repositories.get_ordered_installable_revisions(name,
owner)[-1])
gi.toolShed.install_repository_revision(ts_url, name, owner, changeset_revision,
install_tool_dependencies=True, install_repository_dependencies=False,
tool_panel_section_id=tool_panel_section_id)
|
Add a script to easily integrate a tool using toolshed#!/usr/bin/env python
# -*- coding: utf-8 -*-
from bioblend import galaxy
from bioblend import toolshed
if __name__ == '__main__':
gi_url = "http://172.21.23.6:8080/"
ts_url = "http://172.21.23.6:9009/"
name = "qiime"
owner = "iuc"
tool_panel_section_id = "qiime_rRNA_taxonomic_assignation"
gi = galaxy.GalaxyInstance(url=gi_url, key='8a099e97b0a83c73ead9f5b0fe19f4be')
ts = toolshed.ToolShedInstance(url=ts_url)
changeset_revision = str(ts.repositories.get_ordered_installable_revisions(name,
owner)[-1])
gi.toolShed.install_repository_revision(ts_url, name, owner, changeset_revision,
install_tool_dependencies=True, install_repository_dependencies=False,
tool_panel_section_id=tool_panel_section_id)
|
<commit_before><commit_msg>Add a script to easily integrate a tool using toolshed<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
from bioblend import galaxy
from bioblend import toolshed
if __name__ == '__main__':
gi_url = "http://172.21.23.6:8080/"
ts_url = "http://172.21.23.6:9009/"
name = "qiime"
owner = "iuc"
tool_panel_section_id = "qiime_rRNA_taxonomic_assignation"
gi = galaxy.GalaxyInstance(url=gi_url, key='8a099e97b0a83c73ead9f5b0fe19f4be')
ts = toolshed.ToolShedInstance(url=ts_url)
changeset_revision = str(ts.repositories.get_ordered_installable_revisions(name,
owner)[-1])
gi.toolShed.install_repository_revision(ts_url, name, owner, changeset_revision,
install_tool_dependencies=True, install_repository_dependencies=False,
tool_panel_section_id=tool_panel_section_id)
|
|
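The integrate_tool record hard-codes a Galaxy API key next to the URL. A small variant (an assumption about deployment practice, not part of the commit) reads the key from the environment instead, so the script can be shared without leaking credentials:
import os
from bioblend import galaxy
api_key = os.environ.get('GALAXY_API_KEY')
if api_key is None:
    raise SystemExit('Set GALAXY_API_KEY before running this script.')
# Same constructor as in the record, with the secret supplied externally
gi = galaxy.GalaxyInstance(url='http://172.21.23.6:8080/', key=api_key)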
957e029bc7e4eec43fbad7cc2c464ab9f6e1389f
|
examples/test_calculator.py
|
examples/test_calculator.py
|
from seleniumbase import BaseCase
class CalculatorTests(BaseCase):
def test_6_times_7_plus_12_equals_54(self):
self.open("seleniumbase.io/apps/calculator")
self.click('button[id="6"]')
self.click("button#multiply")
self.click('button[id="7"]')
self.click("button#add")
self.click('button[id="1"]')
self.click('button[id="2"]')
self.click("button#equal")
self.assert_exact_text("54", "input#output")
|
Add a calculator example test
|
Add a calculator example test
|
Python
|
mit
|
seleniumbase/SeleniumBase,seleniumbase/SeleniumBase,seleniumbase/SeleniumBase,seleniumbase/SeleniumBase
|
Add a calculator example test
|
from seleniumbase import BaseCase
class CalculatorTests(BaseCase):
def test_6_times_7_plus_12_equals_54(self):
self.open("seleniumbase.io/apps/calculator")
self.click('button[id="6"]')
self.click("button#multiply")
self.click('button[id="7"]')
self.click("button#add")
self.click('button[id="1"]')
self.click('button[id="2"]')
self.click("button#equal")
self.assert_exact_text("54", "input#output")
|
<commit_before><commit_msg>Add a calculator example test<commit_after>
|
from seleniumbase import BaseCase
class CalculatorTests(BaseCase):
def test_6_times_7_plus_12_equals_54(self):
self.open("seleniumbase.io/apps/calculator")
self.click('button[id="6"]')
self.click("button#multiply")
self.click('button[id="7"]')
self.click("button#add")
self.click('button[id="1"]')
self.click('button[id="2"]')
self.click("button#equal")
self.assert_exact_text("54", "input#output")
|
Add a calculator example testfrom seleniumbase import BaseCase
class CalculatorTests(BaseCase):
def test_6_times_7_plus_12_equals_54(self):
self.open("seleniumbase.io/apps/calculator")
self.click('button[id="6"]')
self.click("button#multiply")
self.click('button[id="7"]')
self.click("button#add")
self.click('button[id="1"]')
self.click('button[id="2"]')
self.click("button#equal")
self.assert_exact_text("54", "input#output")
|
<commit_before><commit_msg>Add a calculator example test<commit_after>from seleniumbase import BaseCase
class CalculatorTests(BaseCase):
def test_6_times_7_plus_12_equals_54(self):
self.open("seleniumbase.io/apps/calculator")
self.click('button[id="6"]')
self.click("button#multiply")
self.click('button[id="7"]')
self.click("button#add")
self.click('button[id="1"]')
self.click('button[id="2"]')
self.click("button#equal")
self.assert_exact_text("54", "input#output")
|
|
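The calculator test above drives one fixed expression. A sketch of a small helper that makes further expressions cheap to add (the page URL and button ids are the record's; the helper itself is an invented refactoring):
from seleniumbase import BaseCase
class CalculatorTests(BaseCase):
    def run_keys(self, keys, expected):
        # Press each button in order, then check the displayed result
        self.open("seleniumbase.io/apps/calculator")
        for key in keys:
            self.click('button[id="%s"]' % key)
        self.click("button#equal")
        self.assert_exact_text(expected, "input#output")
    def test_6_times_7_plus_12_equals_54(self):
        self.run_keys(["6", "multiply", "7", "add", "1", "2"], "54")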
5dd0c4f83ed6b33b3bf6de0ef1e3beb38189e7ce
|
app/soc/logic/helper/convert_db.py
|
app/soc/logic/helper/convert_db.py
|
#!/usr/bin/python2.5
#
# Copyright 2008 the Melange authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Converts the DB from an old scheme to a new one.
"""
__authors__ = [
'"Sverre Rabbelier" <sverre@rabbelier.nl>',
]
from google.appengine.api import users
from django import http
from soc.models import user as user_model
from soc.logic import accounts
from soc.logic.models.user import logic as user_logic
def convert_user_accounts(*args, **kwargs):
"""Converts all current user accounts to normalized form.
"""
data = user_logic.getAll(user_model.User.all())
for user in data:
normalized = accounts.normalizeAccount(user.account)
if user.account != normalized:
user.account = normalized
user.put()
return http.HttpResponse('Done')
|
Add a script to normalize user accounts
|
Add a script to normalize user accounts
Patch by: Sverre Rabbelier
--HG--
extra : convert_revision : svn%3A32761e7d-7263-4528-b7be-7235b26367ec/trunk%402240
|
Python
|
apache-2.0
|
SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange,SRabbelier/Melange
|
Add a script to normalize user accounts
Patch by: Sverre Rabbelier
--HG--
extra : convert_revision : svn%3A32761e7d-7263-4528-b7be-7235b26367ec/trunk%402240
|
#!/usr/bin/python2.5
#
# Copyright 2008 the Melange authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Converts the DB from an old scheme to a new one.
"""
__authors__ = [
'"Sverre Rabbelier" <sverre@rabbelier.nl>',
]
from google.appengine.api import users
from django import http
from soc.models import user as user_model
from soc.logic import accounts
from soc.logic.models.user import logic as user_logic
def convert_user_accounts(*args, **kwargs):
"""Converts all current user accounts to normalized form.
"""
data = user_logic.getAll(user_model.User.all())
for user in data:
normalized = accounts.normalizeAccount(user.account)
if user.account != normalized:
user.account = normalized
user.put()
return http.HttpResponse('Done')
|
<commit_before><commit_msg>Add a script to normalize user accounts
Patch by: Sverre Rabbelier
--HG--
extra : convert_revision : svn%3A32761e7d-7263-4528-b7be-7235b26367ec/trunk%402240<commit_after>
|
#!/usr/bin/python2.5
#
# Copyright 2008 the Melange authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Converts the DB from an old scheme to a new one.
"""
__authors__ = [
'"Sverre Rabbelier" <sverre@rabbelier.nl>',
]
from google.appengine.api import users
from django import http
from soc.models import user as user_model
from soc.logic import accounts
from soc.logic.models.user import logic as user_logic
def convert_user_accounts(*args, **kwargs):
"""Converts all current user accounts to normalized form.
"""
data = user_logic.getAll(user_model.User.all())
for user in data:
normalized = accounts.normalizeAccount(user.account)
if user.account != normalized:
user.account = normalized
user.put()
return http.HttpResponse('Done')
|
Add a script to normalize user accounts
Patch by: Sverre Rabbelier
--HG--
extra : convert_revision : svn%3A32761e7d-7263-4528-b7be-7235b26367ec/trunk%402240#!/usr/bin/python2.5
#
# Copyright 2008 the Melange authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Converts the DB from an old scheme to a new one.
"""
__authors__ = [
'"Sverre Rabbelier" <sverre@rabbelier.nl>',
]
from google.appengine.api import users
from django import http
from soc.models import user as user_model
from soc.logic import accounts
from soc.logic.models.user import logic as user_logic
def convert_user_accounts(*args, **kwargs):
"""Converts all current user accounts to normalized form.
"""
data = user_logic.getAll(user_model.User.all())
for user in data:
normalized = accounts.normalizeAccount(user.account)
if user.account != normalized:
user.account = normalized
user.put()
return http.HttpResponse('Done')
|
<commit_before><commit_msg>Add a script to normalize user accounts
Patch by: Sverre Rabbelier
--HG--
extra : convert_revision : svn%3A32761e7d-7263-4528-b7be-7235b26367ec/trunk%402240<commit_after>#!/usr/bin/python2.5
#
# Copyright 2008 the Melange authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Converts the DB from an old scheme to a new one.
"""
__authors__ = [
'"Sverre Rabbelier" <sverre@rabbelier.nl>',
]
from google.appengine.api import users
from django import http
from soc.models import user as user_model
from soc.logic import accounts
from soc.logic.models.user import logic as user_logic
def convert_user_accounts(*args, **kwargs):
"""Converts all current user accounts to normalized form.
"""
data = user_logic.getAll(user_model.User.all())
for user in data:
normalized = accounts.normalizeAccount(user.account)
if user.account != normalized:
user.account = normalized
user.put()
return http.HttpResponse('Done')
|
|
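The convert_user_accounts loop writes an entity back only when normalization actually changed it, which keeps datastore writes proportional to the number of dirty accounts. The same pattern reduced to plain Python (normalize and the User stub are stand-ins, not App Engine code):
def normalize(account):
    return account.strip().lower()
class User:
    def __init__(self, account):
        self.account = account
        self.writes = 0
    def put(self):
        self.writes += 1
users = [User('Alice@Example.COM '), User('bob@example.com')]
for user in users:
    normalized = normalize(user.account)
    if user.account != normalized:
        user.account = normalized
        user.put()  # write back only the changed entity
assert users[0].writes == 1 and users[1].writes == 0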
60c9ba08b64af27fa79abd0e995cc0967ce2ff34
|
site_embedding.py
|
site_embedding.py
|
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
all_data = pd.read_csv('data/main_data_fake_news.csv')
all_data.content.fillna(' ', inplace=True)
corpus = all_data.groupby('host')['content'].apply(' '.join)  # one concatenated document per host
stopwords = []
vectorizer = TfidfVectorizer(
stop_words = stopwords,
ngram_range=(1,1),
min_df=100)
X = vectorizer.fit_transform(corpus)  # per-host document-term matrix
tsne = TSNE(metric='cosine')
embedded = tsne.fit_transform(X.toarray())
|
Add host embedding (embeds host by dictionary used in lower dimensions)
|
Add host embedding (embeds host by dictionary used in lower dimensions)
|
Python
|
mit
|
HristoBuyukliev/fakenews
|
Add host embedding (embeds host by dictionary used in lower dimensions)
|
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
all_data = pd.read_csv('data/main_data_fake_news.csv')
all_data.content.fillna(' ', inplace=True)
corpus = all_data.groupby('host')['content'].apply(' '.join)  # one concatenated document per host
stopwords = []
vectorizer = TfidfVectorizer(
stop_words = stopwords,
ngram_range=(1,1),
min_df=100)
X = vectorizer.fit_transform(corpus)  # per-host document-term matrix
tsne = TSNE(metric='cosine')
embedded = tsne.fit_transform(X.toarray())
|
<commit_before><commit_msg>Add host embedding (embeds host by dictionary used in lower dimensions)<commit_after>
|
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
all_data = pd.read_csv('data/main_data_fake_news.csv')
all_data.content.fillna(' ', inplace=True)
corpus = all_data.groupby('host')['content'].apply(' '.join)  # one concatenated document per host
stopwords = []
vectorizer = TfidfVectorizer(
stop_words = stopwords,
ngram_range=(1,1),
min_df=100)
X = vectorizer.fit_transform(corpus)  # per-host document-term matrix
tsne = TSNE(metric='cosine')
embedded = tsne.fit_transform(X.toarray())
|
Add host embedding (embeds host by dictionary used in lower dimensions)import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
all_data = pd.read_csv('data/main_data_fake_news.csv')
all_data.content.fillna(' ', inplace=True)
corpus = all_data.groupby('host')['content'].apply(' '.join)  # one concatenated document per host
stopwords = []
vectorizer = TfidfVectorizer(
stop_words = stopwords,
ngram_range=(1,1),
min_df=100)
X = vectorizer.fit_transform(corpus)  # per-host document-term matrix
tsne = TSNE(metric='cosine')
embedded = tsne.fit_transform(X.toarray())
|
<commit_before><commit_msg>Add host embedding (embeds host by dictionary used in lower dimensions)<commit_after>import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
all_data = pd.read_csv('data/main_data_fake_news.csv')
all_data.content.fillna(' ', inplace=True)
corpus = all_data.groupby('host')['content'].apply(' '.join)  # one concatenated document per host
stopwords = []
vectorizer = TfidfVectorizer(
stop_words = stopwords,
ngram_range=(1,1),
min_df=100)
X = vectorizer.fit_transform(corpus)  # per-host document-term matrix
tsne = TSNE(metric='cosine')
embedded = tsne.fit_transform(X.toarray())
|
|
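For the host-embedding record, the pipeline is TF-IDF over one concatenated document per host followed by t-SNE into two dimensions. A toy end-to-end run (the column names match the record; the corpus, min_df, and perplexity are scaled down to fit a four-row example):
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE
toy = pd.DataFrame({
    'host': ['a.com', 'a.com', 'b.com', 'c.com'],
    'content': ['fake news today', 'more news', 'sports results', 'fake sports'],
})
corpus = toy.groupby('host')['content'].apply(' '.join)  # one document per host
X = TfidfVectorizer(min_df=1).fit_transform(corpus)
embedded = TSNE(metric='cosine', perplexity=2, init='random').fit_transform(X.toarray())
print(embedded.shape)  # (3, 2): one 2-D point per host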
cc4eef5566c7017da3b8a16265776ec04d6a571c
|
python/repl.py
|
python/repl.py
|
my_globals = globals().copy()
my_locals = locals().copy()
while True:
try:
x = input('--> ')
except:
break
try:
print(eval(x, my_globals, my_locals))
except:
exec(x, my_globals, my_locals)
|
Implement simple REPL behaviour in Python.
|
Implement simple REPL behaviour in Python.
|
Python
|
apache-2.0
|
pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff,pdbartlett/misc-stuff
|
Implement simple REPL behaviour in Python.
|
my_globals = globals().copy()
my_locals = locals().copy()
while True:
try:
x = input('--> ')
except:
break
try:
print(eval(x, my_globals, my_locals))
except:
exec(x, my_globals, my_locals)
|
<commit_before><commit_msg>Implement simple REPL behaviour in Python.<commit_after>
|
my_globals = globals().copy()
my_locals = locals().copy()
while True:
try:
x = input('--> ')
except:
break
try:
print(eval(x, my_globals, my_locals))
except:
exec(x, my_globals, my_locals)
|
Implement simple REPL behaviour in Python.my_globals = globals().copy()
my_locals = locals().copy()
while True:
try:
x = input('--> ')
except:
break
try:
print(eval(x, my_globals, my_locals))
except:
exec(x, my_globals, my_locals)
|
<commit_before><commit_msg>Implement simple REPL behaviour in Python.<commit_after>my_globals = globals().copy()
my_locals = locals().copy()
while True:
try:
x = input('--> ')
except:
break
try:
print(eval(x, my_globals, my_locals))
except:
exec(x, my_globals, my_locals)
|
|
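The repl record implements the classic split: try eval for expressions, fall back to exec for statements. The standard library ships the same behaviour plus multi-line input handling; a one-call alternative (it blocks on stdin just like the loop above):
import code
code.interact(banner='', local=globals().copy())  # Ctrl-D to exit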
2818f378ead064df7ceb4323851a39b7a1cbc0af
|
tests/chainer_tests/functions_tests/array_tests/test_where.py
|
tests/chainer_tests/functions_tests/array_tests/test_where.py
|
import unittest
import numpy
import chainer
from chainer import cuda
import chainer.functions as F
from chainer.testing import attr
class TestWhere(unittest.TestCase):
shape = (3, 2, 4)
def setUp(self):
self.c_data = numpy.random.uniform(-1, 1, self.shape) > 0
self.x_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
self.y_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
def check_forward(self, c_data, x_data, y_data):
c = chainer.Variable(c_data)
x = chainer.Variable(x_data)
y = chainer.Variable(y_data)
z = F.where(c, x, y)
self.assertEqual(x.data.shape, z.data.shape)
for c, x, y, z in zip(c.data.flatten(), x.data.flatten(),
y.data.flatten(), z.data.flatten()):
if c:
self.assertEqual(x, z)
else:
self.assertEqual(y, z)
def test_forward_cpu(self):
self.check_forward(self.c_data, self.x_data, self.y_data)
@attr.gpu
def test_forward_gpu(self):
self.check_forward(cuda.to_gpu(self.c_data),
cuda.to_gpu(self.x_data),
cuda.to_gpu(self.y_data))
|
Make a test for where
|
Make a test for where
|
Python
|
mit
|
niboshi/chainer,wkentaro/chainer,okuta/chainer,ktnyt/chainer,jnishi/chainer,kikusu/chainer,sinhrks/chainer,wkentaro/chainer,cemoody/chainer,ktnyt/chainer,hvy/chainer,t-abe/chainer,niboshi/chainer,hvy/chainer,AlpacaDB/chainer,kashif/chainer,truongdq/chainer,chainer/chainer,benob/chainer,rezoo/chainer,tkerola/chainer,jnishi/chainer,wkentaro/chainer,kikusu/chainer,cupy/cupy,okuta/chainer,anaruse/chainer,truongdq/chainer,aonotas/chainer,keisuke-umezawa/chainer,chainer/chainer,chainer/chainer,keisuke-umezawa/chainer,ktnyt/chainer,niboshi/chainer,t-abe/chainer,jnishi/chainer,okuta/chainer,cupy/cupy,jnishi/chainer,delta2323/chainer,muupan/chainer,niboshi/chainer,sinhrks/chainer,wkentaro/chainer,okuta/chainer,keisuke-umezawa/chainer,ysekky/chainer,kiyukuta/chainer,chainer/chainer,cupy/cupy,benob/chainer,ktnyt/chainer,cupy/cupy,keisuke-umezawa/chainer,AlpacaDB/chainer,hvy/chainer,hvy/chainer,pfnet/chainer,muupan/chainer,ronekko/chainer
|
Make a test for where
|
import unittest
import numpy
import chainer
from chainer import cuda
import chainer.functions as F
from chainer.testing import attr
class TestWhere(unittest.TestCase):
shape = (3, 2, 4)
def setUp(self):
self.c_data = numpy.random.uniform(-1, 1, self.shape) > 0
self.x_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
self.y_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
def check_forward(self, c_data, x_data, y_data):
c = chainer.Variable(c_data)
x = chainer.Variable(x_data)
y = chainer.Variable(y_data)
z = F.where(c, x, y)
self.assertEqual(x.data.shape, z.data.shape)
for c, x, y, z in zip(c.data.flatten(), x.data.flatten(),
y.data.flatten(), z.data.flatten()):
if c:
self.assertEqual(x, z)
else:
self.assertEqual(y, z)
def test_forward_cpu(self):
self.check_forward(self.c_data, self.x_data, self.y_data)
@attr.gpu
def test_forward_gpu(self):
self.check_forward(cuda.to_gpu(self.c_data),
cuda.to_gpu(self.x_data),
cuda.to_gpu(self.y_data))
|
<commit_before><commit_msg>Make a test for where<commit_after>
|
import unittest
import numpy
import chainer
from chainer import cuda
import chainer.functions as F
from chainer.testing import attr
class TestWhere(unittest.TestCase):
shape = (3, 2, 4)
def setUp(self):
self.c_data = numpy.random.uniform(-1, 1, self.shape) > 0
self.x_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
self.y_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
def check_forward(self, c_data, x_data, y_data):
c = chainer.Variable(c_data)
x = chainer.Variable(x_data)
y = chainer.Variable(y_data)
z = F.where(c, x, y)
self.assertEqual(x.data.shape, z.data.shape)
for c, x, y, z in zip(c.data.flatten(), x.data.flatten(),
y.data.flatten(), z.data.flatten()):
if c:
self.assertEqual(x, z)
else:
self.assertEqual(y, z)
def test_forward_cpu(self):
self.check_forward(self.c_data, self.x_data, self.y_data)
@attr.gpu
def test_forward_gpu(self):
self.check_forward(cuda.to_gpu(self.c_data),
cuda.to_gpu(self.x_data),
cuda.to_gpu(self.y_data))
|
Make a test for whereimport unittest
import numpy
import chainer
from chainer import cuda
import chainer.functions as F
from chainer.testing import attr
class TestWhere(unittest.TestCase):
shape = (3, 2, 4)
def setUp(self):
self.c_data = numpy.random.uniform(-1, 1, self.shape) > 0
self.x_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
self.y_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
def check_forward(self, c_data, x_data, y_data):
c = chainer.Variable(c_data)
x = chainer.Variable(x_data)
y = chainer.Variable(y_data)
z = F.where(c, x, y)
self.assertEqual(x.data.shape, z.data.shape)
for c, x, y, z in zip(c.data.flatten(), x.data.flatten(),
y.data.flatten(), z.data.flatten()):
if c:
self.assertEqual(x, z)
else:
self.assertEqual(y, z)
def test_forward_cpu(self):
self.check_forward(self.c_data, self.x_data, self.y_data)
@attr.gpu
def test_forward_gpu(self):
self.check_forward(cuda.to_gpu(self.c_data),
cuda.to_gpu(self.x_data),
cuda.to_gpu(self.y_data))
|
<commit_before><commit_msg>Make a test for where<commit_after>import unittest
import numpy
import chainer
from chainer import cuda
import chainer.functions as F
from chainer.testing import attr
class TestWhere(unittest.TestCase):
shape = (3, 2, 4)
def setUp(self):
self.c_data = numpy.random.uniform(-1, 1, self.shape) > 0
self.x_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
self.y_data = \
numpy.random.uniform(-1, 1, self.shape).astype(numpy.float32)
def check_forward(self, c_data, x_data, y_data):
c = chainer.Variable(c_data)
x = chainer.Variable(x_data)
y = chainer.Variable(y_data)
z = F.where(c, x, y)
self.assertEqual(x.data.shape, z.data.shape)
for c, x, y, z in zip(c.data.flatten(), x.data.flatten(),
y.data.flatten(), z.data.flatten()):
if c:
self.assertEqual(x, z)
else:
self.assertEqual(y, z)
def test_forward_cpu(self):
self.check_forward(self.c_data, self.x_data, self.y_data)
@attr.gpu
def test_forward_gpu(self):
self.check_forward(cuda.to_gpu(self.c_data),
cuda.to_gpu(self.x_data),
cuda.to_gpu(self.y_data))
|
|
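The chainer test checks F.where element by element. numpy.where gives the same reference semantics in one call; a sketch relating the two formulations:
import numpy
shape = (3, 2, 4)
c = numpy.random.uniform(-1, 1, shape) > 0
x = numpy.random.uniform(-1, 1, shape).astype(numpy.float32)
y = numpy.random.uniform(-1, 1, shape).astype(numpy.float32)
z = numpy.where(c, x, y)  # vectorized selection
for ci, xi, yi, zi in zip(c.flat, x.flat, y.flat, z.flat):
    assert zi == (xi if ci else yi)  # the per-element check from check_forward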
c2e2b94027ba2daa19d02ad8fdfb29eaca363026
|
examples/list_vms_on_host.py
|
examples/list_vms_on_host.py
|
#!/usr/bin/python
# Copyright 2010 Jonathan Kinred
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A simple script to list the VMs on a host
#
# Example usage:
# python ./examples/list_vms_on_host.py --server <server> --username <user> --password <pass> --hostsystem <hostsystem>
import sys
from psphere.client import Client
from psphere.managedobjects import HostSystem
def main(options):
client = Client(server=options.server, username=options.username,
password=options.password)
print('Successfully connected to %s' % client.server)
# Get a HostSystem object representing the host
host = HostSystem.get(client, name=options.hostsystem)
# Preload the name attribute of all items in the vm attribute. Read the
# manual as this significantly speeds up queries for ManagedObject's
host.preload("vm", properties=["name"])
# Iterate over the items in host.vm and print their names
for vm in sorted(host.vm):
print(vm.name)
# Close the connection
client.logout()
if __name__ == "__main__":
from optparse import OptionParser
usage = "Usage: %prog [options]"
parser = OptionParser(usage=usage)
parser.add_option("--server", dest="server",
help="The server to connect to")
parser.add_option("--username", dest="username",
help="The username used to connect to the server")
parser.add_option("--password", dest="password",
help="The password used to connect to the server")
parser.add_option("--hostsystem", dest="hostsystem",
help="The host from which to list VMs")
(options, args) = parser.parse_args()
if options.server is None:
print("You must specify a server")
parser.print_help()
sys.exit(1)
if options.username is None:
print("You must specify a username")
parser.print_help()
sys.exit(1)
if options.password is None:
print("You must specify a password")
parser.print_help()
sys.exit(1)
if options.hostsystem is None:
print("You must specify a host system")
parser.print_help()
sys.exit(1)
main(options)
|
Add an example showing how to list VMs on a host
|
Add an example showing how to list VMs on a host
|
Python
|
apache-2.0
|
graphite-server/psphere,jkinred/psphere
|
Add an example showing how to list VMs on a host
|
#!/usr/bin/python
# Copyright 2010 Jonathan Kinred
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A simple script to list the VMs on a host
#
# Example usage:
# python ./examples/list_vms_on_host.py --server <server> --username <user> --password <pass> --hostsystem <hostsystem>
import sys
from psphere.client import Client
from psphere.managedobjects import HostSystem
def main(options):
client = Client(server=options.server, username=options.username,
password=options.password)
print('Successfully connected to %s' % client.server)
# Get a HostSystem object representing the host
host = HostSystem.get(client, name=options.hostsystem)
# Preload the name attribute of all items in the vm attribute. Read the
# manual as this significantly speeds up queries for ManagedObject's
host.preload("vm", properties=["name"])
# Iterate over the items in host.vm and print their names
for vm in sorted(host.vm):
print(vm.name)
# Close the connection
client.logout()
if __name__ == "__main__":
from optparse import OptionParser
usage = "Usage: %prog [options]"
parser = OptionParser(usage=usage)
parser.add_option("--server", dest="server",
help="The server to connect to")
parser.add_option("--username", dest="username",
help="The username used to connect to the server")
parser.add_option("--password", dest="password",
help="The password used to connect to the server")
parser.add_option("--hostsystem", dest="hostsystem",
help="The host from which to list VMs")
(options, args) = parser.parse_args()
if options.server is None:
print("You must specify a server")
parser.print_help()
sys.exit(1)
if options.username is None:
print("You must specify a username")
parser.print_help()
sys.exit(1)
if options.password is None:
print("You must specify a password")
parser.print_help()
sys.exit(1)
if options.hostsystem is None:
print("You must specify a host system")
parser.print_help()
sys.exit(1)
main(options)
|
<commit_before><commit_msg>Add an example showing how to list VMs on a host<commit_after>
|
#!/usr/bin/python
# Copyright 2010 Jonathan Kinred
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A simple script to list the VMs on a host
#
# Example usage:
# python ./examples/list_vms_on_host.py --server <server> --username <user> --password <pass> --hostsystem <hostsystem>
import sys
from psphere.client import Client
from psphere.managedobjects import HostSystem
def main(options):
client = Client(server=options.server, username=options.username,
password=options.password)
print('Successfully connected to %s' % client.server)
# Get a HostSystem object representing the host
host = HostSystem.get(client, name=options.hostsystem)
# Preload the name attribute of all items in the vm attribute. Read the
# manual as this significantly speeds up queries for ManagedObject's
host.preload("vm", properties=["name"])
# Iterate over the items in host.vm and print their names
for vm in sorted(host.vm):
print(vm.name)
# Close the connection
client.logout()
if __name__ == "__main__":
from optparse import OptionParser
usage = "Usage: %prog [options]"
parser = OptionParser(usage=usage)
parser.add_option("--server", dest="server",
help="The server to connect to")
parser.add_option("--username", dest="username",
help="The username used to connect to the server")
parser.add_option("--password", dest="password",
help="The password used to connect to the server")
parser.add_option("--hostsystem", dest="hostsystem",
help="The host from which to list VMs")
(options, args) = parser.parse_args()
if options.server is None:
print("You must specify a server")
parser.print_help()
sys.exit(1)
if options.username is None:
print("You must specify a username")
parser.print_help()
sys.exit(1)
if options.password is None:
print("You must specify a password")
parser.print_help()
sys.exit(1)
if options.hostsystem is None:
print("You must specify a host system")
parser.print_help()
sys.exit(1)
main(options)
|
Add an example showing how to list VMs on a host#!/usr/bin/python
# Copyright 2010 Jonathan Kinred
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A simple script to list the VMs on a host
#
# Example usage:
# python ./examples/list_vms_on_host.py --server <server> --username <user> --password <pass> --hostsystem <hostsystem>
import sys
from psphere.client import Client
from psphere.managedobjects import HostSystem
def main(options):
client = Client(server=options.server, username=options.username,
password=options.password)
print('Successfully connected to %s' % client.server)
# Get a HostSystem object representing the host
host = HostSystem.get(client, name=options.hostsystem)
# Preload the name attribute of all items in the vm attribute. Read the
# manual as this significantly speeds up queries for ManagedObject's
host.preload("vm", properties=["name"])
# Iterate over the items in host.vm and print their names
for vm in sorted(host.vm):
print(vm.name)
# Close the connection
client.logout()
if __name__ == "__main__":
from optparse import OptionParser
usage = "Usage: %prog [options]"
parser = OptionParser(usage=usage)
parser.add_option("--server", dest="server",
help="The server to connect to")
parser.add_option("--username", dest="username",
help="The username used to connect to the server")
parser.add_option("--password", dest="password",
help="The password used to connect to the server")
parser.add_option("--hostsystem", dest="hostsystem",
help="The host from which to list VMs")
(options, args) = parser.parse_args()
if options.server is None:
print("You must specify a server")
parser.print_help()
sys.exit(1)
if options.username is None:
print("You must specify a username")
parser.print_help()
sys.exit(1)
if options.password is None:
print("You must specify a password")
parser.print_help()
sys.exit(1)
if options.hostsystem is None:
print("You must specify a host system")
parser.print_help()
sys.exit(1)
main(options)
|
<commit_before><commit_msg>Add an example showing how to list VMs on a host<commit_after>#!/usr/bin/python
# Copyright 2010 Jonathan Kinred
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at:
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A simple script to list the VMs on a host
#
# Example usage:
# python ./examples/list_vms_on_host.py --server <server> --username <user> --password <pass> --hostsystem <hostsystem>
import sys
from psphere.client import Client
from psphere.managedobjects import HostSystem
def main(options):
client = Client(server=options.server, username=options.username,
password=options.password)
print('Successfully connected to %s' % client.server)
# Get a HostSystem object representing the host
host = HostSystem.get(client, name=options.hostsystem)
# Preload the name attribute of all items in the vm attribute. Read the
# manual as this significantly speeds up queries for ManagedObject's
host.preload("vm", properties=["name"])
# Iterate over the items in host.vm and print their names
for vm in sorted(host.vm):
print(vm.name)
# Close the connection
client.logout()
if __name__ == "__main__":
from optparse import OptionParser
usage = "Usage: %prog [options]"
parser = OptionParser(usage=usage)
parser.add_option("--server", dest="server",
help="The server to connect to")
parser.add_option("--username", dest="username",
help="The username used to connect to the server")
parser.add_option("--password", dest="password",
help="The password used to connect to the server")
parser.add_option("--hostsystem", dest="hostsystem",
help="The host from which to list VMs")
(options, args) = parser.parse_args()
if options.server is None:
print("You must specify a server")
parser.print_help()
sys.exit(1)
if options.username is None:
print("You must specify a username")
parser.print_help()
sys.exit(1)
if options.password is None:
print("You must specify a password")
parser.print_help()
sys.exit(1)
if options.hostsystem is None:
print("You must specify a host system")
parser.print_help()
sys.exit(1)
main(options)
|
|
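The psphere example validates its four options by hand after optparse parsing. With argparse the same contract fits in the declarations; a sketch (option names from the record, the demo argv is invented):
import argparse
parser = argparse.ArgumentParser(description='List the VMs on a host')
for name in ('server', 'username', 'password', 'hostsystem'):
    parser.add_argument('--' + name, required=True)
args = parser.parse_args(['--server', 'vc1', '--username', 'admin',
                          '--password', 'secret', '--hostsystem', 'esx1'])
print(args.hostsystem)  # esx1; a missing option now exits with a usage error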
e41d249ccd01516cdfd79a31d0ae22f8dd2bb4a9
|
tests/distributions/test_combinators.py
|
tests/distributions/test_combinators.py
|
import numpy as np
from tensorprob import Model, Parameter, Normal, Exponential, Mix2
def test_mix2_fit():
with Model() as model:
mu = Parameter()
sigma = Parameter(lower=1)
a = Parameter(lower=0)
f = Parameter(lower=0, upper=1)
X1 = Normal(mu, sigma, bounds=[(-np.inf, 21), (22, np.inf)])
X2 = Exponential(a, bounds=[(-np.inf, 8), (10, np.inf)])
X12 = Mix2(f, X1, X2, bounds=[(6, 17), (18, 36)])
model.observed(X12)
model.initialize({
mu: 23,
sigma: 1.2,
a: 0.2,
f: 0.3,
})
# Generate some data to fit
np.random.seed(42)
exp_data = np.random.exponential(10, 2000000)
exp_data = exp_data[(exp_data < 8) | (10 < exp_data)]
# Include the data blinded by the Mix2 bounds, since len(norm1_data) is used below
norm1_data = np.random.normal(19, 2, 1000000)
norm1_data = norm1_data[
((6 < norm1_data) & (norm1_data < 17)) |
((18 < norm1_data) & (norm1_data < 21)) |
((22 < norm1_data) & (norm1_data < 36))
]
data = np.concatenate([exp_data, norm1_data])
data = data[((6 < data) & (data < 17)) | ((18 < data) & (data < 36))]
model.fit(data)
assert abs(model.state[mu] - 19) < 5e-2
assert abs(model.state[sigma] - 2) < 5e-2
assert abs(model.state[a] - 0.1) < 5e-4
assert abs(model.state[f] - len(norm1_data) / float(len(data))) < 5e-4
|
Add basic test for Mix2
|
Add basic test for Mix2
|
Python
|
mit
|
ibab/tensorprob,tensorprob/tensorprob,ibab/tensorfit
|
Add basic test for Mix2
|
import numpy as np
from tensorprob import Model, Parameter, Normal, Exponential, Mix2
def test_mix2_fit():
with Model() as model:
mu = Parameter()
sigma = Parameter(lower=1)
a = Parameter(lower=0)
f = Parameter(lower=0, upper=1)
X1 = Normal(mu, sigma, bounds=[(-np.inf, 21), (22, np.inf)])
X2 = Exponential(a, bounds=[(-np.inf, 8), (10, np.inf)])
X12 = Mix2(f, X1, X2, bounds=[(6, 17), (18, 36)])
model.observed(X12)
model.initialize({
mu: 23,
sigma: 1.2,
a: 0.2,
f: 0.3,
})
# Generate some data to fit
np.random.seed(42)
exp_data = np.random.exponential(10, 2000000)
exp_data = exp_data[(exp_data < 8) | (10 < exp_data)]
# Include the data blinded by the Mix2 bounds, since len(norm1_data) is used below
norm1_data = np.random.normal(19, 2, 1000000)
norm1_data = norm1_data[
((6 < norm1_data) & (norm1_data < 17)) |
((18 < norm1_data) & (norm1_data < 21)) |
((22 < norm1_data) & (norm1_data < 36))
]
data = np.concatenate([exp_data, norm1_data])
data = data[((6 < data) & (data < 17)) | ((18 < data) & (data < 36))]
model.fit(data)
assert abs(model.state[mu] - 19) < 5e-2
assert abs(model.state[sigma] - 2) < 5e-2
assert abs(model.state[a] - 0.1) < 5e-4
assert abs(model.state[f] - len(norm1_data) / float(len(data))) < 5e-4
|
<commit_before><commit_msg>Add basic test for Mix2<commit_after>
|
import numpy as np
from tensorprob import Model, Parameter, Normal, Exponential, Mix2
def test_mix2_fit():
with Model() as model:
mu = Parameter()
sigma = Parameter(lower=1)
a = Parameter(lower=0)
f = Parameter(lower=0, upper=1)
X1 = Normal(mu, sigma, bounds=[(-np.inf, 21), (22, np.inf)])
X2 = Exponential(a, bounds=[(-np.inf, 8), (10, np.inf)])
X12 = Mix2(f, X1, X2, bounds=[(6, 17), (18, 36)])
model.observed(X12)
model.initialize({
mu: 23,
sigma: 1.2,
a: 0.2,
f: 0.3,
})
# Generate some data to fit
np.random.seed(42)
exp_data = np.random.exponential(10, 2000000)
exp_data = exp_data[(exp_data < 8) | (10 < exp_data)]
# Include the data blinded by the Mix2 bounds, since len(norm1_data) is used below
norm1_data = np.random.normal(19, 2, 1000000)
norm1_data = norm1_data[
((6 < norm1_data) & (norm1_data < 17)) |
((18 < norm1_data) & (norm1_data < 21)) |
((22 < norm1_data) & (norm1_data < 36))
]
data = np.concatenate([exp_data, norm1_data])
data = data[((6 < data) & (data < 17)) | ((18 < data) & (data < 36))]
model.fit(data)
assert abs(model.state[mu] - 19) < 5e-2
assert abs(model.state[sigma] - 2) < 5e-2
assert abs(model.state[a] - 0.1) < 5e-4
assert abs(model.state[f] - len(norm1_data) / float(len(data))) < 5e-4
|
Add basic test for Mix2import numpy as np
from tensorprob import Model, Parameter, Normal, Exponential, Mix2
def test_mix2_fit():
with Model() as model:
mu = Parameter()
sigma = Parameter(lower=1)
a = Parameter(lower=0)
f = Parameter(lower=0, upper=1)
X1 = Normal(mu, sigma, bounds=[(-np.inf, 21), (22, np.inf)])
X2 = Exponential(a, bounds=[(-np.inf, 8), (10, np.inf)])
X12 = Mix2(f, X1, X2, bounds=[(6, 17), (18, 36)])
model.observed(X12)
model.initialize({
mu: 23,
sigma: 1.2,
a: 0.2,
f: 0.3,
})
# Generate some data to fit
np.random.seed(42)
exp_data = np.random.exponential(10, 2000000)
exp_data = exp_data[(exp_data < 8) | (10 < exp_data)]
# Include the data blinded by the Mix2 bounds, since len(norm1_data) is used below
norm1_data = np.random.normal(19, 2, 1000000)
norm1_data = norm1_data[
((6 < norm1_data) & (norm1_data < 17)) |
((18 < norm1_data) & (norm1_data < 21)) |
((22 < norm1_data) & (norm1_data < 36))
]
data = np.concatenate([exp_data, norm1_data])
data = data[((6 < data) & (data < 17)) | ((18 < data) & (data < 36))]
model.fit(data)
assert abs(model.state[mu] - 19) < 5e-2
assert abs(model.state[sigma] - 2) < 5e-2
assert abs(model.state[a] - 0.1) < 5e-4
assert abs(model.state[f] - len(norm1_data) / float(len(data))) < 5e-4
|
<commit_before><commit_msg>Add basic test for Mix2<commit_after>import numpy as np
from tensorprob import Model, Parameter, Normal, Exponential, Mix2
def test_mix2_fit():
with Model() as model:
mu = Parameter()
sigma = Parameter(lower=1)
a = Parameter(lower=0)
f = Parameter(lower=0, upper=1)
X1 = Normal(mu, sigma, bounds=[(-np.inf, 21), (22, np.inf)])
X2 = Exponential(a, bounds=[(-np.inf, 8), (10, np.inf)])
X12 = Mix2(f, X1, X2, bounds=[(6, 17), (18, 36)])
model.observed(X12)
model.initialize({
mu: 23,
sigma: 1.2,
a: 0.2,
f: 0.3,
})
# Generate some data to fit
np.random.seed(42)
exp_data = np.random.exponential(10, 2000000)
exp_data = exp_data[(exp_data < 8) | (10 < exp_data)]
# Include the data blinded by the Mix2 bounds, since len(norm1_data) is used below
norm1_data = np.random.normal(19, 2, 1000000)
norm1_data = norm1_data[
((6 < norm1_data) & (norm1_data < 17)) |
((18 < norm1_data) & (norm1_data < 21)) |
((22 < norm1_data) & (norm1_data < 36))
]
data = np.concatenate([exp_data, norm1_data])
data = data[((6 < data) & (data < 17)) | ((18 < data) & (data < 36))]
model.fit(data)
assert abs(model.state[mu] - 19) < 5e-2
assert abs(model.state[sigma] - 2) < 5e-2
assert abs(model.state[a] - 0.1) < 5e-4
assert abs(model.state[f] - len(norm1_data) / float(len(data))) < 5e-4
|
|
52d736e154f74342dde91acb069c0ea7cb9068ec
|
py/insert-delete-getrandom-o1-duplicates-allowed.py
|
py/insert-delete-getrandom-o1-duplicates-allowed.py
|
from collections import Counter
import random
class RandomizedCollection(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.counter = Counter()
self.redundant = Counter()
self.array = []
def insert(self, val):
"""
Inserts a value to the collection. Returns true if the collection did not already contain the specified element.
:type val: int
:rtype: bool
"""
self.counter[val] += 1
if self.redundant[val] == 0:
self.array.append(val)
else:
self.redundant[val] -= 1
return self.counter[val] == 1
def remove(self, val):
"""
Removes a value from the collection. Returns true if the collection contained the specified element.
:type val: int
:rtype: bool
"""
ret = False
if self.counter[val]:
ret = True
self.counter[val] -= 1
self.redundant[val] += 1
return ret
def getRandom(self):
"""
Get a random element from the collection.
:rtype: int
"""
while True:
idx = random.randint(0, len(self.array) - 1)
v = self.array[idx]
if self.counter[v] and (self.redundant[v] == 0 or random.random() * (self.counter[v] + self.redundant[v]) < self.counter[v]):
break
else:
self.array[idx] = self.array[-1]
self.array.pop()
self.redundant[v] -= 1
return v
# Your RandomizedCollection object will be instantiated and called as such:
# obj = RandomizedCollection()
# param_1 = obj.insert(val)
# param_2 = obj.remove(val)
# param_3 = obj.getRandom()
|
Add py solution for 381. Insert Delete GetRandom O(1) - Duplicates allowed
|
Add py solution for 381. Insert Delete GetRandom O(1) - Duplicates allowed
381. Insert Delete GetRandom O(1) - Duplicates allowed: https://leetcode.com/problems/insert-delete-getrandom-o1-duplicates-allowed/
getRandom may not be O(1). I'm not sure..
|
Python
|
apache-2.0
|
ckclark/leetcode,ckclark/leetcode,ckclark/leetcode,ckclark/leetcode,ckclark/leetcode,ckclark/leetcode
|
Add py solution for 381. Insert Delete GetRandom O(1) - Duplicates allowed
381. Insert Delete GetRandom O(1) - Duplicates allowed: https://leetcode.com/problems/insert-delete-getrandom-o1-duplicates-allowed/
getRandom may not be O(1). I'm not sure..
|
from collections import Counter
import random
class RandomizedCollection(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.counter = Counter()
self.redundant = Counter()
self.array = []
def insert(self, val):
"""
Inserts a value to the collection. Returns true if the collection did not already contain the specified element.
:type val: int
:rtype: bool
"""
self.counter[val] += 1
if self.redundant[val] == 0:
self.array.append(val)
else:
self.redundant[val] -= 1
return self.counter[val] == 1
def remove(self, val):
"""
Removes a value from the collection. Returns true if the collection contained the specified element.
:type val: int
:rtype: bool
"""
ret = False
if self.counter[val]:
ret = True
self.counter[val] -= 1
self.redundant[val] += 1
return ret
def getRandom(self):
"""
Get a random element from the collection.
:rtype: int
"""
while True:
idx = random.randint(0, len(self.array) - 1)
v = self.array[idx]
if self.counter[v] and (self.redundant[v] == 0 or random.random() * (self.counter[v] + self.redundant[v]) < self.counter[v]):
break
else:
self.array[idx] = self.array[-1]
self.array.pop()
self.redundant[v] -= 1
return v
# Your RandomizedCollection object will be instantiated and called as such:
# obj = RandomizedCollection()
# param_1 = obj.insert(val)
# param_2 = obj.remove(val)
# param_3 = obj.getRandom()
|
<commit_before><commit_msg>Add py solution for 381. Insert Delete GetRandom O(1) - Duplicates allowed
381. Insert Delete GetRandom O(1) - Duplicates allowed: https://leetcode.com/problems/insert-delete-getrandom-o1-duplicates-allowed/
getRandom may not be O(1). I'm not sure..<commit_after>
|
from collections import Counter
import random
class RandomizedCollection(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.counter = Counter()
self.redundant = Counter()
self.array = []
def insert(self, val):
"""
Inserts a value to the collection. Returns true if the collection did not already contain the specified element.
:type val: int
:rtype: bool
"""
self.counter[val] += 1
if self.redundant[val] == 0:
self.array.append(val)
else:
self.redundant[val] -= 1
return self.counter[val] == 1
def remove(self, val):
"""
Removes a value from the collection. Returns true if the collection contained the specified element.
:type val: int
:rtype: bool
"""
ret = False
if self.counter[val]:
ret = True
self.counter[val] -= 1
self.redundant[val] += 1
return ret
def getRandom(self):
"""
Get a random element from the collection.
:rtype: int
"""
while True:
idx = random.randint(0, len(self.array) - 1)
v = self.array[idx]
if self.counter[v] and (self.redundant[v] == 0 or random.random() * (self.counter[v] + self.redundant[v]) < self.counter[v]):
break
else:
self.array[idx] = self.array[-1]
self.array.pop()
self.redundant[v] -= 1
return v
# Your RandomizedCollection object will be instantiated and called as such:
# obj = RandomizedCollection()
# param_1 = obj.insert(val)
# param_2 = obj.remove(val)
# param_3 = obj.getRandom()
|
Add py solution for 381. Insert Delete GetRandom O(1) - Duplicates allowed
381. Insert Delete GetRandom O(1) - Duplicates allowed: https://leetcode.com/problems/insert-delete-getrandom-o1-duplicates-allowed/
getRandom may not be O(1). I'm not sure..from collections import Counter
import random
class RandomizedCollection(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.counter = Counter()
self.redundant = Counter()
self.array = []
def insert(self, val):
"""
Inserts a value to the collection. Returns true if the collection did not already contain the specified element.
:type val: int
:rtype: bool
"""
self.counter[val] += 1
if self.redundant[val] == 0:
self.array.append(val)
else:
self.redundant[val] -= 1
return self.counter[val] == 1
def remove(self, val):
"""
Removes a value from the collection. Returns true if the collection contained the specified element.
:type val: int
:rtype: bool
"""
ret = False
if self.counter[val]:
ret = True
self.counter[val] -= 1
self.redundant[val] += 1
return ret
def getRandom(self):
"""
Get a random element from the collection.
:rtype: int
"""
while True:
idx = random.randint(0, len(self.array) - 1)
v = self.array[idx]
if self.counter[v] and (self.redundant[v] == 0 or random.random() * (self.counter[v] + self.redundant[v]) < self.counter[v]):
break
else:
self.array[idx] = self.array[-1]
self.array.pop()
self.redundant[v] -= 1
return v
# Your RandomizedCollection object will be instantiated and called as such:
# obj = RandomizedCollection()
# param_1 = obj.insert(val)
# param_2 = obj.remove(val)
# param_3 = obj.getRandom()
|
<commit_before><commit_msg>Add py solution for 381. Insert Delete GetRandom O(1) - Duplicates allowed
381. Insert Delete GetRandom O(1) - Duplicates allowed: https://leetcode.com/problems/insert-delete-getrandom-o1-duplicates-allowed/
getRandom may not be O(1). I'm not sure..<commit_after>from collections import Counter
import random
class RandomizedCollection(object):
def __init__(self):
"""
Initialize your data structure here.
"""
self.counter = Counter()
self.redundant = Counter()
self.array = []
def insert(self, val):
"""
Inserts a value to the collection. Returns true if the collection did not already contain the specified element.
:type val: int
:rtype: bool
"""
self.counter[val] += 1
if self.redundant[val] == 0:
self.array.append(val)
else:
self.redundant[val] -= 1
return self.counter[val] == 1
def remove(self, val):
"""
Removes a value from the collection. Returns true if the collection contained the specified element.
:type val: int
:rtype: bool
"""
ret = False
if self.counter[val]:
ret = True
self.counter[val] -= 1
self.redundant[val] += 1
return ret
def getRandom(self):
"""
Get a random element from the collection.
:rtype: int
"""
while True:
idx = random.randint(0, len(self.array) - 1)
v = self.array[idx]
if self.counter[v] and (self.redundant[v] == 0 or random.random() * (self.counter[v] + self.redundant[v]) < self.counter[v]):
break
else:
self.array[idx] = self.array[-1]
self.array.pop()
self.redundant[v] -= 1
return v
# Your RandomizedCollection object will be instantiated and called as such:
# obj = RandomizedCollection()
# param_1 = obj.insert(val)
# param_2 = obj.remove(val)
# param_3 = obj.getRandom()
|
|
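A short usage sketch for the RandomizedCollection record, assuming the class above is in scope. The return values follow the problem's contract, and the commit message's doubt is warranted: getRandom's rejection loop is only expected O(1) while live elements stay a constant fraction of the backing array.
rc = RandomizedCollection()
assert rc.insert(1) is True    # first copy of 1
assert rc.insert(1) is False   # duplicate
assert rc.remove(1) is True
assert rc.remove(2) is False   # 2 was never inserted
print(rc.getRandom())          # 1 is the only live value left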
7a7e94b1f688b43bcbbd986cb27886f897abe021
|
pytac/mini_project.py
|
pytac/mini_project.py
|
import pytac.load_csv
import pytac.epics
def main():
lattice = pytac.load_csv.load('VMX', pytac.epics.EpicsControlSystem())
bpms = lattice.get_elements('BPM')
bpms_n = 0
try:
for bpm in bpms:
bpm.get_pv_name('y')
bpms_n += 1
print 'There exist {0} BPMy elements in the ring.'.format(bpms_n)
except:
print 'Warning! There exists a bpm with no y field.'
if __name__=='__main__':
main()
|
Print the number of bpm y elements in the machine
|
Print the number of bpm y elements in the machine
|
Python
|
apache-2.0
|
razvanvasile/Work-Mini-Projects,razvanvasile/Work-Mini-Projects,razvanvasile/Work-Mini-Projects
|
Print the number of bpm y elements in the machine
|
import pytac.load_csv
import pytac.epics
def main():
lattice = pytac.load_csv.load('VMX', pytac.epics.EpicsControlSystem())
bpms = lattice.get_elements('BPM')
bpms_n = 0
try:
for bpm in bpms:
bpm.get_pv_name('y')
bpms_n += 1
print 'There exist {0} BPMy elements in the ring.'.format(bpms_n)
except:
print 'Warning! There exists a bpm with no y field.'
if __name__=='__main__':
main()
|
<commit_before><commit_msg>Print the number of bpm y elements in the machine<commit_after>
|
import pytac.load_csv
import pytac.epics
def main():
lattice = pytac.load_csv.load('VMX', pytac.epics.EpicsControlSystem())
bpms = lattice.get_elements('BPM')
bpms_n = 0
try:
for bpm in bpms:
bpm.get_pv_name('y')
bpms_n += 1
print 'There exist {0} BPMy elements in the ring.'.format(bpms_n)
except:
print 'Warning! There exists a bpm with no y field.'
if __name__=='__main__':
main()
|
Print the number of bpm y elements in the machineimport pytac.load_csv
import pytac.epics
def main():
lattice = pytac.load_csv.load('VMX', pytac.epics.EpicsControlSystem())
bpms = lattice.get_elements('BPM')
bpms_n = 0
try:
for bpm in bpms:
bpm.get_pv_name('y')
bpms_n += 1
print 'There exist {0} BPMy elements in the ring.'.format(bpms_n)
except:
print 'Warning! There exists a bpm with no y field.'
if __name__=='__main__':
main()
|
<commit_before><commit_msg>Print the number of bpm y elements in the machine<commit_after>import pytac.load_csv
import pytac.epics
def main():
lattice = pytac.load_csv.load('VMX', pytac.epics.EpicsControlSystem())
bpms = lattice.get_elements('BPM')
bpms_n = 0
try:
for bpm in bpms:
bpm.get_pv_name('y')
bpms_n += 1
print 'There exist {0} BPMy elements in the ring.'.format(bpms_n)
except:
print 'Warning! There exists a bpm with no y field.'
if __name__=='__main__':
main()
|
|
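In the pytac record the try wraps the whole loop, so one BPM without a 'y' field aborts the count and the total is never printed. If the intent is to count every BPM that does have the field, the try belongs inside the loop; a self-contained Python 3 sketch with a stub element (FakeBPM and the PV string are invented for illustration):
class FakeBPM:
    def __init__(self, has_y):
        self.has_y = has_y
    def get_pv_name(self, field):
        if field == 'y' and not self.has_y:
            raise KeyError('no y field')
        return 'SR01C-DI-EBPM-01:Y'  # hypothetical PV name
bpms = [FakeBPM(True), FakeBPM(False), FakeBPM(True)]
bpms_n = 0
for bpm in bpms:
    try:
        bpm.get_pv_name('y')
        bpms_n += 1
    except KeyError:
        print('Warning! Found a BPM with no y field.')
print('There exist {0} BPMy elements in the ring.'.format(bpms_n))  # 2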
f828d9e3472d54e1df32856dc9679ebb4a40de2a
|
recon_to_xyzv_parq.py
|
recon_to_xyzv_parq.py
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Imports
from skimage import io
import pandas as pd
import fastparquet as fp
import numpy as np
import os
import tempfile
def readTiff(filename):
"""Read data from the tiff file and return a Pandas dataframe"""
filenamePrefix = os.path.splitext(os.path.basename(filename))[0]
im = io.imread(filename)
# Reshape 3D to one giant 1D
imgdata1d = im.reshape(im.shape[0] * im.shape[1] * im.shape[2])
dataSize = im.shape[0] * im.shape[1] * im.shape[2]
sliceSize = im.shape[2] * im.shape[1]
data = {
'x': [(i % im.shape[2]) for i in range(0, dataSize)],
'y': [(i / im.shape[2] % im.shape[1]) for i in range(0, dataSize)],
'z': [int(i / sliceSize) for i in range(0, dataSize)],
'value': imgdata1d.astype(np.int32),
}
# Convert to Pandas dataframe
df = pd.DataFrame(data)
return df
def writeParquet(inputFilename, df):
"""Export Pandas dataframe as Parquet"""
filenamePrefix = os.path.splitext(os.path.basename(inputFilename))[0]
outFilepath = os.path.join(tempfile.gettempdir(), ''.join([filenamePrefix, '.parq']))
fp.write(outFilepath, df, compression='GZIP')
print outFilepath
return outFilepath
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='Script to convert data in tiff format to Parquet format')
parser.add_argument('--tiff', dest='filename', help='input tiff file')
args = parser.parse_args()
# Read TIFF file and convert it into a Pandas dataframe
df = readTiff(args.filename)
# Export dataframe as parquet
outFilepath = writeParquet(args.filename, df)
|
Add converter to xyz value data frame
|
Add converter to xyz value data frame
|
Python
|
apache-2.0
|
OpenDataAnalytics/etl,OpenDataAnalytics/etl
|
Add converter to xyz value data frame
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Imports
from skimage import io
import pandas as pd
import fastparquet as fp
import numpy as np
import os
import tempfile
def readTiff(filename):
"""Read data from the tiff file and return a Pandas dataframe"""
filenamePrefix = os.path.splitext(os.path.basename(filename))[0]
im = io.imread(filename)
# Reshape 3D to one giant 1D
imgdata1d = im.reshape(im.shape[0] * im.shape[1] * im.shape[2])
dataSize = im.shape[0] * im.shape[1] * im.shape[2]
sliceSize = im.shape[2] * im.shape[1]
data = {
'x': [(i % im.shape[2]) for i in range(0, dataSize)],
'y': [(i / im.shape[2] % im.shape[1]) for i in range(0, dataSize)],
'z': [int(i / sliceSize) for i in range(0, dataSize)],
'value': imgdata1d.astype(np.int32),
}
# Convert to Pandas dataframe
df = pd.DataFrame(data)
return df
def writeParquet(inputFilename, df):
"""Export Pandas dataframe as Parquet"""
filenamePrefix = os.path.splitext(os.path.basename(inputFilename))[0]
outFilepath = os.path.join(tempfile.gettempdir(), ''.join([filenamePrefix, '.parq']))
fp.write(outFilepath, df, compression='GZIP')
print outFilepath
return outFilepath
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='Script to convert data in tiff format to Parquet format')
parser.add_argument('--tiff', dest='filename', help='input tiff file')
args = parser.parse_args()
# Read TIFF file and convert it into a Pandas dataframe
df = readTiff(args.filename)
# Export dataframe as parquet
outFilepath = writeParquet(args.filename, df)
|
<commit_before><commit_msg>Add converter to xyz value data frame<commit_after>
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Imports
from skimage import io
import pandas as pd
import fastparquet as fp
import numpy as np
import os
import tempfile
def readTiff(filename):
"""Read data from the tiff file and return a Pandas dataframe"""
filenamePrefix = os.path.splitext(os.path.basename(filename))[0]
im = io.imread(filename)
# Reshape 3D to one giant 1D
imgdata1d = im.reshape(im.shape[0] * im.shape[1] * im.shape[2])
dataSize = im.shape[0] * im.shape[1] * im.shape[2]
sliceSize = im.shape[2] * im.shape[1]
data = {
'x': [(i % im.shape[2]) for i in range(0, dataSize)],
'y': [(i / im.shape[2] % im.shape[1]) for i in range(0, dataSize)],
'z': [int(i / sliceSize) for i in range(0, dataSize)],
'value': imgdata1d.astype(np.int32),
}
# Convert to Pandas dataframe
df = pd.DataFrame(data)
return df
def writeParquet(inputFilename, df):
"""Export Pandas dataframe as Parquet"""
filenamePrefix = os.path.splitext(os.path.basename(inputFilename))[0]
outFilepath = os.path.join(tempfile.gettempdir(), ''.join([filenamePrefix, '.parq']))
fp.write(outFilepath, df, compression='GZIP')
print outFilepath
return outFilepath
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='Script to convert data in tiff format to Parquet format')
parser.add_argument('--tiff', dest='filename', help='input tiff file')
args = parser.parse_args()
# Read TIFF file and convert it into a Pandas dataframe
df = readTiff(args.filename)
# Export dataframe as parquet
outFilepath = writeParquet(args.filename, df)
|
Add converter to xyz value data frame#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Imports
from skimage import io
import pandas as pd
import fastparquet as fp
import numpy as np
import os
import tempfile
def readTiff(filename):
"""Read data from the tiff file and return a Pandas dataframe"""
filenamePrefix = os.path.splitext(os.path.basename(filename))[0]
im = io.imread(filename)
# Reshape 3D to one giant 1D
imgdata1d = im.reshape(im.shape[0] * im.shape[1] * im.shape[2])
dataSize = im.shape[0] * im.shape[1] * im.shape[2]
sliceSize = im.shape[2] * im.shape[1]
data = {
'x': [(i % im.shape[2]) for i in range(0, dataSize)],
'y': [(i / im.shape[2] % im.shape[1]) for i in range(0, dataSize)],
'z': [int(i / sliceSize) for i in range(0, dataSize)],
'value': imgdata1d.astype(np.int32),
}
# Convert to Pandas dataframe
df = pd.DataFrame(data)
return df
def writeParquet(inputFilename, df):
"""Export Pandas dataframe as Parquet"""
filenamePrefix = os.path.splitext(os.path.basename(inputFilename))[0]
outFilepath = os.path.join(tempfile.gettempdir(), ''.join([filenamePrefix, '.parq']))
fp.write(outFilepath, df, compression='GZIP')
print outFilepath
return outFilepath
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='Script to convert data in tiff format to Parquet format')
parser.add_argument('--tiff', dest='filename', help='input tiff file')
args = parser.parse_args()
# Read TIFF file and convert it into a Pandas dataframe
df = readTiff(args.filename)
# Export dataframe as parquet
outFilepath = writeParquet(args.filename, df)
|
<commit_before><commit_msg>Add converter to xyz value data frame<commit_after>#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Imports
from skimage import io
import pandas as pd
import fastparquet as fp
import numpy as np
import os
import tempfile
def readTiff(filename):
"""Read data from the tiff file and return a Pandas dataframe"""
filenamePrefix = os.path.splitext(os.path.basename(filename))[0]
im = io.imread(filename)
# Reshape 3D to one giant 1D
imgdata1d = im.reshape(im.shape[0] * im.shape[1] * im.shape[2])
dataSize = im.shape[0] * im.shape[1] * im.shape[2]
sliceSize = im.shape[2] * im.shape[1]
data = {
'x': [(i % im.shape[2]) for i in range(0, dataSize)],
'y': [(i / im.shape[2] % im.shape[1]) for i in range(0, dataSize)],
'z': [int(i / sliceSize) for i in range(0, dataSize)],
'value': imgdata1d.astype(np.int32),
}
# Convert to Pandas dataframe
df = pd.DataFrame(data)
return df
def writeParquet(inputFilename, df):
"""Export Pandas dataframe as Parquet"""
filenamePrefix = os.path.splitext(os.path.basename(inputFilename))[0]
outFilepath = os.path.join(tempfile.gettempdir(), ''.join([filenamePrefix, '.parq']))
fp.write(outFilepath, df, compression='GZIP')
print outFilepath
return outFilepath
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='Script to convert data in tiff format to Parquet format')
parser.add_argument('--tiff', dest='filename', help='input tiff file')
args = parser.parse_args()
# Read TIFF file and convert it into a Pandas dataframe
df = readTiff(args.filename)
# Export dataframe as parquet
outFilepath = writeParquet(args.filename, df)
|
|
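The f828d9e3 record above builds its x, y and z columns with three per-voxel Python list comprehensions, and the `print outFilepath` statement marks it as Python 2 code (under Python 3 the `i / im.shape[2]` term in the y column would yield floats). For comparison, a vectorized NumPy sketch of the same reshape — an illustration only, not code from the OpenDataAnalytics/etl repo:

# Hypothetical vectorized alternative to the record's per-voxel loops.
import numpy as np
import pandas as pd

def tiff_volume_to_frame(im):
    """Turn a (z, y, x) volume into an x/y/z/value dataframe."""
    # np.indices builds all three coordinate grids in one C-level pass,
    # matching the record's x = i % X, y = (i // X) % Y, z = i // (X * Y).
    z, y, x = np.indices(im.shape)
    return pd.DataFrame({
        'x': x.ravel(),
        'y': y.ravel(),
        'z': z.ravel(),
        'value': im.ravel().astype(np.int32),
    })

The ravel order is C-contiguous, so the rows line up with the record's flat-index arithmetic while dropping the Python-loop overhead.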
8c3b78d8c9f4beda07959f61c09de3f67023461e
|
corehq/apps/reminders/management/commands/check_for_old_reminders.py
|
corehq/apps/reminders/management/commands/check_for_old_reminders.py
|
from django.core.management.base import BaseCommand
from corehq.apps.reminders.models import (CaseReminderHandler,
UI_COMPLEX, ON_DATETIME, REMINDER_TYPE_DEFAULT,
RECIPIENT_SURVEY_SAMPLE)
from dimagi.utils.couch.database import iter_docs
class Command(BaseCommand):
"""
The new reminders UI removes support for some edge cases, and the
purpose of this script is to confirm that there are no reminder
definitions which use those edge use cases.
Usage:
python manage.py check_for_old_reminders
"""
args = ""
help = ("A command which checks for edge use cases that are no longer"
"supported in the new reminders UI")
def get_reminder_definition_ids(self):
result = CaseReminderHandler.view(
"reminders/handlers_by_domain_case_type",
include_docs=False,
).all()
return [row["id"] for row in result]
def check_for_ui_type(self, handler):
if handler.ui_type != UI_COMPLEX:
print "%s: Handler %s does not have advanced ui flag set" % (
handler.domain, handler._id)
def check_for_multiple_fire_time_types(self, handler):
if len(handler.events) > 1:
fire_time_type = handler.events[0].fire_time_type
for event in handler.events[1:]:
if event.fire_time_type != fire_time_type:
print ("%s: Handler %s references multiple fire time "
"types" % (handler.domain, handler._id))
def check_for_datetime_criteria(self, handler):
if handler.start_condition_type == ON_DATETIME:
print ("%s: Handler %s starts on datetime criteria and is not a "
"broadcast" % (handler.domain, handler._id))
def check_for_case_group_recipient(self, handler):
if handler.recipient == RECIPIENT_SURVEY_SAMPLE:
print ("%s: Handler %s sends to case group and is not a "
"broadcast" % (handler.domain, handler._id))
def handle(self, *args, **options):
ids = self.get_reminder_definition_ids()
for handler_doc in iter_docs(CaseReminderHandler.get_db(), ids):
handler = CaseReminderHandler.wrap(handler_doc)
if handler.reminder_type == REMINDER_TYPE_DEFAULT:
self.check_for_ui_type(handler)
self.check_for_multiple_fire_time_types(handler)
self.check_for_datetime_criteria(handler)
self.check_for_case_group_recipient(handler)
|
Add script to check for unsupported edge cases
|
Add script to check for unsupported edge cases
|
Python
|
bsd-3-clause
|
qedsoftware/commcare-hq,qedsoftware/commcare-hq,qedsoftware/commcare-hq,puttarajubr/commcare-hq,dimagi/commcare-hq,dimagi/commcare-hq,puttarajubr/commcare-hq,puttarajubr/commcare-hq,dimagi/commcare-hq,dimagi/commcare-hq,puttarajubr/commcare-hq,qedsoftware/commcare-hq,qedsoftware/commcare-hq,dimagi/commcare-hq
|
Add script to check for unsupported edge cases
|
from django.core.management.base import BaseCommand
from corehq.apps.reminders.models import (CaseReminderHandler,
UI_COMPLEX, ON_DATETIME, REMINDER_TYPE_DEFAULT,
RECIPIENT_SURVEY_SAMPLE)
from dimagi.utils.couch.database import iter_docs
class Command(BaseCommand):
"""
The new reminders UI removes support for some edge cases, and the
purpose of this script is to confirm that there are no reminder
definitions which use those edge use cases.
Usage:
python manage.py check_for_old_reminders
"""
args = ""
help = ("A command which checks for edge use cases that are no longer"
"supported in the new reminders UI")
def get_reminder_definition_ids(self):
result = CaseReminderHandler.view(
"reminders/handlers_by_domain_case_type",
include_docs=False,
).all()
return [row["id"] for row in result]
def check_for_ui_type(self, handler):
if handler.ui_type != UI_COMPLEX:
print "%s: Handler %s does not have advanced ui flag set" % (
handler.domain, handler._id)
def check_for_multiple_fire_time_types(self, handler):
if len(handler.events) > 1:
fire_time_type = handler.events[0].fire_time_type
for event in handler.events[1:]:
if event.fire_time_type != fire_time_type:
print ("%s: Handler %s references multiple fire time "
"types" % (handler.domain, handler._id))
def check_for_datetime_criteria(self, handler):
if handler.start_condition_type == ON_DATETIME:
print ("%s: Handler %s starts on datetime criteria and is not a "
"broadcast" % (handler.domain, handler._id))
def check_for_case_group_recipient(self, handler):
if handler.recipient == RECIPIENT_SURVEY_SAMPLE:
print ("%s: Handler %s sends to case group and is not a "
"broadcast" % (handler.domain, handler._id))
def handle(self, *args, **options):
ids = self.get_reminder_definition_ids()
for handler_doc in iter_docs(CaseReminderHandler.get_db(), ids):
handler = CaseReminderHandler.wrap(handler_doc)
if handler.reminder_type == REMINDER_TYPE_DEFAULT:
self.check_for_ui_type(handler)
self.check_for_multiple_fire_time_types(handler)
self.check_for_datetime_criteria(handler)
self.check_for_case_group_recipient(handler)
|
<commit_before><commit_msg>Add script to check for unsupported edge cases<commit_after>
|
from django.core.management.base import BaseCommand
from corehq.apps.reminders.models import (CaseReminderHandler,
UI_COMPLEX, ON_DATETIME, REMINDER_TYPE_DEFAULT,
RECIPIENT_SURVEY_SAMPLE)
from dimagi.utils.couch.database import iter_docs
class Command(BaseCommand):
"""
The new reminders UI removes support for some edge cases, and the
purpose of this script is to confirm that there are no reminder
definitions which use those edge use cases.
Usage:
python manage.py check_for_old_reminders
"""
args = ""
help = ("A command which checks for edge use cases that are no longer"
"supported in the new reminders UI")
def get_reminder_definition_ids(self):
result = CaseReminderHandler.view(
"reminders/handlers_by_domain_case_type",
include_docs=False,
).all()
return [row["id"] for row in result]
def check_for_ui_type(self, handler):
if handler.ui_type != UI_COMPLEX:
print "%s: Handler %s does not have advanced ui flag set" % (
handler.domain, handler._id)
def check_for_multiple_fire_time_types(self, handler):
if len(handler.events) > 1:
fire_time_type = handler.events[0].fire_time_type
for event in handler.events[1:]:
if event.fire_time_type != fire_time_type:
print ("%s: Handler %s references multiple fire time "
"types" % (handler.domain, handler._id))
def check_for_datetime_criteria(self, handler):
if handler.start_condition_type == ON_DATETIME:
print ("%s: Handler %s starts on datetime criteria and is not a "
"broadcast" % (handler.domain, handler._id))
def check_for_case_group_recipient(self, handler):
if handler.recipient == RECIPIENT_SURVEY_SAMPLE:
print ("%s: Handler %s sends to case group and is not a "
"broadcast" % (handler.domain, handler._id))
def handle(self, *args, **options):
ids = self.get_reminder_definition_ids()
for handler_doc in iter_docs(CaseReminderHandler.get_db(), ids):
handler = CaseReminderHandler.wrap(handler_doc)
if handler.reminder_type == REMINDER_TYPE_DEFAULT:
self.check_for_ui_type(handler)
self.check_for_multiple_fire_time_types(handler)
self.check_for_datetime_criteria(handler)
self.check_for_case_group_recipient(handler)
|
Add script to check for unsupported edge casesfrom django.core.management.base import BaseCommand
from corehq.apps.reminders.models import (CaseReminderHandler,
UI_COMPLEX, ON_DATETIME, REMINDER_TYPE_DEFAULT,
RECIPIENT_SURVEY_SAMPLE)
from dimagi.utils.couch.database import iter_docs
class Command(BaseCommand):
"""
The new reminders UI removes support for some edge cases, and the
purpose of this script is to confirm that there are no reminder
definitions which use those edge use cases.
Usage:
python manage.py check_for_old_reminders
"""
args = ""
help = ("A command which checks for edge use cases that are no longer"
"supported in the new reminders UI")
def get_reminder_definition_ids(self):
result = CaseReminderHandler.view(
"reminders/handlers_by_domain_case_type",
include_docs=False,
).all()
return [row["id"] for row in result]
def check_for_ui_type(self, handler):
if handler.ui_type != UI_COMPLEX:
print "%s: Handler %s does not have advanced ui flag set" % (
handler.domain, handler._id)
def check_for_multiple_fire_time_types(self, handler):
if len(handler.events) > 1:
fire_time_type = handler.events[0].fire_time_type
for event in handler.events[1:]:
if event.fire_time_type != fire_time_type:
print ("%s: Handler %s references multiple fire time "
"types" % (handler.domain, handler._id))
def check_for_datetime_criteria(self, handler):
if handler.start_condition_type == ON_DATETIME:
print ("%s: Handler %s starts on datetime criteria and is not a "
"broadcast" % (handler.domain, handler._id))
def check_for_case_group_recipient(self, handler):
if handler.recipient == RECIPIENT_SURVEY_SAMPLE:
print ("%s: Handler %s sends to case group and is not a "
"broadcast" % (handler.domain, handler._id))
def handle(self, *args, **options):
ids = self.get_reminder_definition_ids()
for handler_doc in iter_docs(CaseReminderHandler.get_db(), ids):
handler = CaseReminderHandler.wrap(handler_doc)
if handler.reminder_type == REMINDER_TYPE_DEFAULT:
self.check_for_ui_type(handler)
self.check_for_multiple_fire_time_types(handler)
self.check_for_datetime_criteria(handler)
self.check_for_case_group_recipient(handler)
|
<commit_before><commit_msg>Add script to check for unsupported edge cases<commit_after>from django.core.management.base import BaseCommand
from corehq.apps.reminders.models import (CaseReminderHandler,
UI_COMPLEX, ON_DATETIME, REMINDER_TYPE_DEFAULT,
RECIPIENT_SURVEY_SAMPLE)
from dimagi.utils.couch.database import iter_docs
class Command(BaseCommand):
"""
The new reminders UI removes support for some edge cases, and the
purpose of this script is to confirm that there are no reminder
definitions which use those edge use cases.
Usage:
python manage.py check_for_old_reminders
"""
args = ""
help = ("A command which checks for edge use cases that are no longer"
"supported in the new reminders UI")
def get_reminder_definition_ids(self):
result = CaseReminderHandler.view(
"reminders/handlers_by_domain_case_type",
include_docs=False,
).all()
return [row["id"] for row in result]
def check_for_ui_type(self, handler):
if handler.ui_type != UI_COMPLEX:
print "%s: Handler %s does not have advanced ui flag set" % (
handler.domain, handler._id)
def check_for_multiple_fire_time_types(self, handler):
if len(handler.events) > 1:
fire_time_type = handler.events[0].fire_time_type
for event in handler.events[1:]:
if event.fire_time_type != fire_time_type:
print ("%s: Handler %s references multiple fire time "
"types" % (handler.domain, handler._id))
def check_for_datetime_criteria(self, handler):
if handler.start_condition_type == ON_DATETIME:
print ("%s: Handler %s starts on datetime criteria and is not a "
"broadcast" % (handler.domain, handler._id))
def check_for_case_group_recipient(self, handler):
if handler.recipient == RECIPIENT_SURVEY_SAMPLE:
print ("%s: Handler %s sends to case group and is not a "
"broadcast" % (handler.domain, handler._id))
def handle(self, *args, **options):
ids = self.get_reminder_definition_ids()
for handler_doc in iter_docs(CaseReminderHandler.get_db(), ids):
handler = CaseReminderHandler.wrap(handler_doc)
if handler.reminder_type == REMINDER_TYPE_DEFAULT:
self.check_for_ui_type(handler)
self.check_for_multiple_fire_time_types(handler)
self.check_for_datetime_criteria(handler)
self.check_for_case_group_recipient(handler)
|
|
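The 8c3b78d8 command above repeats the same wrap-and-check pattern across four independent audits. A hypothetical refactor of that loop — checker functions that return a message or None, reusing the record's own imports and attribute names:

# Sketch only; UI_COMPLEX and ON_DATETIME come from the record's imports.
from corehq.apps.reminders.models import UI_COMPLEX, ON_DATETIME

def check_ui_type(handler):
    if handler.ui_type != UI_COMPLEX:
        return "%s: Handler %s does not have advanced ui flag set" % (
            handler.domain, handler._id)

def check_datetime_criteria(handler):
    if handler.start_condition_type == ON_DATETIME:
        return ("%s: Handler %s starts on datetime criteria and is not a "
                "broadcast" % (handler.domain, handler._id))

CHECKS = [check_ui_type, check_datetime_criteria]

def audit(handlers):
    """Yield one message per failed check per handler."""
    for handler in handlers:
        for check in CHECKS:
            message = check(handler)
            if message:
                yield message

A new edge case then becomes one function plus one list entry rather than another branch in handle().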
670a326755387c240f267b715f8b1c53dda1942c
|
studygroups/migrations/0149_update_feedback_reflection.py
|
studygroups/migrations/0149_update_feedback_reflection.py
|
# Generated by Django 2.2.13 on 2021-05-19 15:21
from django.db import migrations
import json
def update_feedback_reflection(apps, schema_editor):
Feedback = apps.get_model("studygroups", "Feedback")
for feedback in Feedback.objects.exclude(reflection=''):
new_reflection = {
'answer': feedback.reflection,
'promptIndex': 0,
'prompt': 'Anything else you want to share?',
}
feedback.reflection = json.dumps(new_reflection)
feedback.save()
class Migration(migrations.Migration):
dependencies = [
('studygroups', '0148_auto_20210513_0951'),
]
operations = [
migrations.RunPython(update_feedback_reflection),
]
|
Add data migration for storing feedback.reflection in updated format
|
Add data migration for storing feedback.reflection in updated format
|
Python
|
mit
|
p2pu/learning-circles,p2pu/learning-circles,p2pu/learning-circles,p2pu/learning-circles
|
Add data migration for storing feedback.reflection in updated format
|
# Generated by Django 2.2.13 on 2021-05-19 15:21
from django.db import migrations
import json
def update_feedback_reflection(apps, schema_editor):
Feedback = apps.get_model("studygroups", "Feedback")
for feedback in Feedback.objects.exclude(reflection=''):
new_reflection = {
'answer': feedback.reflection,
'promptIndex': 0,
'prompt': 'Anything else you want to share?',
}
feedback.reflection = json.dumps(new_reflection)
feedback.save()
class Migration(migrations.Migration):
dependencies = [
('studygroups', '0148_auto_20210513_0951'),
]
operations = [
migrations.RunPython(update_feedback_reflection),
]
|
<commit_before><commit_msg>Add data migration for storing feedback.reflection in updated format<commit_after>
|
# Generated by Django 2.2.13 on 2021-05-19 15:21
from django.db import migrations
import json
def update_feedback_reflection(apps, schema_editor):
Feedback = apps.get_model("studygroups", "Feedback")
for feedback in Feedback.objects.exclude(reflection=''):
new_reflection = {
'answer': feedback.reflection,
'promptIndex': 0,
'prompt': 'Anything else you want to share?',
}
feedback.reflection = json.dumps(new_reflection)
feedback.save()
class Migration(migrations.Migration):
dependencies = [
('studygroups', '0148_auto_20210513_0951'),
]
operations = [
migrations.RunPython(update_feedback_reflection),
]
|
Add data migration for storing feedback.reflection in updated format# Generated by Django 2.2.13 on 2021-05-19 15:21
from django.db import migrations
import json
def update_feedback_reflection(apps, schema_editor):
Feedback = apps.get_model("studygroups", "Feedback")
for feedback in Feedback.objects.exclude(reflection=''):
new_reflection = {
'answer': feedback.reflection,
'promptIndex': 0,
'prompt': 'Anything else you want to share?',
}
feedback.reflection = json.dumps(new_reflection)
feedback.save()
class Migration(migrations.Migration):
dependencies = [
('studygroups', '0148_auto_20210513_0951'),
]
operations = [
migrations.RunPython(update_feedback_reflection),
]
|
<commit_before><commit_msg>Add data migration for storing feedback.reflection in updated format<commit_after># Generated by Django 2.2.13 on 2021-05-19 15:21
from django.db import migrations
import json
def update_feedback_reflection(apps, schema_editor):
Feedback = apps.get_model("studygroups", "Feedback")
for feedback in Feedback.objects.exclude(reflection=''):
new_reflection = {
'answer': feedback.reflection,
'promptIndex': 0,
'prompt': 'Anything else you want to share?',
}
feedback.reflection = json.dumps(new_reflection)
feedback.save()
class Migration(migrations.Migration):
dependencies = [
('studygroups', '0148_auto_20210513_0951'),
]
operations = [
migrations.RunPython(update_feedback_reflection),
]
|
|
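The 670a3267 migration above is forward-only, so Django's RunPython will refuse to reverse past it. A hedged sketch of the matching reverse function, assuming every non-empty reflection was written by the forward pass:

# Illustrative reverse for the record's RunPython migration.
import json

def revert_feedback_reflection(apps, schema_editor):
    Feedback = apps.get_model("studygroups", "Feedback")
    for feedback in Feedback.objects.exclude(reflection=''):
        data = json.loads(feedback.reflection)
        # Unwrap the JSON envelope back to the plain-text answer.
        feedback.reflection = data.get('answer', '')
        feedback.save()

# Pairing it in operations keeps `migrate studygroups 0148` working:
#     migrations.RunPython(update_feedback_reflection,
#                          revert_feedback_reflection)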
4efcaa43c8c69c7fdbaec74d7af2b71dbc6faea6
|
tests/util/test_treecache.py
|
tests/util/test_treecache.py
|
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .. import unittest
from synapse.util.caches.treecache import TreeCache
class TreeCacheTestCase(unittest.TestCase):
def test_get_set_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.get(("a",)), "A")
self.assertEquals(cache.get(("b",)), "B")
def test_pop_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.pop(("a",)), "A")
self.assertEquals(cache.pop(("a",)), None)
self.assertEquals(cache.get(("b",)), "B")
def test_get_set_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.get(("b", "a")), "BA")
def test_pop_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.pop(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.pop(("b", "a")), "BA")
self.assertEquals(cache.pop(("b", "a")), None)
def test_pop_mixedlevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
cache.pop(("a",))
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), None)
self.assertEquals(cache.get(("b", "a")), "BA")
|
Add tests for treecache directly and test del_multi at the LruCache level too.
|
Add tests for treecache directly and test del_multi at the LruCache level too.
|
Python
|
apache-2.0
|
matrix-org/synapse,matrix-org/synapse,TribeMedia/synapse,TribeMedia/synapse,TribeMedia/synapse,matrix-org/synapse,matrix-org/synapse,TribeMedia/synapse,matrix-org/synapse,matrix-org/synapse,TribeMedia/synapse
|
Add tests for treecache directly and test del_multi at the LruCache level too.
|
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .. import unittest
from synapse.util.caches.treecache import TreeCache
class TreeCacheTestCase(unittest.TestCase):
def test_get_set_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.get(("a",)), "A")
self.assertEquals(cache.get(("b",)), "B")
def test_pop_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.pop(("a",)), "A")
self.assertEquals(cache.pop(("a",)), None)
self.assertEquals(cache.get(("b",)), "B")
def test_get_set_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.get(("b", "a")), "BA")
def test_pop_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.pop(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.pop(("b", "a")), "BA")
self.assertEquals(cache.pop(("b", "a")), None)
def test_pop_mixedlevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
cache.pop(("a",))
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), None)
self.assertEquals(cache.get(("b", "a")), "BA")
|
<commit_before><commit_msg>Add tests for treecache directly and test del_multi at the LruCache level too.<commit_after>
|
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .. import unittest
from synapse.util.caches.treecache import TreeCache
class TreeCacheTestCase(unittest.TestCase):
def test_get_set_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.get(("a",)), "A")
self.assertEquals(cache.get(("b",)), "B")
def test_pop_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.pop(("a",)), "A")
self.assertEquals(cache.pop(("a",)), None)
self.assertEquals(cache.get(("b",)), "B")
def test_get_set_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.get(("b", "a")), "BA")
def test_pop_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.pop(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.pop(("b", "a")), "BA")
self.assertEquals(cache.pop(("b", "a")), None)
def test_pop_mixedlevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
cache.pop(("a",))
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), None)
self.assertEquals(cache.get(("b", "a")), "BA")
|
Add tests for treecache directly and test del_multi at the LruCache level too.# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .. import unittest
from synapse.util.caches.treecache import TreeCache
class TreeCacheTestCase(unittest.TestCase):
def test_get_set_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.get(("a",)), "A")
self.assertEquals(cache.get(("b",)), "B")
def test_pop_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.pop(("a",)), "A")
self.assertEquals(cache.pop(("a",)), None)
self.assertEquals(cache.get(("b",)), "B")
def test_get_set_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.get(("b", "a")), "BA")
def test_pop_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.pop(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.pop(("b", "a")), "BA")
self.assertEquals(cache.pop(("b", "a")), None)
def test_pop_mixedlevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
cache.pop(("a",))
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), None)
self.assertEquals(cache.get(("b", "a")), "BA")
|
<commit_before><commit_msg>Add tests for treecache directly and test del_multi at the LruCache level too.<commit_after># -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .. import unittest
from synapse.util.caches.treecache import TreeCache
class TreeCacheTestCase(unittest.TestCase):
def test_get_set_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.get(("a",)), "A")
self.assertEquals(cache.get(("b",)), "B")
def test_pop_onelevel(self):
cache = TreeCache()
cache[("a",)] = "A"
cache[("b",)] = "B"
self.assertEquals(cache.pop(("a",)), "A")
self.assertEquals(cache.pop(("a",)), None)
self.assertEquals(cache.get(("b",)), "B")
def test_get_set_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.get(("b", "a")), "BA")
def test_pop_twolevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.pop(("a", "a")), "AA")
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), "AB")
self.assertEquals(cache.pop(("b", "a")), "BA")
self.assertEquals(cache.pop(("b", "a")), None)
def test_pop_mixedlevel(self):
cache = TreeCache()
cache[("a", "a")] = "AA"
cache[("a", "b")] = "AB"
cache[("b", "a")] = "BA"
self.assertEquals(cache.get(("a", "a")), "AA")
cache.pop(("a",))
self.assertEquals(cache.get(("a", "a")), None)
self.assertEquals(cache.get(("a", "b")), None)
self.assertEquals(cache.get(("b", "a")), "BA")
|
|
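The 4efcaa43 tests above pin down TreeCache's tuple-keyed get/set/pop behaviour, including popping a whole subtree by key prefix (test_pop_mixedlevel). A minimal nested-dict sketch that would satisfy those assertions — synapse's real implementation also tracks entry counts and returns popped subtrees for iteration, which this illustration omits:

# Minimal illustration of the interface the record's tests exercise.
_SENTINEL = object()

class TreeCache(object):
    def __init__(self):
        self._root = {}

    def __setitem__(self, key, value):
        node = self._root
        for part in key[:-1]:
            node = node.setdefault(part, {})
        node[key[-1]] = value

    def get(self, key, default=None):
        node = self._root
        for part in key:
            node = node.get(part, _SENTINEL)
            if node is _SENTINEL:
                return default
        return node

    def pop(self, key, default=None):
        node = self._root
        for part in key[:-1]:
            node = node.get(part, _SENTINEL)
            if node is _SENTINEL:
                return default
        return node.pop(key[-1], default)

Popping ("a",) removes the whole {"a": ..., "b": ...} subtree in one dict.pop, which is exactly what test_pop_mixedlevel verifies through the subsequent get calls.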
60bf275ab9b913663078d5098ce3754c7d76c23a
|
test_build.py
|
test_build.py
|
# This script should run without errors whenever we update the
# kaggle/python container. It checks that all our most popular packages can
# be loaded and used without errors.
import numpy as np
print("Numpy imported ok")
print("Your lucky number is: " + str(np.random.randint(100)))
import pandas as pd
print("Pandas imported ok")
from sklearn import datasets
print("sklearn imported ok")
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.ensemble import RandomForestClassifier
rf1 = RandomForestClassifier()
rf1.fit(X,y)
print("sklearn RandomForestClassifier: ok")
from xgboost import XGBClassifier
xgb1 = XGBClassifier(n_estimators=3)
xgb1.fit(X[0:70],y[0:70])
print("xgboost XGBClassifier: ok")
import matplotlib.pyplot as plt
plt.plot(np.linspace(0,1,50), np.random.rand(50))
plt.savefig("plot1.png")
print("matplotlib.pyplot ok")
from mpl_toolkits.basemap import Basemap
print("Basemap ok")
import theano
print("Theano ok")
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD
print("keras ok")
import nltk
from nltk.stem import WordNetLemmatizer
print("nltk ok")
import tensorflow as tf
hello = tf.constant('TensorFlow ok')
sess = tf.Session()
print(sess.run(hello))
import cv2
img = cv2.imread('plot1.png',0)
print("OpenCV ok")
|
Add test script to build repo
|
Add test script to build repo
|
Python
|
apache-2.0
|
Kaggle/docker-python,Kaggle/docker-python
|
Add test script to build repo
|
# This script should run without errors whenever we update the
# kaggle/python container. It checks that all our most popular packages can
# be loaded and used without errors.
import numpy as np
print("Numpy imported ok")
print("Your lucky number is: " + str(np.random.randint(100)))
import pandas as pd
print("Pandas imported ok")
from sklearn import datasets
print("sklearn imported ok")
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.ensemble import RandomForestClassifier
rf1 = RandomForestClassifier()
rf1.fit(X,y)
print("sklearn RandomForestClassifier: ok")
from xgboost import XGBClassifier
xgb1 = XGBClassifier(n_estimators=3)
xgb1.fit(X[0:70],y[0:70])
print("xgboost XGBClassifier: ok")
import matplotlib.pyplot as plt
plt.plot(np.linspace(0,1,50), np.random.rand(50))
plt.savefig("plot1.png")
print("matplotlib.pyplot ok")
from mpl_toolkits.basemap import Basemap
print("Basemap ok")
import theano
print("Theano ok")
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD
print("keras ok")
import nltk
from nltk.stem import WordNetLemmatizer
print("nltk ok")
import tensorflow as tf
hello = tf.constant('TensorFlow ok')
sess = tf.Session()
print(sess.run(hello))
import cv2
img = cv2.imread('plot1.png',0)
print("OpenCV ok")
|
<commit_before><commit_msg>Add test script to build repo<commit_after>
|
# This script should run without errors whenever we update the
# kaggle/python container. It checks that all our most popular packages can
# be loaded and used without errors.
import numpy as np
print("Numpy imported ok")
print("Your lucky number is: " + str(np.random.randint(100)))
import pandas as pd
print("Pandas imported ok")
from sklearn import datasets
print("sklearn imported ok")
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.ensemble import RandomForestClassifier
rf1 = RandomForestClassifier()
rf1.fit(X,y)
print("sklearn RandomForestClassifier: ok")
from xgboost import XGBClassifier
xgb1 = XGBClassifier(n_estimators=3)
xgb1.fit(X[0:70],y[0:70])
print("xgboost XGBClassifier: ok")
import matplotlib.pyplot as plt
plt.plot(np.linspace(0,1,50), np.random.rand(50))
plt.savefig("plot1.png")
print("matplotlib.pyplot ok")
from mpl_toolkits.basemap import Basemap
print("Basemap ok")
import theano
print("Theano ok")
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD
print("keras ok")
import nltk
from nltk.stem import WordNetLemmatizer
print("nltk ok")
import tensorflow as tf
hello = tf.constant('TensorFlow ok')
sess = tf.Session()
print(sess.run(hello))
import cv2
img = cv2.imread('plot1.png',0)
print("OpenCV ok")
|
Add test script to build repo# This script should run without errors whenever we update the
# kaggle/python container. It checks that all our most popular packages can
# be loaded and used without errors.
import numpy as np
print("Numpy imported ok")
print("Your lucky number is: " + str(np.random.randint(100)))
import pandas as pd
print("Pandas imported ok")
from sklearn import datasets
print("sklearn imported ok")
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.ensemble import RandomForestClassifier
rf1 = RandomForestClassifier()
rf1.fit(X,y)
print("sklearn RandomForestClassifier: ok")
from xgboost import XGBClassifier
xgb1 = XGBClassifier(n_estimators=3)
xgb1.fit(X[0:70],y[0:70])
print("xgboost XGBClassifier: ok")
import matplotlib.pyplot as plt
plt.plot(np.linspace(0,1,50), np.random.rand(50))
plt.savefig("plot1.png")
print("matplotlib.pyplot ok")
from mpl_toolkits.basemap import Basemap
print("Basemap ok")
import theano
print("Theano ok")
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD
print("keras ok")
import nltk
from nltk.stem import WordNetLemmatizer
print("nltk ok")
import tensorflow as tf
hello = tf.constant('TensorFlow ok')
sess = tf.Session()
print(sess.run(hello))
import cv2
img = cv2.imread('plot1.png',0)
print("OpenCV ok")
|
<commit_before><commit_msg>Add test script to build repo<commit_after># This script should run without errors whenever we update the
# kaggle/python container. It checks that all our most popular packages can
# be loaded and used without errors.
import numpy as np
print("Numpy imported ok")
print("Your lucky number is: " + str(np.random.randint(100)))
import pandas as pd
print("Pandas imported ok")
from sklearn import datasets
print("sklearn imported ok")
iris = datasets.load_iris()
X, y = iris.data, iris.target
from sklearn.ensemble import RandomForestClassifier
rf1 = RandomForestClassifier()
rf1.fit(X,y)
print("sklearn RandomForestClassifier: ok")
from xgboost import XGBClassifier
xgb1 = XGBClassifier(n_estimators=3)
xgb1.fit(X[0:70],y[0:70])
print("xgboost XGBClassifier: ok")
import matplotlib.pyplot as plt
plt.plot(np.linspace(0,1,50), np.random.rand(50))
plt.savefig("plot1.png")
print("matplotlib.pyplot ok")
from mpl_toolkits.basemap import Basemap
print("Basemap ok")
import theano
print("Theano ok")
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD
print("keras ok")
import nltk
from nltk.stem import WordNetLemmatizer
print("nltk ok")
import tensorflow as tf
hello = tf.constant('TensorFlow ok')
sess = tf.Session()
print(sess.run(hello))
import cv2
img = cv2.imread('plot1.png',0)
print("OpenCV ok")
|
|
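The 60bf275a smoke test above stops at the first failing package, and its `tf.Session()` call is TensorFlow 1.x API (removed from the top-level namespace in TF 2.x). A hedged variant of the import portion that collects every failure before exiting — the package list here is illustrative:

# Sketch: fail loudly per package instead of aborting on the first error.
import importlib
import traceback

PACKAGES = ['numpy', 'pandas', 'sklearn', 'xgboost', 'matplotlib',
            'theano', 'keras', 'nltk', 'tensorflow', 'cv2']

failures = []
for name in PACKAGES:
    try:
        importlib.import_module(name)
        print(name + " imported ok")
    except Exception:
        failures.append(name)
        traceback.print_exc()

if failures:
    raise SystemExit("import failures: " + ", ".join(failures))

Exiting non-zero keeps the container build red while the log still names every broken package.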
dc184a5a46e4695ddd5e1886912fc18de68558b7
|
extensions.py
|
extensions.py
|
valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
|
valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext', 'dvb_subtitle']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
|
Add dvb_subtitle to the bad codec list.
|
Add dvb_subtitle to the bad codec list.
|
Python
|
mit
|
Filechaser/sickbeard_mp4_automator,Collisionc/sickbeard_mp4_automator,phtagn/sickbeard_mp4_automator,phtagn/sickbeard_mp4_automator,Collisionc/sickbeard_mp4_automator,Filechaser/sickbeard_mp4_automator
|
valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
Add dvb_subtitle to the bad codec list.
|
valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext', 'dvb_subtitle']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
|
<commit_before>valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
<commit_msg>Add dvb_subtitle to the bad codec list.<commit_after>
|
valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext', 'dvb_subtitle']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
|
valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
Add dvb_subtitle to the bad codec list.valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext', 'dvb_subtitle']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
|
<commit_before>valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
<commit_msg>Add dvb_subtitle to the bad codec list.<commit_after>valid_input_extensions = ['mkv', 'avi', 'ts', 'mov', 'vob', 'mpg', 'mts']
valid_output_extensions = ['mp4', 'm4v']
valid_audio_codecs = ['aac', 'ac3', 'dts', 'eac3']
valid_poster_extensions = ['jpg', 'png']
valid_subtitle_extensions = ['srt', 'vtt', 'ass']
valid_formats = ['mp4', 'mov']
bad_subtitle_codecs = ['pgssub', 'dvdsub', 's_hdmv/pgs', 'hdmv_pgs_subtitle', 'dvd_subtitle', 'pgssub', 'dvb_teletext', 'dvb_subtitle']
tmdb_api_key = "45e408d2851e968e6e4d0353ce621c66"
valid_internal_subcodecs = ['mov_text']
valid_external_subcodecs = ['srt', 'webvtt', 'ass']
subtitle_codec_extensions = {'srt': 'srt',
'webvtt': 'vtt',
'ass': 'ass'}
bad_post_files = ['resources', '.DS_Store']
bad_post_extensions = ['.txt', '.log', '.pyc']
|