On Fri, 21 Jul 2000, Kristoffer Lawson wrote:

> Just tested the software out briefly and based on what I've seen of it the
> programme really does look and feel good.
>
> A few initial questions:
>
> - As you use Tcl throughout the system (a good thing indeed!) I wondered
> if it might be possible to build a parser in Tcl instead of C?
> Specifically, is there a Tcl API like the C API for handling the project
> database? (True, building one on top of the C API is not a big chore.)
> Specifically I would like support for XOTcl () and OTcl.

See the documentation for the Tcl interface to the DB. That should at least
get you started. As for extending the Tcl support, we have talked about this
at length and we think the best approach would be to use the Tcl parser API
exposed by regular Tcl at the C layer. We currently use our own parser
written for Tcl. To be honest, it would be better to toss out our Tcl parser
and use the one from the Tcl core, and since the Tcl core is already part of
SN, it is just waiting for someone to come along and use it :)

> OTOH it probably wouldn't be horribly difficult to edit the Tcl parser.
> The idea would be that:
>
> Class Foo -superclass Bar ;# Create Foo class, the superclass is Bar
> Foo instproc ;# Create method in Foo
> Foo proc ;# Create procedure in Foo (not seen in the object instance)
> ;# Basically just means the parser should handle local variables
> ;# as local instead of global (as it does now with anything that
> ;# doesn't occur inside a proc command parameter).
> Foo ob ;# Create object 'ob' of type 'Foo'
>
> The simple case should not be difficult but of course the problem with the
> dynamic nature of XOTcl is that you can add and remove methods, change an
> object's class or superclass at any point during run-time. How have you
> dealt with it when dealing with Tcl namespaces, which are quite dynamic
> themselves?

Yes, this is another weak area in our Tcl parser. It was written before Tcl
namespaces were in Tcl, so it has no namespace support.

> - Is the Tcl parser behaviour correct when assuming that any "set"
> statement inside curly brackets is actually setting a global variable? I
> think by default it shouldn't do anything (ie. not add a variable to the
> variable list) and have exceptions to the rule when dealing with while,
> for, proc etc. The reason for this is that inside curlies I might have
> data that might look like I'm setting a variable but actually I'm not. It
> might just be plain data, or maybe I'm sending the code to another
> interpreter or whatever.

Tcl is very dynamic, so in general you cannot assume that any command does
anything. You need to make some assumptions, otherwise you will get nowhere
fast. Set should at least know if it is a local var, a global var, or a
class instance or class common (static) var, but this is not always
possible.

> This is related to the previous comment because currently XOTcl instance
> variables appear as "global variables" which is not correct. While it
> naturally would be nice if the environment recognized them as instance
> variables I believe it's better not to recognize them at all than to mark
> them as globals.

You will need to hack it to fix that.

> - I seem to be having problems with emacs/Xemacs and the IDE (btw. I think
> it's great that you have put in extra effort to get emacs to interact with
> the IDE). When looking up for a symbol with M-. I get the following error:
>
> (1) (error/warning) Error in process filter: (void-variable DisplayableOb)

The emacs/xemacs stuff is in there for hackers, so if you have any problems
you will need to fix them on your own. When you do, please send us the
patches so the next guy will not need to.

later
Mo DeJong
Red Hat Inc
|
https://sourceware.org/pipermail/sourcenav/2000q3/000053.html
|
CC-MAIN-2021-25
|
en
|
refinedweb
|
Get the coordinates of all corners of 2 rectangles
May I know how to get the coordinates of each rectangle corner? That means I need to get the 8 corner coordinates from there.
I hope the code below helps you.
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"

using namespace cv;

int main()
{
    Mat src = Mat::zeros(400, 400, CV_8UC3);
    Rect r(100, 100, 50, 50);

    Point point0 = Point(r.x, r.y);
    Point point1 = Point(r.x + r.width, r.y);
    Point point2 = Point(r.x + r.width, r.y + r.height);
    Point point3 = Point(r.x, r.y + r.height);

    rectangle(src, r, Scalar::all(255));
    circle(src, point0, 10, Scalar(0, 0, 255));
    circle(src, point1, 10, Scalar(0, 0, 255));
    circle(src, point2, 10, Scalar(0, 0, 255));
    circle(src, point3, 10, Scalar(0, 0, 255));

    imshow("coordinates of all corners of rectangle", src);
    waitKey();
    return 0;
}
May I know what I should change or add from this code to my original code?...
Finally I got the corners of the rectangle like this picture. Thanks for your help, sir.
Asked: 2016-01-04 03:23:59 -0500
Seen: 5,294 times
Last updated: Jan 04 '16
fail to "findchessboardcorners" in images taken from a fisheye camera
Display the coordinates of a point in OpenCV C++ urgently
Extract the coordinates of a point OpenCV C++ [closed]
X, Y, Z coordinate in opencv
open cv, vector<DMatch> matches, xy coordinates
Points from one camera system to another
detect the direction of an object
Texture projection, compute UV coordinates
overlay image with offset
Show us what you tried so far.
My full code was under the comments ("main", "color.h" and "color.cpp"), and I already used the commented code to get the rectangle; this is the link....
|
https://answers.opencv.org/question/82068/get-the-each-coodinates-of-all-corner-of-2-rectangle/
|
CC-MAIN-2019-43
|
en
|
refinedweb
|
Drop this ZIP into your "hardware" directory and unpack it there. You should find the "hardware" directory in the Sketches directory. Then restart the Arduino IDE.
You should then find a complete set of bootloaders for the 28-pin and 32-pin versions.
0_1463253058014_atmega328p.zip
You will need to check that the directory is unpacked with all subdirectories. The directory name will be "atmega328p".
@GertSanders My personal observation is that some chips are more temperamental than others. I recently tried to upload a sketch (8MHz internal clock) and it failed a few times. Then I reflashed Optiboot for an external clock (8MHz external crystal) and this cured the issue. With other chips I did not have this problem.
Indeed, the internal oscillator on some ATmega328s is a bit off from 8MHz, enough so that higher-speed transmission of sketches can give a problem.
@GertSanders thanks for this bootloader pack. After one day of trying I was still unable to upload sketches to an ATmega328 (28-pin DIL) burned with the 8MHz bootloader image from this tutorial.
With your 0_1463253058014_atmega328p.zip pack unzipped into the "hardware" directory it worked like a charm.
@GertSanders: I've downloaded the files and boards.txt. I noticed you've made a distinction between the 28p DIP and the 32p TQFP. But in boards.txt I can find no differences besides the high fuse bits being global for the 28p version and individual (but all the same!) for the 32p version.
Is there a reason for that?
@DavidZH
No particular reason, I did not spend as much effort in defining the options for the 32-pin package as I did for the 28-pin package.
When my free time becomes mine again, I will be able to take this up again. For now I have to be content with a relatively short online presence every week.
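For anyone comparing the two package entries, a boards.txt stanza for a breadboard ATmega328 typically looks like the one below. This is an illustrative sketch of the format only; the board id, fuse values and bootloader file name here are assumptions, not necessarily the values in Gert's pack:

```text
atmega328bb8.name=ATmega328p (28p DIP, 8MHz ext. crystal, 38K4)
atmega328bb8.upload.protocol=arduino
atmega328bb8.upload.maximum_size=32256
atmega328bb8.upload.speed=38400
atmega328bb8.bootloader.low_fuses=0xFF
atmega328bb8.bootloader.high_fuses=0xDE
atmega328bb8.bootloader.extended_fuses=0xFD
atmega328bb8.bootloader.file=myoptiboot/optiboot_atmega328_8mhz_38k4.hex
atmega328bb8.build.mcu=atmega328p
atmega328bb8.build.f_cpu=8000000L
atmega328bb8.build.core=arduino:arduino
atmega328bb8.build.variant=28PinBoard
atmega328bb8.build.board=AVR_ATMEGA328BB
```

The high fuse byte is what selects the bootloader size and BOOTRST, so keeping it per-entry (as in the 32-pin section) makes sense when different bootloader images need different boot section sizes, even if the values happen to be identical today.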
@GertSanders
No worries! It just caught my eye and was wondering. I'll keep an eye on this as I'm finalising my node network and have to choose a bootloader about now.
@GertSanders I am finally getting around to trying some of these bootloaders. Thanks for making them! Would you mind posting the latest .zip file to the openhardware site? I started with those files and was getting the same errors as others mentioned. As I got further down this thread I noticed I was missing some files.
Thanks again for making these!
The site will still unpack zips and extract the content. I could disable this, but it would make it harder to upload large projects.
Like @GertSanders proposes, allowing (and not unpacking) rar files might be a better solution.
Ok... rar and gz files are now allowed. They will be categorised as "design" files.
I have added my atmega328 directory in a RAR file and uploaded it to the site.
This compressed file contains a directory structure that contains all bootloaders I use, and the extra files needed to make most of those available from the Arduino IDE.
To use it:
Close the Arduino IDE.
Unpack the RAR file inside the "hardware" directory, which sits inside the Arduino Sketches directory. If you do not have a "hardware" directory inside the Arduino Sketches directory, you need to create it first.
Open the IDE again.
My directory structure looks like this:
[Arduino]
  [hardware]
    [atmega328p]
      [avr]
        [bootloaders]
          [myoptiboot]
            ... all the HEX files ...
        [variants]
          [28PinBoard]
            pins_arduino.h
          [32PinBoard]
            pins_arduino.h
          [40PinBoard]
            pins_arduino.h
          [44PinBoard]
            pins_arduino.h
        boards.txt
        platform.txt
I'm finally managing to update my Arduino IDE and would like to ask how you have your boards.txt set up. Is the boards.txt file from your zip just located in the avr folder on its own, without the default boards.txt file, or are we supposed to merge your boards file with the original one? Back when I was developing my own boards, I seem to remember just using your board file without any of the default files....
Is this correct, or do we merge them?
@GertSanders Awesome, thank you for uploading it as a .rar!!! So easy now
@Samuel235 You should be able to just copy the entire folder from the .rar file into your Arduino folder. I had to create a "Hardware" folder as it didn't exist. I'm on Windows 10 and it looks like this:
After closing/re-opening the Arduino IDE it looks like this:
The boards.txt and platform.txt files are not complete yet, and I still need to check and change the pins_arduino.h files for the 40- and 44-pin ATmegas. That is still on my to-do list, but the structure is there already. I'm still learning from other hardware deployments how to define this in a practical way. Hopefully, by sharing, this will be improved.
@petewill - I did that; however, I already have the hardware folder (and it's a clean install of Windows and the Arduino IDE), and inside the hardware folder I have the default boards.txt file (the list that shows in the IDE without installing GertSanders'). So I either need to put the new one in there and rename the old one, or merge the two together.
Or am I getting this completely wrong somehow?
EDIT: It was a mistake on my own part, but I will leave this comment here so nobody else makes the same mistake:
I was attempting to put it inside of the "arduino" folder that was inside of the hardware folder. My own fault XD
I'm slightly confused here...
I'm using an Arduino Nano; I hooked it up to avrdude and read the fuses as Low: FF, High: DA, Extended: 05. I then burned new fuses of Low: FF, High: DE, Extended: 07. They burned fine, so I went into the Arduino IDE, selected the bootloader for 32-pin TQFP, 8MHz external crystal, 38K4, D13, and burned the bootloader. I got a little warning, but that is only because of the new IDE's issue with the extended bits ("You probably want to use 0xfd instead of 0x05 (double check with your datasheet first)."). But now the LED on the Nano blinks rapidly, then stops, blinks rapidly again, then stops. I can't upload a sketch to the Nano either now. I can, however, burn the old bootloader back on and upload like normal. What could be the issue here?
It almost seems like there is something wrong with the bootloader, as it does get uploaded but the LED is blinking weirdly. It may not be the bootloader itself; it could just be some corruption as it gets uploaded....
@Samuel235
If you are using a Nano, what crystal is mounted on it? Fast flashing indicates the processor is running faster than the 8MHz expected by the timing routines of the bootloader. Could it be a 16MHz model?
In that case you could try to upload a sketch at a higher speed: 76K8 baud.
Or you can try to load the bootloader for 16MHz/D13 (which in effect is the standard bootloader of an Arduino).
@Samuel235
It also seems the Nano is in a reset loop. I'm at work, so I cannot check my Mac for the fuse settings I use.
@GertSanders As soon as I saw the fast flashing I instantly thought it was running quicker than needed, but I had no idea about the reset loop you pointed out, so thanks for that. The reason I didn't even try the 16MHz fuse settings is that when I read the fuses before doing anything, they indicated it was running at 8MHz, and the blink sketch was perfectly timed on those settings.... Could it be a misread of the fuse settings, maybe? Should I read at a slower speed when using avrdude?
The crystal on the Nano is so small that I can't even see the engraving properly with a microscope.
I can confirm that the 16MHz bootloader is working perfectly on this nano. Thanks.
I have been running your bootloader for all my nodes now and that works very well.
But there is something that surprised me. When I compile the sketch below and compare it with the Moteino bootloader build, it is substantially larger.
/*************************************************************************************** ** ** Outdoor sensor v1.0 Measuring temperature, humdity, pressure and light level ** Calculating a weather forecast with the height comensated air pressure. ** powered by a solar panel and a 1000mAh Li-Ion battery. ** ** Scraped together by D Hille. MySensors library by Henrik Ekblad et al. ** ** Heat index calculation from DHT library by Adafruit ** MAX44009 bij Rob Tillaart ** Weather forecast based on AN3914 by Freescale ** ****************************************************************************************/ // MySensors definitions //#define MY_DEBUG // Enable debug prints (6 kb flash space) //#define MY_DEBUG_VERBOSE_SIGNING // Comment out when no signing is used or when everything is OK (3 kb flash space) #define MY_BAUD_RATE 57600 // Set serial baudrate #define MY_RADIO_RFM69 // Enable and select radio type attached //#define MY_IS_RFM69HW // Comment out when using standard RFM69W #define MY_NODE_ID 110 // Delete to use automatic ID assignment //#define MY_CORE_ONLY /**************************************************************************/ // Transport /**************************************************************************/ //#define MY_TRANSPORT_WAIT_READY_MS 1000 //Start the node even if no connection to the GW could be made (disable for sensor nodes). 
#define MY_TRANSPORT_STATE_RETRIES 1 #define MY_TRANSPORT_MAX_TSM_FAILURES (2u) #define MY_TRANSPORT_TIMEOUT_EXT_FAILURE_STATE (60*1000ul) /**************************************************************************/ // Security /**************************************************************************/ //#define MY_SIGNING_ATSHA204 //#define MY_SIGNING_SOFT //#define MY_SIGNING_SOFT_RANDOMSEED_PIN 7 //#define MY_SIGNING_REQUEST_SIGNATURES //#define MY_SIGNING_NODE_WHITELISTING {{.nodeId = 0,.serial = {0xD0,0xB1,0x90,0x99,0xC3,0x03,0x08,0xD1,0x34}}} //#define MY_RFM69_ENABLE_ENCRYPTION /**************************************************************************/ // Sensor definitions /**************************************************************************/ #define BATTERY_POWERED #define battUpdater 300 //Number of TX actions to count before a new battery status update is sent. #define MY_DEFAULT_TX_LED_PIN 5 // Comment out when no LED is attached. #define MY_WITH_LEDS_BLINKING_INVERSE #define SENSOR_TYPE 3 // Sensor types: // 0.: no climate sensor attached // 1.: RFM69 on-die Temperature sensor on the RFM69 module (whole integers only) // 2.: DS18B20 Dallas one-wire sensor(data pin D14) // 3.: HTU21D Temp/humidity (I2C) // 4.: BME280 Temp/humidity/pressure (I2C) (node will wake every minute to keep up with trend measureing) #define SENSOR_UPDATE 2 // Time in minutes the sensor sends an update //#define LIGHT_SENSOR_PRESENT // Comment out when not present. (I2C) (Light sensor will update every 60 seconds.) //#define INTERRUPT_SENSOR_PRESENT // Comment out when not present (e. g. 
motion sensor connect to D3) #define alti 57 //altitude offset to compensate the barometric pressure /**************************************************************************/ // Debug definitions /**************************************************************************/ //#define LOCAL_DEBUG //Comment out to switch all debug messages off (saves 2 kb flash) #ifdef MY_DEBUG //Differentiate between global debug including radio and local debug #define LOCAL_DEBUG //for the sketch alone. #endif #ifdef LOCAL_DEBUG #define Sprint(a) (Serial.print(a)) // Macro as substitute for serial debug. Will be an empty macro when #define Sprintln(a) (Serial.println(a)) // debug is switched off #else #define Sprint(a) #define Sprintln(a) #endif /**************************************************************************/ #include <Button.h> #include <SPI.h> #include <Wire.h> #include <MySensors.h> //#include <SparkFunBME280.h> //#include <MAX44009.h> // Uncomment library used in sketch #include <Adafruit_HTU21DF.h> //#include <OneWire.h> //#include <DallasTemperature.h> #include <Vcc.h> #define BMEaddr 0x76 #define Max44099Addr 0x4A #define sketchName "sensorNode(living)" #define sketchVer "1.0" #define sensorPowerPin 20 #define digitalSensorPin 14 #define analogSensorPin A0 #define oneWireBusPin 14 #define interruptPin 3 #define altSensorPin 18 #define altAnalogPin A4 #define sensorPowerPin 6 #define chanTemp 0 #define chanHum 1 #define chanHeat 2 #define chanBaro 3 #define chanDelta 4 #define chanLight 5 #define chanRate 6 #define chanInterrupt 8 #define PULLUP false #define INVERT false #define bounceTime 20 #define sleepWait 500 //Time to wait in ms before node sleeps (to be able to receive notification messages). 
bool battPower = true; unsigned long currTime = 0; unsigned long sleepTime = (60000 * SENSOR_UPDATE); unsigned long lastSensorUpdate; unsigned long nextSensor; unsigned long measureTime; int wakeReason = -1; int sendLoop = 0; bool updated = false; bool ACKed = false; int sensorFunc = 0; float sensorData = 0.0; float heatTemp = 0.0; float heatHum = 0.0; bool interruptState = false; bool lastInterrupt = false; bool sensorPresent = false; bool lightPresent = false; bool interruptPresent = false; int minuteCount = 0; bool firstRound = true; float pressureAvg; // average value is used in forecast algorithm. float pressureAvg2; // average after 2 hours is used as reference value for the next iteration. float dP_dt; const int LAST_SAMPLES_COUNT = 5; float lastPressureSamples[LAST_SAMPLES_COUNT]; bool startUp = true; bool metric = true; int battStatCounter = 0; const float VccMin = 1.8; // Minimum expected Vcc level, in Volts. const float VccMax = 3.0; // Maximum expected Vcc level, in Volts. const float VccCorrection = 1.0/1.0; // Measured Vcc by multimeter divided by reported Vcc /**************************************************************************/ // Library declarations /**************************************************************************/ // Uncomment necessary declarations. 
//RFM69 wireless; //OneWire OWB(oneWireBusPin); //DallasTemperature DS18(&OWB); //DeviceAddress DS18address; Adafruit_HTU21DF HTU = Adafruit_HTU21DF(); //BME280 BME; //Max44009 lightMax(Max44099Addr); //Button reedContact(interruptPin, PULLUP, INVERT, bounceTime); MyMessage msgHum(chanHum, V_HUM); MyMessage msgTemp(chanTemp, V_TEMP); //MyMessage msgBaro(chanBaro, V_PRESSURE); //MyMessage msgTrend(chanUniversal, V_VAR5); //MyMessage msgLight(chanLight, V_LEVEL); //MyMessage msgIntr(chanInterrupt, V_TRIPPED); Vcc vcc(VccCorrection); /**************************************************************************/ // Error messages /**************************************************************************/ #if (defined INTERRUPT_SENSOR_PRESENT && defined LIGHT_SENSOR_PRESENT && SENSOR_TYPE >= 4) #error Motion sensor can anly be combined with either I2C OR one-wire sensors, not both. #endif #if (defined INTERRUPT_SENSOR_PRESENT && (defined LIGHT_SENSOR_PRESENT || SENSOR_TYPE == 4) && defined BATTERY_POWERED) #error Interrupt sensor is not compatible with trend sensors like 'baro' and 'light' because the timer will misalign. #endif #if (SENSOR_TYPE > 4) #error Not a valid sensor type! #endif /**************************************************************************/ void before(void) { Serial.println("\nReading config..."); #ifndef BATTERY_POWERED battPower = false; #endif sensorFunc = SENSOR_TYPE; #ifdef LIGHT_SENSOR_PRESENT lightPresent = true; #endif #ifdef INTERRUPT_SENSOR_PRESENT interruptPresent = true; lastInterrupt = reedContact.read(); #endif #if (defined MY_SIGNING_SOFT || defined MY_SIGNING_ATSHA204) #define sendPause 100 #else #define sendPause 50 #endif } /**************************************************************************/ void setup(void) { pinMode(sensorPowerPin, OUTPUT); //switch on the sensor power digitalWrite(sensorPowerPin, HIGH); wait(50); //wait 50 ms for the sensors to settle. 
switch (sensorFunc) { case 0: break; case 1: sensorPresent = true; break; case 2: //DS18.begin(); //DS18.getAddress(DS18address, 0); //DS18.setResolution(DS18address, 10); sensorPresent = true; break; case 3: HTU.begin(); sensorPresent = true; break; case 4: //startBME(); sleepTime = 60000; sensorPresent = true; break; } #if (lightPresent) sleepTime = 60000; sensorPresent = true; #endif #ifdef MY_DEBUG //Differentiate between global debug including radio and local debug sleepTime = 30000; //for the sketch alone. #endif batteryStats(); Serial.println("\nDone. \n\nStarting program.\n"); currTime = millis(); } /**************************************************************************/ void presentation() { Serial.println("Start radio and sensors"); sendSketchInfo(sketchName, sketchVer); Sprint("\nPresent "); if (sensorFunc >= 1) { wait(sendPause); present(chanTemp, S_TEMP, "Climate", true); Sprint("temperature"); } if (sensorFunc >= 3) { wait(sendPause); present(chanHum, S_HUM); Sprint(", humidity"); //wait(sendPause); //present(chanHeat ,S_TEMP); //Sprint(", heatindex"); } if (sensorFunc == 4) { wait(sendPause); present(chanBaro, S_BARO); Sprint(", barometric"); wait(sendPause); present(chanDelta, S_CUSTOM); Sprint(" and rate"); } Sprintln(" sensor."); if (lightPresent) { wait(sendPause); present(chanLight, S_LIGHT_LEVEL, "Light", true); Sprintln("\nLightsensor "); if (sensorFunc < 4) { wait(sendPause); present(chanRate, S_CUSTOM); Sprintln("with rate "); } Sprintln("presented."); } if (interruptPresent) { wait(sendPause); present(chanInterrupt, S_MOTION, "Motion", true); Sprintln("Interrupt sensor presented."); } wait(sendPause); } /**************************************************************************/ void loop(void) { if (wakeReason < 0) { Serial.println("Reading sensors..."); switch (sensorFunc) { case 0: break; case 1: updateRFM(); break; case 2: updateDS18(); break; case 3: updateHTU(); break; case 4: if (sendLoop <= 0) { updateBME(); updated = true; } 
else { Sprint("\nTrend: "); updateTrend(); sendLoop--; } break; } if (lightPresent) { updateMAX(); if (sensorFunc <= 3) { Sprint("\nTrend: "); updateTrend(); } } wakeReason = 0; Sprintln("\nSensors updated..."); } else if (wakeReason == 1) { updateInterrupt(); wakeReason = 0; } if (battStatCounter >= battUpdater) { batteryStats(); } if (updated) { sendLoop = SENSOR_UPDATE; updated = false; } if (millis() >= currTime + sleepWait) { startUp = false; sleepSensor(); } } /**************************************************************************/ void updateInterrupt() {/* Sprintln("\nInterrupt: "); interruptState = reedContact.read(); wait(50); if (interruptState == !lastInterrupt) { send(msgIntr.setSensor(chanInterrupt).set(interruptState),true); lastInterrupt = interruptState; } Sprint("Door/window is "); if (interruptState) { Sprintln("opened."); } else { Sprintln("closed"); }*/ } /**************************************************************************/ void updateRFM() {/* Sprintln("\nRFM: "); sensorData = wireless.readTemperature(); wait(20); send(msgTemp.set(sensorData, 0)); Sprint("Temperature: \t"); Sprint(sensorData); Sprintln(" sent."); battStatCounter++;*/ } /**************************************************************************/ void updateDS18() {/* Sprintln("\nOne-wire: "); DS18.requestTemperatures(); sensorData = DS18.getTempCByIndex(0); wait(20); send(msgTemp.set(sensorData, 2)); battStatCounter++; Sprint("Temperature: "); Sprint(sensorData); Sprintln(" sent.");*/ } /**************************************************************************/ void updateHTU() { Sprintln("\nHTU: "); sensorData = HTU.readTemperature(); heatTemp = sensorData; wait(20); send(msgTemp.set(sensorData, 1)); Sprint("Temperature: \t"); Sprint(sensorData); Sprintln(" sent."); sensorData = HTU.readHumidity(); heatHum = sensorData; wait(sendPause); send(msgHum.set(sensorData, 1)); Sprint("Humidity: \t"); Sprint(sensorData); Sprintln(" sent."); //wait(sendPause); 
//send(msgHeat.set(computeHeatIndex(heatTemp, heatHum), 1)); //Sprint("Heatindex sent."); battStatCounter++; } /**************************************************************************/ void updateBME() { /*Sprintln("\nBME: "); BME.begin(); wait(100); sensorData = (BME.readFloatPressure()/pow(1-(alti/44330.0),5.255)/100); if (!startUp) { trend(sensorData); send(msgBaro.set(sensorData,1)); Sprint("Pressure: \t"); Sprint(sensorData); Sprintln(" sent."); } if (!(minuteCount < 35 && firstRound)) { wait(sendPause); send(msgTrend.set(dP_dt,2)); Sprint("Trend: \t\t"); Sprint(dP_dt); Sprintln(" sent."); } wait(sendPause); sensorData = BME.readTempC(); heatTemp = sensorData; send(msgTemp.set(sensorData,1)); Sprint("Temperature: \t"); Sprint(sensorData); Sprintln(" sent."); wait(sendPause); sensorData = BME.readFloatHumidity(); heatHum = sensorData; send(msgHum.set(sensorData,1)); Sprint("Humidity: \t"); Sprint(sensorData); Sprintln(" sent."); wait(sendPause); send(msgHeat.set(computeHeatIndex(heatTemp, heatHum), 1)); Sprint("Heatindex sent."); battStatCounter++;*/ } /**************************************************************************/ float computeHeatIndex(float tempInput, float humInput) //Function derived from Adafruit DHT library { /*// Using both Rothfusz and Steadman's equations // float hiFar; float tempFar = tempInput * 1.8 + 32; hiFar = 0.5 * (tempFar + 61.0 + ((tempFar - 68.0) * 1.2) + (humInput * 0.094)); if (hiFar > 79) { hiFar = -42.379 + 2.04901523 * tempFar + 10.14333127 * humInput + -0.22475541 * tempFar*humInput + -0.00683783 * pow(tempFar, 2) + -0.05481717 * pow(humInput, 2) + 0.00122874 * pow(tempFar, 2) * humInput + 0.00085282 * tempFar*pow(humInput, 2) + -0.00000199 * pow(tempFar, 2) * pow(humInput, 2); if((humInput < 13) && (tempFar >= 80.0) && (tempFar <= 112.0)) hiFar -= ((13.0 - humInput) * 0.25) * sqrt((17.0 - abs(tempFar - 95.0)) * 0.05882); else if((humInput > 85.0) && (tempFar >= 80.0) && (tempFar <= 87.0)) hiFar += ((humInput - 85.0) * 
0.1) * ((87.0 - tempFar) * 0.2); } return (hiFar - 32) * 0.55555;*/ } /**************************************************************************/ void updateTrend() { /*if (sensorFunc == 4) { * Sprint("BME -> "); BME.begin(); wait(100); sensorData = (BME.readFloatPressure()/pow(1-(alti/44330.0),5.255)/100); } else if (lightPresent) { Sprint("MAX -> "); sensorData = lightMax.getLux(); } trend(sensorData);*/ } /**************************************************************************/ void updateMAX() { /*Sprintln("\nLight: "); sensorData = lightMax.getLux(); if (sensorFunc <= 3) { trend(sensorData); } send(msgLight.set(sensorData,1)); Sprint("Light: \t"); Sprint(sensorData); Sprintln(" sent."); if (sensorFunc <= 3) { if (!(minuteCount < 35 && firstRound)) { wait(sendPause); send(msgTrend.set(dP_dt,2)); Sprint("Trend: \t"); Sprint(dP_dt); Sprintln(" sent."); } } battStatCounter++;*/ } /**************************************************************************/ void sleepSensor() { if (battPower) { Serial.println("\nSleep the sensor."); wait(50); unsigned long lightsOut = (sleepTime - (millis() - currTime)); if (interruptPresent) { if (sensorPresent) { wakeReason = sleep(1, CHANGE, lightsOut); } else { wakeReason = sleep(1, CHANGE, 0); } } else { //digitalWrite(sensorPowerPin, LOW); //Disabled because of the I2C pull ups on the HTU board sleep(lightsOut); //causing a 140uA load in sleep. Without, sleep drain is 5uA. 
wakeReason = -1; //digitalWrite(sensorPowerPin, HIGH); wait(50); } currTime = millis(); Sprint("Wake reason: ");Sprintln(wakeReason); } else if (millis() >= currTime + sleepTime) { wakeReason = -1; currTime = millis(); } } /**************************************************************************/ /*void startBME() { BME.settings.commInterface = I2C_MODE; BME.settings.I2CAddress = BMEaddr; BME.settings.runMode = 1; // 1, Single mode BME.settings.tStandby = 0; // 0, 0.5ms BME.settings.filter = 0; // 0, filter off BME.settings.tempOverSample = 1; BME.settings.pressOverSample = 1; BME.settings.humidOverSample = 1; BME.begin(); } /**************************************************************************/ float getLastPressureSamplesAverage() { float lastPressureSamplesAverage = 0; for (int i = 0; i < LAST_SAMPLES_COUNT; i++) { lastPressureSamplesAverage += lastPressureSamples[i]; } lastPressureSamplesAverage /= LAST_SAMPLES_COUNT; return lastPressureSamplesAverage; } /**************************************************************************/ void trend(float pressure) {/* int index = minuteCount % LAST_SAMPLES_COUNT; // Calculate the average of the last 5 minutes. lastPressureSamples[index] = pressure; minuteCount++; if (minuteCount > 185) { minuteCount = 6; } if (minuteCount == 5) { pressureAvg = getLastPressureSamplesAverage(); Sprint("First average: "); Sprint(pressureAvg); } else if (minuteCount == 35) { float lastPressureAvg = getLastPressureSamplesAverage(); float change = (lastPressureAvg - pressureAvg);. 
} Sprint(F("\tForecast at minute ")); Sprint(minuteCount); Sprint(F(" dP/dt = ")); Sprint(dP_dt); Sprint(F("hPa/h --> "));*/ } /**************************************************************************/ void batteryStats() { if (battPower) { float battPct = vcc.Read_Perc(); float battVolt = vcc.Read_Volts(); wait(50); sendBatteryLevel(battPct); Sprint("Battery level: "); Sprint(battVolt); Sprintln("V.\n"); battStatCounter = 0; wait(50); } } /**************************************************************************/ void sendBattLevel() { /*Serial.println("\nBattery: "); int ADread = analogRead(batteryPin); int battPcnt = map(ADread, 570, 704, 0, 100); //Usable voltage range from 3.4 to 4.2V battPcnt = constrain(battPcnt, 0, 100); //Charging keeps it at 100% sendBatteryLevel(battPcnt); Sprint("\nADread\t"); Sprint(ADread); Sprint("\t"); Sprintln(battPcnt); battStatCounter = 0;*/ }
GertSanders ATmega328p 32p TQFP, 8MHz, 38400 baud
Warning: Board breadboard:avr:atmega328bb doesn't define a 'build.board' preference. Auto-set to: AVR_ATMEGA328BB Sketch uses 16,680 bytes (51%) of program storage space. Maximum is 32,256 bytes. Global variables use 942 bytes (45%) of dynamic memory, leaving 1,106 bytes for local variables. Maximum is 2,048 bytes.
LowPowerLab Moteino 16MHz
Warning: Board breadboard:avr:atmega328bb doesn't define a 'build.board' preference. Auto-set to: AVR_ATMEGA328BB Sketch uses 14,264 bytes (44%) of program storage space. Maximum is 31,744 bytes. Global variables use 890 bytes of dynamic memory.
In this sketch the size difference doesn't matter that much. But when I introduce signing it will get tight.
Any thoughts?
(This is a standard sketch which I adapt for every node by commenting out the parts I do not need. So it might seem large for a simple temperature node. It is...).
@DavidZH
Do you also get these differences when compiling both for 16 MHz? In this case one node is 16 MHz (Moteino) and the second is 8 MHz.
Apart from that I have no clue why this would result in different sizes.
I have tried with your bootloader on 16MHz and 8 MHz with crystal.
8MHz, crystal, 1V8
Warning: Board breadboard:avr:atmega328bb doesn't define a 'build.board' preference. Auto-set to: AVR_ATMEGA328BB Build options changed, rebuilding all Sketch uses 16,680 bytes (51%) of program storage space. Maximum is 32,256 bytes. Global variables use 942 bytes (45%) of dynamic memory, leaving 1,106 bytes for local variables. Maximum is 2,048 bytes.
16MHz, crystal, 1V8
Warning: Board breadboard:avr:atmega328bb doesn't define a 'build.board' preference. Auto-set to: AVR_ATMEGA328BB Build options changed, rebuilding all Sketch uses 16,694 bytes (51%) of program storage space. Maximum is 32,256 bytes. Global variables use 942 bytes (45%) of dynamic memory, leaving 1,106 bytes for local variables. Maximum is 2,048 bytes.
So the 16 MHz file is actually even a bit bigger. I also checked whether changing the BOD voltage would change anything, but nope on that.
Might be something to look into in a spare hour.
Hi,
can anyone tell me what the suffix next to the CPU name means, e.g. 8mhz-38k4-d13?
Thanks
@mar.conte - Your question isn't very clear. If you're asking what the name means, it's broken down as:
8mhz - Crystal speed/frequency
38k4 - 38400 upload speed
D13 - Pin 13 to flash the LED (if needed) just for visual indication that the bootloader has been burnt/installed.
Not sure if that is what you meant but I could only assume. If not then please try to explain a little more clearly for us
Hope that sorts your problem!
@Samuel235
Sorry for my English; your answer is OK. Thank you
@mar.conte - It's okay, I understood you, just about
Now just apply that description to all of the other bootloader variants that the great @GertSanders has provided us with
I have uploaded the bootloader (8 MHz internal clock, 38k4, BOD 2.7 V) with Gammon's sketch!!!
Consumption is 2 mA!!!
Why?
Thanks
I'm sorry, I don't understand the issue at hand... Could you try to explain it more clearly?
@Samuel235
Hy
My project is a simple ATmega328 with the 8 MHz internal clock in power-down mode and an RFM69HW, which sends to the gateway when the PIR motion pin goes high, all powered from 2 AA batteries.
The first test I did was on a breadboard: ATmega328 with Optiboot 6.2 (I uploaded the bootloader: 8 MHz internal, 38k4, BOD 2.7 V) and Gammon's simple sketch. Result: not microamps, but around 2-5 mA in sleep mode;
@mar.conte I'm not familiar with the sketch you mentioned. The best approach is to use a multimeter and an oscilloscope to determine consumption.
Why do you have BOD at 2.7 V? I don't use any BOD - this lets a node run on two batteries until the voltage drops below 1.9 V.
@mar.conte - Are you basically saying that your hardware is using too much power to enable you to run on battery for any substantial time?
@Samuel235
Yes, I want to get down to a few microamps, but I have been trying for two months without result...
tonnerre33 (Hardware Contributor):
@mar.conte Which sketch did you use?
Did you have radio errors in your log?
Could you please either include your sketch here or give us a link to the example sketch that you used please.
@Samuel235
Resolved.
I tried with Gammon's sketch,
and the problem is the PIR input (model HRC-501).
It is always high, so the CPU never goes to sleep.
I removed the PIR and everything is OK - the ATmega runs at 20 µA, very good.
Can you advise a good PIR which runs at 3.3 V for my project?
Thanks
@mar.conte
I'm not sure why the PIR is OK with a simple test like this,
but with my sketch the CPU resets forever.
#include <Arduino.h>      // assumes Arduino IDE v1.0 or greater
#include <avr/sleep.h>
#include <avr/wdt.h>
#include <avr/power.h>
#include <avr/io.h>

ISR (PCINT2_vect) {
  // handle pin change interrupt for D0 to D7 here
}  // end of PCINT2_vect

const unsigned long WAIT_TIME = 4000;
const byte LED  = 8;
const byte LED2 = 9;
int wakepin = 3;
unsigned long lastSleep;
volatile boolean motionDetected = false;
float batteryVolts = 5;
char BATstr[10];  // longest battery voltage reading message = 9 chars
char sendBuf[32];
byte sendLen;
void checkBattery(void);

#include <RFM69.h>  // get it here:
#include <SPI.h>

//*********************************************************************************************
// *********** IMPORTANT SETTINGS - YOU MUST CHANGE/CONFIGURE TO FIT YOUR HARDWARE ************
//*********************************************************************************************
#define NETWORKID   100  // The same on all nodes that talk to each other
#define NODEID      2    // The unique identifier of this node
#define RECEIVER    1    // The recipient of packets

// Match frequency to the hardware version of the radio on your Feather
//#define FREQUENCY  RF69_433MHZ
//#define FREQUENCY  RF69_868MHZ
#define FREQUENCY   RF69_868MHZ
#define ENCRYPTKEY  "sampleEncryptKey"  // exactly the same 16 characters/bytes on all nodes!
#define IS_RFM69HW  true                // set to 'true' if you are using an RFM69HCW module
//*********************************************************************************************
#define SERIAL_BAUD 115200

#define RFM69_CS    10
#define RFM69_IRQ   2
#define RFM69_IRQN  0  // Pin 2 is IRQ 0!
#define RFM69_RST   9

int16_t packetnum = 0;  // packet counter, we increment per xmission
RFM69 radio = RFM69(RFM69_CS, RFM69_IRQ, IS_RFM69HW, RFM69_IRQN);

void setup () {
  pinMode(wakepin, INPUT);
  pinMode(LED, OUTPUT);
  pinMode(LED2, OUTPUT);
  digitalWrite(wakepin, HIGH);
  char buff[50];
  Serial.begin(SERIAL_BAUD);
  Serial.println("Arduino RFM69HCW Transmitter");

  // Hard Reset the RFM module
  pinMode(RFM69_RST, OUTPUT);
  digitalWrite(RFM69_RST, HIGH);
  delay(100);
  digitalWrite(RFM69_RST, LOW);
  delay(100);

  // Initialize radio
  radio.initialize(FREQUENCY, NODEID, NETWORKID);
  if (IS_RFM69HW) {
    radio.setHighPower();  // Only for RFM69HCW & HW!
  }
  radio.setPowerLevel(31);  // power output ranges from 0 (5dBm) to 31 (20dBm)
  radio.encrypt(ENCRYPTKEY);
  pinMode(LED, OUTPUT);
  Serial.print("\nTransmitting at ");
  Serial.print(FREQUENCY==RF69_433MHZ ? 433 : FREQUENCY==RF69_868MHZ ? 868 : 915);
  Serial.println(" MHz");
}

uint16_t batteryReportCycles = 0;

void loop () {
  //rf();
  sleepNow();
  rf();
  //radio.sleep();  // sleep function called here
  radio.sleep();
}

void wake () {
  // cancel sleep as a precaution
  //sleep_disable();
  // precautionary while we do other stuff
  detachInterrupt (1);
}  // end of wake

void sleepNow() {
  if (millis () - lastSleep >= WAIT_TIME) {
    lastSleep = millis ();
    byte old_ADCSRA = ADCSRA;
    // disable ADC
    ADCSRA = 0;
    // pin change interrupt (example for D0)
    PCMSK2 |= bit (PCINT16);  // want pin 0
    PCIFR  |= bit (PCIF2);    // clear any outstanding interrupts
    PCICR  |= bit (PCIE2);    // enable pin change interrupts for D0 to D7
    set_sleep_mode (SLEEP_MODE_PWR_DOWN);
    power_adc_disable();
    power_spi_disable();
    power_timer0_disable();
    power_timer1_disable();
    power_timer2_disable();
    power_twi_disable();
    //PORTB = 0x00;
    UCSR0B &= ~bit (RXEN0);  // disable receiver
    UCSR0B &= ~bit (TXEN0);  // disable transmitter
    sleep_enable();
    noInterrupts ();
    attachInterrupt (1, wake, HIGH);
    digitalWrite (LED, LOW);
    interrupts ();
    sleep_cpu ();
    digitalWrite (LED, HIGH);
    sleep_disable();
    power_all_enable();
    ADCSRA = old_ADCSRA;
    PCICR &= ~bit (PCIE2);   // disable pin change interrupts for D0 to D7
    UCSR0B |= bit (RXEN0);   // enable receiver
    UCSR0B |= bit (TXEN0);   // enable transmitter
  }  // end of time to sleep
}

void rf() {
  char radiopacket[20] = "Hello World #";
  itoa(packetnum++, radiopacket+13, 10);
  Serial.print("Sending ");
  Serial.println(radiopacket);
  if (!radio.sendWithRetry(RECEIVER, radiopacket, strlen(radiopacket))) {
    // target node Id, message as string or byte array, message length
    Serial.println("OK");
  }
  radio.receiveDone();  // put radio in RX mode
  Serial.flush();  // make sure all serial data is clocked out before sleeping the MCU
}
Firstly, I can't read the structure of your sketch the way you have posted it. You should use the code function in the reply - the icon looks like "</>" above where you write your reply. Put your code in there for us to have any chance of reading it at all.
Secondly, I don't think that this is the place to be posting this issue. For a start, this isn't an issue relevant to the bootloader in this thread and secondly, it doesn't even have anything to do with MySensors.
@mar.conte yes, you can measure consumption (should be in uA) while sleeping with a good voltmeter.
However, I really fail to understand what Nick's sketch you mentioned has to do with MySensors. I suppose some people here may be aware of sleeping issues, but again your problem has nothing to do with MySensors.
Please see the following - it may help you troubleshoot your issue
@Samuel235
I wanted to build my projects with MySensors. First of all I started with the bootloaders found in this section, to try them with examples that I know, because I do not know MySensors yet. I wanted to learn it, so I took the liberty of inserting code in your section. If you can figure out my problem, thank you; otherwise I apologize for the trouble. Thanks
I get why you did it, but I don't understand why here. You don't need to 'know' MySensors... There is a getting started page; I suggest you take a few hours and read through everything. I'm not helping with an issue that is completely unrelated to MySensors while on the MySensors forum. Simply put, for future readers of this thread who come here to solve their issue: if this thread is bloated with 10-20 replies that are not related to the question at hand, it's much harder for them to find their resolution here. I suggest contacting the author of the code himself, as I'm pretty sure that your issue is not the bootloader.
https://forum.mysensors.org/topic/3261/various-bootloader-files-based-on-optiboot-6-2/105
Update:
from IPython.display import Image
Image('')

Image('')
It just doesn't have the same effect. Matplotlib is great for scientific plots, but sometimes you don't want to be so precise.
This subject has recently come up on the matplotlib mailing list, and started some interesting discussions. As near as I can tell, this started with a thread on a mathematica list which prompted a thread on the matplotlib list wondering if the same could be done in matplotlib.
Damon McDougall offered a quick solution which was improved by Fernando Perez in this notebook, and within a few days there was a matplotlib pull request offering a very general way to create sketch-style plots in matplotlib. Only a few days from a cool idea to a working implementation: this is one of the most incredible aspects of package development on github.
The pull request looks really nice, but will likely not be included in a released version of matplotlib until at least version 1.3. In the mean-time, I wanted a way to play around with these types of plots in a way that is compatible with the current release of matplotlib. To do that, I created the following code:
XKCDify will take a matplotlib
Axes instance, and modify the plot elements in-place to make
them look hand-drawn.
First off, we'll need to make sure we have the Humor Sans font.
It can be downloaded using the command below.
Next we'll create a function
xkcd_line to add jitter to lines. We want this to be very general, so
we'll normalize the size of the lines, and use a low-pass filter to add correlated noise, perpendicular
to the direction of the line. There are a few parameters for this filter that can be tweaked to
customize the appearance of the jitter.
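Before the full implementation below, the "correlated noise" idea can be illustrated in a few lines. This is a simplified stand-in for the `firwin`/`lfilter` approach used in the actual code, and the function name is just for this sketch:

```python
import numpy as np

def correlated_jitter(n=100, mag=1.0, window=30, seed=0):
    """White noise passed through a moving-average (low-pass) filter,
    giving smooth, wavy offsets rather than pixel-by-pixel scribble."""
    rng = np.random.RandomState(seed)
    raw = mag * rng.normal(0, 0.01, n)   # uncorrelated noise
    kernel = np.ones(window) / window    # crude low-pass filter
    return np.convolve(raw, kernel, mode='same')

offsets = correlated_jitter()
```

In `xkcd_line` the analogous perturbation is applied perpendicular to the local direction of the curve, rather than straight up or down.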
Finally, we'll create a function which accepts a matplotlib axis, and calls
xkcd_line on
all lines in the axis. Additionally, we'll switch the font of all text in the axes, and add
some background lines for a nice effect where lines cross. We'll also draw axes, and move the
axes labels and titles to the appropriate location.
"""). The idea for this comes from work by Damon McDougall[email protected]/msg25499.html """ import numpy as np import pylab as pl from scipy import interpolate, signal import matplotlib.font_manager as fm # We need a special font for the code below. It can be downloaded this way: import os import urllib2 if not os.path.exists('Humor-Sans.ttf'): fhandle = urllib2.urlopen('') open('Humor-Sans.ttf', 'wb').write(fhandle.read()) def xkcd_line(x, y, xlim=None, ylim=None, mag=1.0, f1=30, f2=0.05, f3=15): """ Mimic a hand-drawn line from (x, y) data Parameters ---------- x, y : array_like arrays to be modified xlim, ylim : data range the assumed plot range for the modification. If not specified, they will be guessed from the data mag : float magnitude of distortions f1, f2, f3 : int, float, int filtering parameters. f1 gives the size of the window, f2 gives the high-frequency cutoff, f3 gives the size of the filter Returns ------- x, y : ndarrays The modified lines """ x = np.asarray(x) y = np.asarray(y) # get limits for rescaling if xlim is None: xlim = (x.min(), x.max()) if ylim is None: ylim = (y.min(), y.max()) if xlim[1] == xlim[0]: xlim = ylim if ylim[1] == ylim[0]: ylim = xlim # scale the data x_scaled = (x - xlim[0]) * 1. / (xlim[1] - xlim[0]) y_scaled = (y - ylim[0]) * 1. / (ylim[1] - ylim[0]) # compute the total distance along the path dx = x_scaled[1:] - x_scaled[:-1] dy = y_scaled[1:] - y_scaled[:-1] dist_tot = np.sum(np.sqrt(dx * dx + dy * dy)) # number of interpolated points is proportional to the distance Nu = int(200 * dist_tot) u = np.arange(-1, Nu + 1) * 1. 
/ (Nu - 1) # interpolate curve at sampled points k = min(3, len(x) - 1) res = interpolate.splprep([x_scaled, y_scaled], s=0, k=k) x_int, y_int = interpolate.splev(u, res[0]) # we'll perturb perpendicular to the drawn line dx = x_int[2:] - x_int[:-2] dy = y_int[2:] - y_int[:-2] dist = np.sqrt(dx * dx + dy * dy) # create a filtered perturbation coeffs = mag * np.random.normal(0, 0.01, len(x_int) - 2) b = signal.firwin(f1, f2 * dist_tot, window=('kaiser', f3)) response = signal.lfilter(b, 1, coeffs) x_int[1:-1] += response * dy / dist y_int[1:-1] += response * dx / dist # un-scale data x_int = x_int[1:-1] * (xlim[1] - xlim[0]) + xlim[0] y_int = y_int[1:-1] * (ylim[1] - ylim[0]) + ylim[0] return x_int, y_int def XKCDify(ax, mag=1.0, f1=50, f2=0.01, f3=15, bgcolor='w', xaxis_loc=None, yaxis_loc=None, xaxis_arrow='+', yaxis_arrow='+', ax_extend=0.1, expand_axes=False): """Make axis look hand-drawn This adjusts all lines, text, legends, and axes in the figure to look like xkcd plots. Other plot elements are not modified. Parameters ---------- ax : Axes instance the axes to be modified. mag : float the magnitude of the distortion f1, f2, f3 : int, float, int filtering parameters. f1 gives the size of the window, f2 gives the high-frequency cutoff, f3 gives the size of the filter xaxis_loc, yaxis_log : float The locations to draw the x and y axes. If not specified, they will be drawn from the bottom left of the plot xaxis_arrow, yaxis_arrow : str where to draw arrows on the x/y axes. 
Options are '+', '-', '+-', or '' ax_extend : float How far (fractionally) to extend the drawn axes beyond the original axes limits expand_axes : bool if True, then expand axes to fill the figure (useful if there is only a single axes in the figure) """ # Get axes aspect ext = ax.get_window_extent().extents aspect = (ext[3] - ext[1]) / (ext[2] - ext[0]) xlim = ax.get_xlim() ylim = ax.get_ylim() xspan = xlim[1] - xlim[0] yspan = ylim[1] - xlim[0] xax_lim = (xlim[0] - ax_extend * xspan, xlim[1] + ax_extend * xspan) yax_lim = (ylim[0] - ax_extend * yspan, ylim[1] + ax_extend * yspan) if xaxis_loc is None: xaxis_loc = ylim[0] if yaxis_loc is None: yaxis_loc = xlim[0] # Draw axes xaxis = pl.Line2D([xax_lim[0], xax_lim[1]], [xaxis_loc, xaxis_loc], linestyle='-', color='k') yaxis = pl.Line2D([yaxis_loc, yaxis_loc], [yax_lim[0], yax_lim[1]], linestyle='-', color='k') # Label axes3, 0.5, 'hello', fontsize=14) ax.text(xax_lim[1], xaxis_loc - 0.02 * yspan, ax.get_xlabel(), fontsize=14, ha='right', va='top', rotation=12) ax.text(yaxis_loc - 0.02 * xspan, yax_lim[1], ax.get_ylabel(), fontsize=14, ha='right', va='top', rotation=78) ax.set_xlabel('') ax.set_ylabel('') # Add title ax.text(0.5 * (xax_lim[1] + xax_lim[0]), yax_lim[1], ax.get_title(), ha='center', va='bottom', fontsize=16) ax.set_title('') Nlines = len(ax.lines) lines = [xaxis, yaxis] + [ax.lines.pop(0) for i in range(Nlines)] for line in lines: x, y = line.get_data() x_int, y_int = xkcd_line(x, y, xlim, ylim, mag, f1, f2, f3) # create foreground and background line lw = line.get_linewidth() line.set_linewidth(2 * lw) line.set_data(x_int, y_int) # don't add background line for axes if (line is not xaxis) and (line is not yaxis): line_bg = pl.Line2D(x_int, y_int, color=bgcolor, linewidth=8 * lw) ax.add_line(line_bg) ax.add_line(line) # Draw arrow-heads at the end of axes lines arr1 = 0.03 * np.array([-1, 0, -1]) arr2 = 0.02 * np.array([-1, 0, 1]) arr1[::2] += np.random.normal(0, 0.005, 2) arr2[::2] += 
np.random.normal(0, 0.005, 2) x, y = xaxis.get_data() if '+' in str(xaxis_arrow): ax.plot(x[-1] + arr1 * xspan * aspect, y[-1] + arr2 * yspan, color='k', lw=2) if '-' in str(xaxis_arrow): ax.plot(x[0] - arr1 * xspan * aspect, y[0] - arr2 * yspan, color='k', lw=2) x, y = yaxis.get_data() if '+' in str(yaxis_arrow): ax.plot(x[-1] + arr2 * xspan * aspect, y[-1] + arr1 * yspan, color='k', lw=2) if '-' in str(yaxis_arrow): ax.plot(x[0] - arr2 * xspan * aspect, y[0] - arr1 * yspan, color='k', lw=2) # Change all the fonts to humor-sans. prop = fm.FontProperties(fname='Humor-Sans.ttf', size=16) for text in ax.texts: text.set_fontproperties(prop) # modify legend leg = ax.get_legend() if leg is not None: leg.set_frame_on(False) for child in leg.get_children(): if isinstance(child, pl.Line2D): x, y = child.get_data() child.set_data(xkcd_line(x, y, mag=10, f1=100, f2=0.001)) child.set_linewidth(2 * child.get_linewidth()) if isinstance(child, pl.Text): child.set_fontproperties(prop) # Set the axis limits ax.set_xlim(xax_lim[0] - 0.1 * xspan, xax_lim[1] + 0.1 * xspan) ax.set_ylim(yax_lim[0] - 0.1 * yspan, yax_lim[1] + 0.1 * yspan) # adjust the axes ax.set_xticks([]) ax.set_yticks([]) if expand_axes: ax.figure.set_facecolor(bgcolor) ax.set_axis_off() ax.set_position([0, 0, 1, 1]) return ax
Let's test this out with a simple plot. We'll plot two curves, add some labels,
and then call
XKCDify on the axis. I think the results are pretty nice!
%pylab inline
Welcome to pylab, a matplotlib-based Python environment [backend: module://IPython.zmq.pylab.backend_inline]. For more information, type 'help(pylab)'.
np.random.seed(0)
ax = pylab.axes()

x = np.linspace(0, 10, 100)
ax.plot(x, np.sin(x) * np.exp(-0.1 * (x - 5) ** 2), 'b', lw=1, label='damped sine')
ax.plot(x, -np.cos(x) * np.exp(-0.1 * (x - 5) ** 2), 'r', lw=1, label='damped cosine')

ax.set_title('check it out!')
ax.set_xlabel('x label')
ax.set_ylabel('y label')

ax.legend(loc='lower right')

ax.set_xlim(0, 10)
ax.set_ylim(-1.0, 1.0)

# XKCDify the axes -- this operates in-place
XKCDify(ax, xaxis_loc=0.0, yaxis_loc=1.0,
        xaxis_arrow='+-', yaxis_arrow='+-',
        expand_axes=True)
<matplotlib.axes.AxesSubplot at 0x2fecbd0>
Now let's see if we can use this to replicate an XKCD comic in matplotlib. This is a good one:
Image('')
With the new
XKCDify function, this is relatively easy to replicate. The results
are not exactly identical, but I think it definitely gets the point across!
# Some helper functions
def norm(x, x0, sigma):
    return np.exp(-0.5 * (x - x0) ** 2 / sigma ** 2)

def sigmoid(x, x0, alpha):
    return 1. / (1. + np.exp(- (x - x0) / alpha))

# define the curves
x = np.linspace(0, 1, 100)
y1 = np.sqrt(norm(x, 0.7, 0.05)) + 0.2 * (1.5 - sigmoid(x, 0.8, 0.05))
y2 = 0.2 * norm(x, 0.5, 0.2) + np.sqrt(norm(x, 0.6, 0.05)) + 0.1 * (1 - sigmoid(x, 0.75, 0.05))
y3 = 0.05 + 1.4 * norm(x, 0.85, 0.08)
y3[x > 0.85] = 0.05 + 1.4 * norm(x[x > 0.85], 0.85, 0.3)

# draw the curves
ax = pl.axes()
ax.plot(x, y1, c='gray')
ax.plot(x, y2, c='blue')
ax.plot(x, y3, c='red')

ax.text(0.3, -0.1, "Yard")
ax.text(0.5, -0.1, "Steps")
ax.text(0.7, -0.1, "Door")
ax.text(0.9, -0.1, "Inside")

ax.text(0.05, 1.1, "fear that\nthere's\nsomething\nbehind me")
ax.plot([0.15, 0.2], [1.0, 0.2], '-k', lw=0.5)

ax.text(0.25, 0.8, "forward\nspeed")
ax.plot([0.32, 0.35], [0.75, 0.35], '-k', lw=0.5)

ax.text(0.9, 0.4, "embarrassment")
ax.plot([1.0, 0.8], [0.55, 1.05], '-k', lw=0.5)

ax.set_title("Walking back to my\nfront door at night:")

ax.set_xlim(0, 1)
ax.set_ylim(0, 1.5)

# modify all the axes elements in-place
XKCDify(ax, expand_axes=True)
<matplotlib.axes.AxesSubplot at 0x2fef210>
Pretty good for a couple hours' work!
I think the possibilities here are pretty limitless: this is going to be a hugely useful and popular feature in matplotlib, especially when the sketch artist PR is mature and part of the main package. I imagine using this style of plot for schematic figures in presentations where the normal crisp matplotlib lines look a bit too "scientific". I'm giving a few talks at the end of the month... maybe I'll even use some of this code there.
This post was written entirely in an IPython Notebook: the notebook file is available for download here. For more information on blogging with notebooks in octopress, see my previous post on the subject.
https://nbviewer.jupyter.org/url/jakevdp.github.com/downloads/notebooks/XKCD_plots.ipynb
RL-ARM User's Guide (MDK v4)
#include <rtx_can.h>
CAN_ERROR CAN_start (
U32 ctrl); /* CAN Controller to Enable */
The CAN_start function starts the CAN controller specified
by ctrl and enables the CAN controller to
participate on the CAN network.
The CAN_start function is part of RL-CAN. The prototype is
defined in RTX_CAN.h.
Note
The CAN_start function returns one of the following
manifest constants.
CAN_init
#include <rtx_can.h>
void main (void) {
  ..
  CAN_start (1);               /* Start CAN Controller 1 */
  ..
}
http://www.keil.com/support/man/docs/rlarm/rlarm_can_start.htm
One of the great things about McAfee Threat Intelligence Exchange (TIE) is that it allows you to manipulate Reputations for files and certificates. This allows you to adjust settings for YOUR organization and not rely on just global information.
For example, an in-house custom application can be added (and trusted) manually. Or the Certificate used by your developers can be added.
On the other end, you can react very quickly to emerging threats by importing reputations that you have gathered (for example from IOC and STIX files or from a sandbox analyzer) and setting them as known malicious files. This process can also be scripted and automated. Below are descriptions of the different import methods provided by TIE.
The examples below all show File Reputation imports as they are the most common. All described options also apply to certificate Reputations in the same way.
TIE gives you an easy and fast way to import single Reputations (probably used most often).
Inside of ePO, navigate to TIE Reputations >> File Overrides >> Actions >> Import Reputations and then enter your Reputation Information
Inside of ePO you can import file Reputations in bulk via the UI.
Navigate to TIE Reputations >> File Overrides >> Actions >> Import Reputations and then browse to your XML file containing the Reputations
To create your XML file, you need the following elements:
<FileName> = Optional file name
<SHA1Hash> = Required sha1 hash
<MD5Hash> = Required md5 hash
<ReputationLevel> = Required numeric reputation value (see table below)
<Comment> = Optional comment
Reputation setting        Numeric value
Known trusted             99
Most likely trusted       85
Might be trusted          70
Unknown                   50
Might be malicious        30
Most likely malicious     15
Known malicious           1
Not set                   0
<?xml version="1.0" encoding="UTF-8"?>
<TIEReputations>
<FileReputation>
<FileName>HackIt.exe</FileName>
<SHA1Hash>0x98AF3632E17677A8A23739F720B1A2F215CB8836</SHA1Hash>
<MD5Hash>0xDEF30CBEA881149C2AFFDF9A059FB751</MD5Hash>
<ReputationLevel>15</ReputationLevel>
</FileReputation>
<FileReputation>
<FileName>trayMan.dll</FileName>
<SHA1Hash>0x7F618396A910908019B5580B4DA9031AF4A433CA</SHA1Hash>
<MD5Hash>0xB2B3DAE040F6B5AE1DF52B0CD7631A18</MD5Hash>
<Comment>Comment for ALTTAB</Comment>
</FileReputation>
<FileReputation>
<FileName>cabinet.dll</FileName>
<SHA1Hash>0x98AF3632E17677A8A23739F720B1A2F215CB8837</SHA1Hash>
<MD5Hash>0xDEF30CBEA881149C2AFFDF9A059FB759</MD5Hash>
<Comment>Comment for cabinet.dll</Comment>
</FileReputation>
<FileReputation>
<SHA1Hash>0xD182CF4C0F7550064BAA3A825E86DE8DA1D3290B</SHA1Hash>
<MD5Hash>0x36060A75D9EDB1AEF0825988C7DD8511</MD5Hash>
<Comment>Comment for PORTABLEDEVICEAPI</Comment>
</FileReputation>
<FileReputation>
<SHA1Hash>0xCAC3CB1EFE7FD53A9AC2C8825DACCC22EDFDFED7</SHA1Hash>
<MD5Hash>0xC693E642ACFBDD76433AF6BE3C3EEE6F</MD5Hash>
<Comment>Comment for PORTABLEDEVICECONNECTAPI</Comment>
</FileReputation>
</TIEReputations>
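If you have many hashes, the XML above can also be generated programmatically. A minimal Python sketch using only the standard library (the element names match the example above; the helper name `build_tie_xml` is just for illustration):

```python
import xml.etree.ElementTree as ET

def build_tie_xml(entries):
    """Build a TIEReputations XML document from a list of dicts whose keys
    are the element names used in the import format."""
    root = ET.Element('TIEReputations')
    for entry in entries:
        rep = ET.SubElement(root, 'FileReputation')
        # Emit elements in the order the documented example uses
        for tag in ('FileName', 'SHA1Hash', 'MD5Hash', 'ReputationLevel', 'Comment'):
            if tag in entry:
                ET.SubElement(rep, tag).text = str(entry[tag])
    return ET.tostring(root, encoding='unicode')

xml_doc = build_tie_xml([{
    'FileName': 'HackIt.exe',
    'SHA1Hash': '0x98AF3632E17677A8A23739F720B1A2F215CB8836',
    'MD5Hash': '0xDEF30CBEA881149C2AFFDF9A059FB751',
    'ReputationLevel': 15,
}])
```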
To assist with the creation and formatting of the XML file, please find attached (at the bottom) the tie_importer.html file. This tool uses javascript in your local browser (store the file and open it in your favorite browser) to assist in formatting multiple Reputations in the correct XML syntax.
The ePO web API allows for automated and scripted approaches to setting Reputations. For example, the McAfee SIEM could use this API to automatically import file reputations into TIE via a script (see the Python example below).
More details about the ePO web API can be found in the McAfee ePO Web Scripting Guide
The command used to set TIE reputations via the API is tie.setReputations [fileReps] [certReps]
This command will take file or certificate information as parameters.
The parameters need to be formatted as a JSON string. As in the XML import, the sha1, md5 and reputation level are required fields.
Note that the sha1 and md5 hash are base64 encoded binary representations of the values (not ASCII like in the manual import examples!). In the python example below, you can see how the ASCII hash values are decoded as HEX first, before they are base64 encoded and submitted.
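This encoding is easy to check with a small Python 3 sketch (the full script below uses Python 2's `.decode('hex')`; `bytes.fromhex` is the Python 3 equivalent). The hashes here are the well-known SHA-1 and MD5 of empty input, used purely for illustration:

```python
import base64

def hash_to_b64(hex_hash):
    """Convert an ASCII hex hash to the base64-encoded binary form
    expected by tie.setReputations."""
    return base64.b64encode(bytes.fromhex(hex_hash)).decode('ascii')

sha1_hex = 'da39a3ee5e6b4b0d3255bfef95601890afd80709'  # SHA-1 of empty input
md5_hex = 'd41d8cd98f00b204e9800998ecf8427e'           # MD5 of empty input

print(hash_to_b64(sha1_hex))  # 2jmj7l5rSw0yVb/vlWAYkK/YBwk=
print(hash_to_b64(md5_hex))   # 1B2M2Y8AsgTpgAmY7PhCfg==
```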
JSON fields:
name: Optional file name
sha1: Required base64 encoded sha1 hash
md5: Required base64 encoded md5 hash
reputation: Required reputation as numeric value (see table above)
comment: Optional comment
Example JSON string of file reputation(s):
[{"sha1":"kioq8sbc2dlBtbZQqYiQCSDJ7KU=","md5":"S1w4yxbZvfoMy+yoRkzcQQ==","reputation":"1","comment":"Test Comment","name":"test.exe"}]
Multiple Reputations can be imported at once by combining multiple JSON strings with a comma.
Example 2 JSON string of file reputations combined:
[{"sha1":"frATnSF1c5s8yw0REAZ4IL5qvSk=","md5":"8se7isyX+S6Yei1Ah9AhsQ==","reputation":"99"},{"sha1":"d3HtjhR0Eb3qN6c+vVxeqVVe0t4=","md5":"V+0uApv5yjk4PSpnHvT7UA==","reputation":"99"}]
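Rather than concatenating the JSON string by hand as the script below does, you can build it with `json.dumps`. This sketch (the helper name is illustrative) reproduces the second example above:

```python
import json

def make_rep_entry(sha1_b64, md5_b64, reputation, name=None, comment=None):
    """Build one reputation dict; sha1/md5 must already be base64-encoded."""
    entry = {'sha1': sha1_b64, 'md5': md5_b64, 'reputation': str(reputation)}
    if name:
        entry['name'] = name
    if comment:
        entry['comment'] = comment
    return entry

reps = [
    make_rep_entry('frATnSF1c5s8yw0REAZ4IL5qvSk=', '8se7isyX+S6Yei1Ah9AhsQ==', 99),
    make_rep_entry('d3HtjhR0Eb3qN6c+vVxeqVVe0t4=', 'V+0uApv5yjk4PSpnHvT7UA==', 99),
]
rep_string = json.dumps(reps)  # ready to pass to tie.setReputations
```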
import mcafee
import sys
import base64
ePOIP='10.10.55.23'
ePOUser='admin'
ePOUserPwd='MyPassword'
reputation = '1'
#Possible Reputation Values (Need to provide numeric value)
#Known trusted 99
#Most likely trusted 85
#Might be trusted 70
#Unknown 50
#Might be malicious 30
#Most likely malicious 15
#Known malicious 1
#Not set 0
sha1input = sys.argv[1]
md5input = sys.argv[2]
mc = mcafee.client(ePOIP,'8443',ePOUser,ePOUserPwd,'https','json')
sha1base64 = base64.b64encode(sha1input.decode('hex'))
md5base64 = base64.b64encode(md5input.decode('hex'))
repString = '[{"sha1":"' + sha1base64 + '","md5":"' + md5base64 + '","reputation":"' + reputation + '"}]'
print 'Adding to TIE Server: ' + repString
mc.tie.setReputations(repString)
Usage: python addTieReputation.py <sha1hash> <md5hash>
Especially during PoC and testing cases, you often need a quick way to get the sha1 and md5 hash required for the imports above. There are many hash tools out there (a simple google search will give you plenty of options), but if you need something right now, here is an online tool that does the trick (not endorsed in any way!): Online MD5|SHA1 Hash Generator For File And Text
https://community.mcafee.com/t5/Documents/How-to-import-File-and-Certificate-Reputations-into-TIE/ta-p/552906
|
Binding editors to data stored in a database or a file is not the only option. Data can also be created and supplied at runtime. This topic describes how to do this. To learn about other data binding methods, refer to the Data Binding Overview topic.
This data binding mode requires that the data source is an object holding a list of "records". Each "record" is an object whose public properties represent record fields and property values are field values. In general, to supply your editor with data created at runtime, you will need to follow the steps below:
Declare a class implementing the IList, ITypedList or IBindingList interface. This class will represent a list serving as the editor's data source. This list's elements must be record objects.
Note: if you don't want to create your own list object, you can use any of the existing objects implementing these interfaces. For instance, a DataTable object can serve as the data source. Therefore, this step is optional.
Note: only data sources implementing the IBindingList interface support data change notifications. When using a data source that doesn't support this interface, you may need to update editors manually. This can be done using the BaseControl.Refresh method, for instance.
The code below creates a class whose instances will represent records. The class declares two public properties (ID and Name), so that you can bind editors to their data.
public class Record {
int id;
string name;
public Record(int id, string name) {
this.id = id;
this.name = name;
}
public int ID { get { return id; } }
public string Name {
get { return name; }
set { name = value; }
}
}
Public Class Record
Dim _id As Integer
Dim _name As String
Sub New(ByVal id As Integer, ByVal name As String)
Me._id = id
Me._name = name
End Sub
Public ReadOnly Property ID() As Integer
Get
Return _id
End Get
End Property
Public Property Name() As String
Get
Return _name
End Get
Set(ByVal Value As String)
_name = Value
End Set
End Property
End Class
Once a record class has been declared, you can create a list of its instances. This list serves as the data source. To bind an editor to this list, use the editor's DataBindings property. For most editors, you need to map data source fields to the BaseEdit.EditValue property. The code below shows how to bind a text editor to the Name field.
ArrayList list = new ArrayList();
list.Add(new Record(1, "John"));
list.Add(new Record(2, "Michael"));
textEdit1.DataBindings.Add("EditValue", list, "Name", true);
Dim List As New ArrayList()
List.Add(New Record(1, "John"))
List.Add(New Record(2, "Michael"))
TextEdit1.DataBindings.Add("EditValue", List, "Name", True)
https://documentation.devexpress.com/WindowsForms/618/Controls-and-Libraries/Editors-and-Simple-Controls/Simple-Editors/Editors-Features/Data-Binding-Overview/Binding-to-Data-Created-at-Runtime
QNX Developer Support
inet_net_ntop()
Convert an Internet network number to CIDR format
Synopsis:
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

char * inet_net_ntop( int af,
                      const void * src,
                      int bits,
                      char * dst,
                      size_t size );
Arguments:
- af
- The address family. Currently, only AF_INET is supported.
- src
- A pointer to the Internet network number that you want to convert. The format of the address is interpreted according to af.
- bits
- The number of bits that specify the network number (src).
- dst
- A pointer to the buffer where the function can store the converted address.
- size
- The size of the buffer that dst points to, in bytes.
Library:
libsocket
Use the -l socket option to qcc to link against this library.
Description:
- a.b.c.d/bits or a.b.c.d
- When you specify a four-part address, each part is interpreted as a byte of data and is assigned, from left to right, to the four bytes of an Internet network number (or Internet address). When an Internet network number is viewed as a 32-bit integer quantity on a system that uses little-endian byte order (i.e. right to left), such as the Intel 386, 486 and Pentium processors, the bytes referred to above appear as "d.c.b.a".
- a.b.c
- When you specify a three-part address, the last part is interpreted as a 16-bit quantity and is placed in the rightmost two bytes of the Internet network number (or network address). This makes the three-part address format convenient for specifying Class B network addresses as net.net.host.
- a.b
- When you specify a two-part address, the last part is interpreted as a 24-bit quantity and is placed in the rightmost three bytes of the Internet network number (or network address). This makes the two-part number format convenient for specifying Class A network numbers as net.host.
- a
- When you specify a one-part address, the value is stored directly in the Internet network number (network address) without any byte rearrangement.
Returns:
A pointer to the destination string (dst), or NULL if a system error occurs (errno is set).
Errors:
- ENOENT
- Invalid argument af.
Classification:
See also:
inet_aton(), inet_net_pton()
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/i/inet_net_ntop.html
The TS Web Access site enables you to point to a single terminal server or to a single terminal server farm to populate the list of RemoteApp programs that appear on the site. If you have multiple terminal servers or multiple terminal server farms, you can use Windows® SharePoint® Services to create a single Web access point for RemoteApp programs and full terminal server desktop connections that are available on different terminal servers. You can customize a Windows SharePoint Services site by adding multiple Web Parts, each pointing to a different terminal server or terminal server farm.
To install the required .NET Framework feature, open Server Manager. In the console tree, right-click Features, and then click Add Features.
On the Select Features page, expand .NET Framework 3.0 Features.
Select the .NET Framework 3.0 check box, and then click Next.
Click Install.
On the Installation Results page, verify that the installation succeeded, and then click Close.
Close Server Manager.
To install Windows SharePoint Services on a Windows Server 2008-based computer, the required version is Windows SharePoint Services 3.0 with Service Pack 1 (SP1). You cannot install Windows SharePoint Services 3.0 without SP1 on a Windows Server 2008-based computer.
Download Windows SharePoint Services 3.0 with SP1. To download the software, visit either of the following Web sites, depending on your operating system version:
On the Download Center page, click Download.
In the File Download - Security Warning dialog box, click Run to start the installation, or click Save to run the installation later.
In the Internet Explorer - Security Warning dialog box, click Run to continue with the installation.
On the Read the Microsoft Software License Terms page, review the terms of the agreement. If you accept the terms, select the I accept the terms of this agreement check box, and then click Continue.
On the Choose the installation you want page, click Basic to install to the default location. (To install to a different location, click Advanced, specify the location that you want to install to on the Data Location tab, and then click Install Now.)
When Setup finishes, a dialog box prompts you to complete the configuration of your server. Ensure that the Run the SharePoint Products and Technologies Configuration Wizard now check box is selected, and then click Close to continue.
The SharePoint Products and Technologies Configuration Wizard starts.
As a security measure, Windows SharePoint Services requires that you register the TS Web Access Web Part's assembly and namespace as a Safe Control in the Web.config file of the server.
The following procedure shows how to register the Web Part's assembly as a Safe Control for Windows SharePoint Services sites that use the default port 80.
Open an elevated command prompt. To do this, click Start, right-click Command Prompt, and then click Run as administrator.
In the User Account Control dialog box, click Continue.
At the command prompt, type the following command (where C:\ represents the drive where you installed Internet Information Services), and then press ENTER:
notepad C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config
In the <SafeControls> section of the Web.config file, add the following line under the other SafeControl Assembly entries (as a single line):
On the File menu, click Save, and then close the file.
(Optional) To make the Web Part available to the SharePoint 3.0 Central Administration site, repeat this procedure for the Web.config file that is located in the following folder, where C:\ represents the drive where you installed Internet Information Services:
C:\inetpub\wwwroot\wss\VirtualDirectories\ port_number \web.config
You must create a folder path to store the images for the Web Part. To add the Web Part to a Windows SharePoint Services site, you must first add the Web Part to the Web Part Gallery for the site. Then, you can add the Web Part and configure it to point to a specific terminal server or terminal server farm. If you have multiple terminal servers, you can add multiple Web Parts to the page, each pointing to a different terminal server or terminal server farm.
In the following procedure, the default Windows SharePoint Services site (on port 80) is used as an example.
In Internet Explorer, open the default Windows SharePoint Services site at the following location:
When you are prompted, enter your account credentials to log on to the site, and then click OK.
If you are prompted that the content is being blocked by Internet Explorer, do one of the following.
If there is an Add button, follow these steps:
To configure the Web Part, click edit in the upper-right corner of the Web Part, and then click Modify Shared Web Part.
In the configuration pane that appears, you can configure settings such as the terminal server or terminal server farm from which to populate the Web Part, the title, and other appearance settings.
When you are finished configuring the Web Part, click OK.
When you are finished editing the site, in the upper-right corner, click Exit Edit Mode.
To add other users who can access the site, click the Site Actions tab, and then click Site Settings. You can configure permissions by clicking one or more of the options under Users and Permissions. For more information, see the "About managing SharePoint groups and users" topic and the "Manage SharePoint groups" topic in Windows SharePoint Services Help.
(Optional) If you want to add the.
http://technet.microsoft.com/cs-cz/library/cc771354(d=printer,v=ws.10).aspx
The beautiful thing about ASP.NET MVC is that it contains several extension points (in part #1 we saw how the Model Binder extensions work). Another extension point of great value is the ControllerFactory. By default, ASP.NET MVC simply looks for a class called {ControllerName}Controller. So a URL like this: would look for a controller called HomeController because Home is the controller name. Under the default controller factory, you must have a default constructor on your Controller class (you don't actually have to write one, as C# will create a default one for you, but if you create a non-default constructor for use in TDD tests, you must explicitly write a default constructor; for an example of this concept, take a look at the AccountController that is part of the ASP.NET MVC template project in Visual Studio 2008/2010). This is fine for simple projects, but once you start getting into more complex projects that utilize services, data access patterns, repositories, and so on, you can find yourself writing a lot of redundant code.
That is why Dependency Injection / Inversion of Control came about. By using a DI container like StructureMap, you can decouple the services from the controllers and let the DI container worry about them. So let's start by creating a class that implements the Controller Factory:
public class StructureMapControllerFactory : DefaultControllerFactory
{
    public IContainer container;

    public StructureMapControllerFactory(IContainer container)
    {
        this.container = container;
    }

    protected override IController GetControllerInstance(Type controllerType)
    {
        try
        {
            return container.GetInstance(controllerType) as Controller;
        }
        catch (StructureMapException)
        {
            System.Diagnostics.Debug.WriteLine(container.WhatDoIHave());
            throw;
        }
    }

    public override void ReleaseController(IController controller)
    {
        base.ReleaseController(controller);
    }
}
That's it. StructureMap has the ability that the first time it sees a new class, it will auto-generate an injection scheme based on the currently registered services. Let's look at our Global.asax.cs file and see how we use this class:
protected void Application_Start()
{
    Container container = new StructureMap.Container();
    container.Configure(x =>
    {
        x.ForRequestedType<IRepository>()
         .CacheBy(InstanceScope.HttpContext)
         .TheDefault.Is.OfConcreteType<NHibernateRepository>();
    });
    ControllerBuilder.Current.SetControllerFactory(
        new StructureMapControllerFactory(container)
    );
}
Basically what we have done here is:
We would then rewrite our HomeController to support the IRepository pattern for data lookup as follows:
public class HomeController : Controller
{
    private IRepository repository;

    public HomeController(IRepository repo)
    {
        this.repository = repo;
    }

    public ActionResult Index()
    {
        var items = repository.GetItems();
        return View(items);
    }
}
Every time a call was made that used the HomeController, StructureMap would new up a HomeController class, passing in the NHibernateRepository as the parameter to the constructor. You no longer have to worry about the implementation details of the repository, in fact you could months from now decide to change from using NHibernate to Entity Data Model and all you would have to do is change the one line in your Global.asax.cs that sets the concrete implementation of IRepository to your new repository class, like so:
x.ForRequestedType<IRepository>()
 .CacheBy(InstanceScope.HttpContext)
 .TheDefault.Is.OfConcreteType<EdmRepository>();
And every single controller in your project that depended on IRepository will start using the Entity Data Model version of your repository without any further changes.
I will talk about the implementation details of the NHibernateRepository next, it will be a multipart treatment as there is a lot of details in the actual implementation as I use Fluent NHibernate, StructureMap as BytecodeProvider (to provide DI to the data model), and LINQ to NHibernate behind the scenes. I will also discuss some performance enhancements I made to the repository.
An important part of ASP.NET MVC is the ability to specify a complex object type as the parameter to your controller action method and have MVC "hydrate" your object from the Form data sent to the server by the browser. For Scott Guthrie's treatment on this feature, go here. I'm not going to review this functionality. What I want to talk about is the fact that model validation can be built into the Model Binder so that not only is your object hydrated, it is automatically validated without you ever having to worry about it. There are several different possible technologies that could be used to do this validation. I personally looked at:
I ultimately selected DataAnnotations for 3 reasons:
Now, having made that decision, I began testing with the default DataAnnotations Model Binder sample posted here. The first thing I realized was that Localization was broken!! It, thankfully, was an easy fix. I just removed the call to GetDisplayName from the OnModelUpdated as it was unnecessary, and instead of the call to GetDisplayName in OnPropertyValidating, I put this line:
validationContext.MemberName = propertyDescriptor.Name;
And everything started working as expected. To save you time, I've attached the changed file. For more information on how to actually implement the DataAnnotations Model Binder, I recommend watching the video referenced above (Ninja on Fire, at position 61:00 of the video, though I highly recommend the whole video).
(NOTE: The file really is attached... it is at the very bottom of this post, but if you don't see it... Click Here)
Ok, having fixed the bug (there was one other bug I had heard about but not yet run into, involving nested complex properties, for which I've included the fix as well), I started writing my custom validators. The first of these was CreditCardAttribute, which (obviously) implements a credit card validation. So let's look at how I did that. I'm going to exclude the actual code that does the validation (again, I've included the whole source code in the attachment) and look more at the code specific to DataAnnotations:
public class CreditCardAttribute : AbstractLuhnAttribute
{
    public CreditCardAttribute()
        : base(() => Resources.AbstractLuhn_ErrorMessage)
    {
    }
}
That's it. As I said, I wasn't going to look at the code to do the actual credit card validation (Which is in AbstractLuhnAttribute). What is happening here is that the ValidationAttribute base class contains a constructor that takes a Func<string> parameter which it will use to lookup the error message. In the constructor of our CreditCardValidator, we pass it a value that comes from the Resources of the DLL containing this validator.
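Since the Luhn check itself stays hidden in AbstractLuhnAttribute, here is the gist of the algorithm for reference. This is my illustrative sketch in Python, not the C# code from the attachment: double every second digit from the right, subtract 9 from any doubled result above 9, and require the total to be divisible by 10.

```python
def luhn_valid(number):
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9       # equivalent to summing the two digits
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # a well-known test number: True
```

The same loop, translated to C#, is all the abstract base class needs before handing the pass/fail result to DataAnnotations.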
If you have watched the video as I suggested you would have seen Phil Haack showing you how to define your own Error messages and how to place them into your own Resources within the ASP.NET MVC Web project, all of that example is totally valid with this Credit Card Validator as well!
The next validator I implemented was a copy of the Future validator from the NHibernate.Validator project, which validates that a date is in the future. I made a slight modification (due to business rules requirements) and implemented TodayOrFuture. It was slightly different from the credit card validator in that its validation message required a more complex token replacement. So here is the important code (again, the actual validation has been removed, but is included in the attachment to this article):
public class TodayOrFutureAttribute : ValidationAttribute
{
    public TodayOrFutureAttribute()
        : base(() => Resources.TodayOrFuture_ErrorMessage)
    {
    }

    public override string FormatErrorMessage(string name)
    {
        return string.Format(CultureInfo.CurrentCulture, base.ErrorMessageString,
            new object[] { name, DateTime.Now.ToShortDateString() });
    }
}
The difference is the FormatErrorMessage method, which allows us to inject values into the error message (whose raw form is
{0} must be greater than or equal to {1}
so when you input an incorrect date, you will get an error message that reads like:
Post Date must be greater than or equal to 09/09/2009
Again, it makes creating a custom validator extremely easy. And for those who noticed that I did not involve the Culture in generating the Date string, good eye! I didn't realize it myself until after I had written this article. I'm going to fix it in the attachment.
Up next will be a look at how I integrated StructureMap with the ASP.NET MVC ControllerFactory to provide Dependency Injection to the Controllers.
Here at Trimedia Atlantic, we've started a new project that I am very excited about. Though I cannot yet talk about what that project is, I will talk about my experiences in developing it, as we are using a great deal of cutting-edge technology: the NHibernate suite (Base, Search, etc.), StructureMap IoC, AutoMapper, and ASP.NET MVC (along with Telerik's ASP.NET MVC framework). I have had quite a few moments of excitement as I have discovered the capabilities of the platform(s), moments where the lights came on and I "got" the concepts of why and how, and moments of frustration as things "broke" (or in some cases were never meant to work that way). I'm going to blog every day about my experiences as I work through the development cycle (but being as horrible as I am at blogging, that remains to be seen).
So here is the list of technologies that I am using:
I will warn people right now: my biggest complaint is (and will be) the lack of good documentation for these projects. There has been a lot of trial and error and learning on my part as to how these technologies work (sometimes how they work best; sometimes I just got them working and will worry about optimization later). I hope that people benefit from my experience in the course of this blog.
Post #1 -- DataAnnotations Model Binder
Post #2 -- Using StructureMap as Controller Factory in ASP.NET MVC
Post #3 -- Implementing the Repository pattern with NHibernate and Fluent NHibernate
Post #4 -- Implementing the NHibernate BytecodeProvider with StructureMap
Post #5 -- Implementing NHibernate.Search
http://weblogs.asp.net/adamgreene/archive/2009/09.aspx
26 July 2012 06:47 [Source: ICIS news]
By Helen Yan
SINGAPORE (ICIS)--Spot butadiene rubber (BR) prices in Asia are likely to be under pressure in August, with values of feedstock butadiene (BD) expected to come off soon after spiking by more than 30% in just a month, market sources said on Thursday.
BR producers are currently seeking $3,100-3,200/tonne (€2,542-2,624/tonne) CFR (cost and freight)
“The feedstock BD prices have peaked at $2,500/tonne CFR NE Asia and may fall to around $2,300-2,400/tonne CFR NE Asia in early August, which will cap the BR prices,” a trader said.
With BD likely to soften in the near term, downstream tyre producers are insisting on lower BR prices or at least a rollover of July contract prices at $2,900-3,000/tonne CFR Asia for August, market sources said.
“We may consider a rollover for August contracts but it will be difficult to consider a discount as our margins will be eroded,” a BR producer said.
BR needs to be priced about $600-700/tonne higher than the values of feedstock BD for the BR makers to break even, industry sources said.
Underlying weakness in demand in the automotive sector amid a slowing global economy is another major factor that will weigh on BR prices.
Consumer confidence has been eroded by the eurozone debt crisis, a faltering
BR is used in the production of tyres for the automotive industry.
“Demand for BR is likely to remain flat as the macro-economic outlook is still very uncertain and does not look good for the rest of the year,” a BR supplier said.
In
A slowing economy, uncertain fuel price regulations, new taxes and a weakening Indian rupee have slowed down sales of new cars. In May, growth in sales of new cars in the country decelerated to 2.8% compared with a 7% pace recorded in the same period in 2011.
The Society of Indian Automobile Manufacturers (SIAM) said car sales for the current fiscal year are expected to grow at a slower pace of 9-11% than the 10-12% clip originally projected in April.
In
Truck tyre makers in
http://www.icis.com/Articles/2012/07/26/9581097/asia-br-under-pressure-as-bd-prices-look-set-to-fall.html
|
provide a simple mechanism for synchronizing access to a variable that is shared by multiple threads. The threads of different processes can use this mechanism if the variable is in shared memory.
The InterlockedCompareExchange function performs an atomic comparison of the Destination value with the Comperand value. If the Destination value is equal to the Comperand value, the Exchange value is stored in the address specified by Destination. Otherwise, no operation is performed.
The variables for InterlockedCompareExchange must be aligned on a 32-bit boundary.
Each object type, such as memory maps, semaphores, events, message queues, mutexes, and watchdog timers, has its own separate namespace. Empty strings ("") are handled as named objects. On Windows desktop-based platforms, synchronization objects all share the same namespace.
http://msdn.microsoft.com/en-us/library/aa908785.aspx
|
Helper To Format Numbers Using Scientific Notation
For those of us who can't remember the formatting codes
def number_to_scientific(num, precision = 3)
  "%.#{precision}e" % num
end

number_to_scientific(10000)
# => "1.000e+04" (older Windows builds print "1.000e004")
In some cases it might be better to use the 'g' code instead of the 'e' code.
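For comparison, a quick illustration (not part of the original snippet) of how the two codes differ: 'e' always uses scientific notation with the given number of decimals, while 'g' keeps the given number of significant digits and falls back to plain notation for small exponents.

```ruby
# 'e': fixed scientific notation with 3 decimals
puts "%.3e" % 12345.678  # => 1.235e+04
# 'g': 3 significant digits, scientific only when the exponent is large
puts "%.3g" % 12345.678  # => 1.23e+04
puts "%.3g" % 123.456    # => 123
```

So 'g' is the better choice when you want compact output and don't care whether the result is in scientific form.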
http://www.dzone.com/snippets/helper-format-numbers-using
|
One such gated design is the long short-term memory (LSTM), which we will discuss in Section 8.9. The Gated Recurrent Unit (GRU) [Cho et al., 2014] is a slightly more streamlined variant that often offers comparable performance and is significantly faster to compute. See also [Chung et al., 2014] for more details. Due to its simplicity we start with the GRU.
8.8.2. Implementation from Scratch¶
To gain a better understanding of the model let us implement a GRU from scratch.
8.8.2.1. Reading the Data Set¶
We begin by reading The Time Machine corpus that we used in Section 8.5. The code for reading the data set is given below:
import d2l
from mxnet import nd
from mxnet.gluon import rnn

batch_size, num_steps = 32, 35
train_iter, vocab = d2l.load_data_time_machine(batch_size, num_steps)
def get_params(vocab_size, num_hiddens, ctx):
    num_inputs = num_outputs = vocab_size
    normal = lambda shape: nd.random.normal(scale=0.01, shape=shape, ctx=ctx)
    three = lambda: (normal((num_inputs, num_hiddens)),
                     normal((num_hiddens, num_hiddens)),
                     nd.zeros(num_hiddens, ctx=ctx))
    W_xz, W_hz, b_z = three()  # Update gate parameters
    W_xr, W_hr, b_r = three()  # Reset gate parameters
    W_xh, W_hh, b_h = three()  # Candidate hidden state parameters
    # Output layer parameters
    W_hq = normal((num_hiddens, num_outputs))
    b_q = nd.zeros(num_outputs, ctx=ctx)
    # Attach gradients to all parameters
    params = [W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h, W_hq, b_q]
    for param in params:
        param.attach_grad()
    return params
Just like the init_rnn_state function defined in Section 8.5, the state initialization function returns a tuple composed of an NDArray with a shape of (batch size, number of hidden units) and with all values set to 0:

def init_gru_state(batch_size, num_hiddens, ctx):
    return (nd.zeros(shape=(batch_size, num_hiddens), ctx=ctx),)

The gru function itself then iterates over the time steps, computing the update gate, the reset gate, the candidate hidden state, and the output at each step, and finally returns

nd.concat(*outputs, dim=0), (H,)
8.8.2.4. Training and Prediction¶
Training and prediction work in exactly the same manner as before.
vocab_size, num_hiddens, ctx = len(vocab), 256, d2l.try_gpu()
num_epochs, lr = 500, 1
model = d2l.RNNModelScratch(len(vocab), num_hiddens, ctx,
                            get_params, init_gru_state, gru)
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)
Perplexity 1.1, 13446 tokens/sec on gpu(0) time traveller well thattime is only a kind of space here is a p traveller it s against reason said filby what reason said.
gru_layer = rnn.GRU(num_hiddens)
model = d2l.RNNModel(gru_layer, len(vocab))
d2l.train_ch8(model, train_iter, vocab, lr, num_epochs, ctx)
Perplexity 1.1, 60693 tokens/sec on gpu(0) time traveller it s against reason said filby what reason said traveller it s against reason said filby what reason said.
http://classic.d2l.ai/chapter_recurrent-neural-networks/gru.html
|
I decided to make a project that would interface with the popular educational software, PowerSchool, and add some special functionality that many schools ask for from the software. It basically displays student profiles and acts as a dashboard for school administrators. It provides data visualizations for grades and behavior traits, and allows users to add comments as they review a student's progress. I utilized a JavaScript chart library, Charts.js, in order to provide some nice dynamic and animated visual representations of data.
I populated my database with real data provided by the school; unfortunately, I cannot post any of the real data here, but you can get the idea of how it would work. I seeded my database from a JSON file, but in the future I hope to write an API for them so their database can serve up the data to a URL in real time, reusing the code written in my seed.rb file.
My models include Users, Students, Grades, Standards, and Courses. Here is an example of some validations and model methods, on my Students model:
class Student < ActiveRecord::Base
  has_many :comments
  has_many :users, through: :comments
  has_many :grades
  has_many :standards, through: :grades
  has_many :courses, through: :grades

  validates :lastfirst, :student_number, :gradelevel, presence: true
  validates_uniqueness_of :student_number

  def uniq_courses
    self.courses.uniq
  end

  def self.search(word)
    if word.present?
      self.all.where('lastfirst LIKE ?', "%#{word}%")
    else
      self.all
    end
  end

  def grades_per_semester(semester)
    self.grades.where("semester = ?", semester)
  end

  def grades_per_course(course)
    self.grades_per_semester("S1").where("course_id = ?", course.id)
  end

  def homs_per_course(course)
    arr = {}
    self.grades_per_course(course).each do |grade|
      if grade.standard.hom?
        arr[grade.standard.standardname] = grade.grade
      end
    end
    arr
  end

  def standards_per_course(course)
    arr = {}
    self.grades_per_course(course).each do |grade|
      if !grade.standard.hom?
        arr[grade.standard.standardname] = grade.grade
      end
    end
    arr
  end
end
As you can see I added some search functionality so users could easily find students. Basically, it’s just a form (utilizing the form_tag ActionView::Helper method) that posts to the index action in the students_controller here:
def index
  if params[:student]
    @students = Student.search(student_params[:search]).sort_by &:lastfirst
  else
    @students = Student.all.sort_by &:lastfirst
  end
end
Simple but effective search function.
The validations protect against incomplete information and ensure that the student_number is unique. You can also see some SQL relationships here: comments and grades act as join tables, connecting students to users, standards, and courses.
I used some nested resources to allow me to post comments to a student's show page. Nesting comments under the student gives URLs like "/students/124/comments" and "/students/124/comments/new". Here is my nested route:
resources :students do
  resources :comments
end
I have basic login, logout, and signup functionality, as well as the ability for users to log in or sign up with their existing Google account using the OmniAuth gem (an absolute lifesaver; it made the process so easy!).
I had a little bit of trouble figuring out how to serve up the data from my models to the front end so that Charts.js could work its magic. I realize this part was a bit out of the scope of this project, but I found a workaround by posting the data, in hash form, to a data attribute in a hidden div, using the content_tag helper:
<%= content_tag :div, class: "student_list", id: "studentshow", data: {student: @s1_data_hash} do %>
<% end %>
Which then, I could grab that data hash with jQuery:
$(document).on('turbolinks:load', function(){ var data_hash = $('#studentshow').data('student'); // all my charts.js code uses that data here });
Make sure you wrap that JS code in a document on load function!
And then users can search the current students, see their grades in dynamic animated graphs, and post comments to share thoughts with other school administrators.
This project is still a work in progress. I have learned a ton already, it was super fun, and I'm looking forward to expanding the functionality to include dynamic student schedules and registration for after-school programs.
Thanks for reading!
https://nictravis.com/my_rails_project
|
Celery is an open-source asynchronous distributed task queue that allows processing of vast amounts of messages. That's a mouthful, of course. Let me explain it with a concrete example: the idea is that, for instance, activation emails for new sign-ups on your website are handled via tasks that are distributed to and executed concurrently on one or more servers. The way it works is that a task is sent over a message queue like RabbitMQ. This is also often referred to as a "message broker". The servers that will execute the tasks, often referred to as "workers", are listening for incoming tasks from the broker and will execute them. Obviously, the benefit is that your main web application is offloaded and can continue normal operation, assuming the tasks will be processed at a later time. I found the following a nice tutorial:
Below is a picture to make clear how this works in principle:
On the client side, a function is called to put your task onto the message queue. The worker machines listen to the queue, and when an incoming task is available, the Celery daemon executes that particular function.
Here is an overview of my setup. We will have Celery installed on the client side. On server side, we install RabbitMQ and Celery.
Installing Celery is really simple. I followed these steps:
ubuntu@ubuntu-celery-client:~/Celery$ sudo apt-get update ubuntu@ubuntu-celery-client:~/Celery$ sudo apt-get install python-pip ubuntu@ubuntu-celery-client:~/Celery$ sudo pip install Celery
The above example is run on the client side so to speak. Obviously, you also have to repeat the above for all the distributed servers that are going to execute the tasks.
Installing RabbitMQ is also not very difficult. Do the following:
ubuntu@ubuntu-celery-server:~/Celery$ sudo apt-get update ubuntu@ubuntu-celery-server:~/Celery$ sudo apt-get install rabbitmq-server
Now, we need to configure RabbitMQ. For simplicity, I will have a user “ubuntu” with password “ubuntu”
ubuntu@ubuntu-celery-server:~/Celery$ sudo rabbitmqctl add_user ubuntu ubuntu
ubuntu@ubuntu-celery-server:~/Celery$ sudo rabbitmqctl add_vhost vhost_ubuntu
ubuntu@ubuntu-celery-server:~/Celery$ sudo rabbitmqctl set_permissions -p vhost_ubuntu ubuntu ".*" ".*" ".*"
Now that both Celery and RabbitMQ are installed and configured properly, let’s create an easy example of how this all works.
On the client side, we write the “client.py” script
from celery.execute import send_task

results = []
for x in range(1, 100):
    results.append(send_task("tasks.multiply", [200, 200]))
Note, the following snippet would also work and I even consider it a bit more cleaner:
from celery import Celery

results = []
celery = Celery()
celery.config_from_object('celeryconfig')
for x in range(1, 100):
    results.append(celery.send_task("tasks.multiply", [200, 200]))
In the above snippet, we have to do 100 multiplications. Instead of doing them in the same process, we will send these tasks to a different server that will take care of the execution. In order to execute the task, we use Celery's send_task command. This returns an AsyncResult object, which we store so that we can check the task's status and retrieve its result later.
On the server side, we write the “tasks.py” script:
from celery.task import task

@task
def multiply(x, y):
    multiplication = x * y
    return "The product is " + str(multiplication)
Before we can execute the scripts, we need to tell Celery where the broker can be found. We do this by creating a celeryconfig.py file that contains the following content:
BROKER_HOST = "173.39.241.238"  # IP address of server B, which runs RabbitMQ and Celery
BROKER_PORT = 5672
BROKER_USER = "ubuntu"          # username for RabbitMQ
BROKER_PASSWORD = "ubuntu"      # password for RabbitMQ
BROKER_VHOST = "vhost_ubuntu"   # vhost as configured on the RabbitMQ server
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("tasks",)
This file is stored on both servers in the same directory as your other scripts.
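As an aside (not part of the original setup): newer Celery versions take a single broker URL instead of the separate BROKER_* settings. The values from celeryconfig.py combine into one amqp:// URL, which is also the form the worker echoes in its "transport" startup line.

```python
# Combine the celeryconfig.py values into a single broker URL
# (the form newer Celery versions expect as the broker setting).
user, password = "ubuntu", "ubuntu"
host, port, vhost = "173.39.241.238", 5672, "vhost_ubuntu"
broker_url = "amqp://%s:%s@%s:%d/%s" % (user, password, host, port, vhost)
print(broker_url)  # amqp://ubuntu:ubuntu@173.39.241.238:5672/vhost_ubuntu
```

Either style points the client and the workers at the same RabbitMQ vhost, which is all that matters for the example to work.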
On the server side, we ensure that Celery is running:
ubuntu@ubuntu-celery-server:~/Celery$ celery worker -l info -------------- celery@ubuntu-celery-7696e291-37d9-4d0a-802e-fcc046d9e72d v3.1.16 (Cipater) ---- **** ----- --- * *** * -- Linux-3.13.0-36-generic-x86_64-with-Ubuntu-14.04-trusty -- * - **** --- - ** ---------- [config] - ** ---------- .> app: default:0x7f6764948750 (.default.Loader) - ** ---------- .> transport: amqp://ubuntu:**@173.39.241.238:5672/vhost_ubuntu - ** ---------- .> results: amqp - *** --- * --- .> concurrency: 1 (prefork) -- ******* ---- --- ***** ----- [queues] -------------- .> celery exchange=celery(direct) key=celery [tasks] . tasks.multiply [2014-10-29 13:22:08,380: INFO/MainProcess] Connected to amqp://ubuntu:**@173.39.241.238:5672/vhost_ubuntu [2014-10-29 13:22:08,389: INFO/MainProcess] mingle: searching for neighbors [2014-10-29 13:22:09,399: INFO/MainProcess] mingle: all alone [2014-10-29 13:22:09,410: WARNING/MainProcess] celery@ubuntu-celery-7696e291-37d9-4d0a-802e-fcc046d9e72d ready.
The server is now configured and waiting for tasks to execute. On the client server, we execute the client.py file:
ubuntu@ubuntu-celery-client:~/Celery$ python client.py
If all is well, you should see output similar to the one below:
[2014-10-29 13:23:53,169: INFO/MainProcess] Received task: tasks.multiply[ec6273e2-2adf-4a98-b3ab-7d2b95bb72df]
[2014-10-29 13:23:53,176: INFO/MainProcess] Received task: tasks.multiply[c94d8e5a-4afc-4920-916f-b33fca0dc94c]
[2014-10-29 13:23:53,186: INFO/MainProcess] Received task: tasks.multiply[8cdcb1de-31f5-455c-b785-19d8eb9281f2]
[2014-10-29 13:23:53,187: INFO/MainProcess] Received task: tasks.multiply[5ecb8a03-2af4-4d6f-ab2f-e8b0f4398f54]
[2014-10-29 13:23:53,188: INFO/MainProcess] Received task: tasks.multiply[1d8c3efb-ad20-42e9-976b-34b8be0a5e39]
[2014-10-29 13:23:53,205: INFO/MainProcess] Task tasks.multiply[ec6273e2-2adf-4a98-b3ab-7d2b95bb72df] succeeded in 0.0337770140031s: 'The product is 40000'
[2014-10-29 13:23:53,208: INFO/MainProcess] Received task: tasks.multiply[5c42dac8-2f4f-4639-9089-d91b2873dff1]
[2014-10-29 13:23:53,219: INFO/MainProcess] Task tasks.multiply[c94d8e5a-4afc-4920-916f-b33fca0dc94c] succeeded in 0.0136614609946s: 'The product is 40000'
[2014-10-29 13:23:53,221: INFO/MainProcess] Received task: tasks.multiply[a72e756b-1e99-4455-ad18-4110cbfd3e1e]
[2014-10-29 13:23:53,224: INFO/MainProcess] Task tasks.multiply[8cdcb1de-31f5-455c-b785-19d8eb9281f2] succeeded in 0.00538706198859s: 'The product is 40000'
[2014-10-29 13:23:53,226: INFO/MainProcess] Received task: tasks.multiply[004296e3-f931-4075-ba83-09a7804b5e49]
[2014-10-29 13:23:53,229: INFO/MainProcess] Task tasks.multiply[5ecb8a03-2af4-4d6f-ab2f-e8b0f4398f54] succeeded in 0.00483364899992s: 'The product is 40000'
[2014-10-29 13:23:53,231: INFO/MainProcess] Received task: tasks.multiply[18172600-70f8-4402-a923-db02d71718a5]
[2014-10-29 13:23:53,235: INFO/MainProcess] Task tasks.multiply[1d8c3efb-ad20-42e9-976b-34b8be0a5e39] succeeded in 0.00486789099523s: 'The product is 40000'
.....
You can see that the first five tasks were retrieved from the queue, and then the workers started to execute them successfully, each displaying the multiplication result.
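The client/worker split above is a distributed version of the familiar submit-and-collect pattern. As a rough local analogy — using only the Python standard library, not Celery’s API — the same flow of 100 multiply tasks looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def multiply(x, y):
    # Same task body as tasks.py, but executed by a local worker thread
    return "The product is " + str(x * y)

# Submit 100 multiplications and collect the results, much like
# send_task() queues work for a remote Celery worker to pick up.
with ThreadPoolExecutor(max_workers=1) as pool:
    futures = [pool.submit(multiply, 200, 200) for _ in range(100)]
    results = [f.result() for f in futures]

print(results[0])    # The product is 40000
print(len(results))  # 100
```

The difference with Celery is only where the workers live: here they are threads in the same process, while Celery dispatches over a message broker to another machine.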
Source: https://blog.wimwauters.com/2014/10/ (CC-MAIN-2020-16, en, refinedweb)
TL;DR – C# enum refers to a keyword for indicating an enumeration of integral constants.
Declaring enum: Possible Types and Use
The C# enum keyword indicates a collection of named integral constants.
Specifying C# enums within a namespace is the best practice, as all the classes will be able to access it. It is also possible to place an enum within a struct or a class.
Here is a basic code example, showing the way to declare enums:
enum grades {A, B, C, D, E, F};
Note: by default, the type of C# enum elements is int. You can set it to a different integral numeric type by adding a colon followed by the type.
The following code reveals the way to manipulate the type of elements:
enum grades : byte{A, B, C, D, E, F};
Here is a list of C# enum types that can replace int:
- sbyte
- byte
- short
- ushort
- int
- uint
- long
- ulong
If you need other options, converting a C# enum to a string is also possible. To convert enums to strings, apply the ToString method:
using System;

public class EnumSample
{
    enum Holidays { Christmas = 1, Easter = 2 };

    public static void Main()
    {
        Enum myHolidays = Holidays.Christmas;
        Console.WriteLine("The value of this instance is '{0}'", myHolidays.ToString());
    }
}
Note: you cannot set an enum to string as enums can only have integers. The conversion above simply converts an already existing enum into a string.
By default, every enumeration begins at 0, and the value of each subsequent element increases by 1. However, you can override this default by assigning explicit values to elements in the set. In the C# enum example below, an initializer assigns the value 1 to the first element, and the remaining elements continue counting from there.
enum grades {A=1, B, C, D, E, F};
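The same auto-numbering rule appears in other languages' enum facilities. As a purely illustrative cross-language aside (the class below is invented for the example, not part of the C# tutorial), Python's enum module behaves identically when you seed the first value and let the rest auto-increment:

```python
from enum import IntEnum, auto

class Grades(IntEnum):
    # A is seeded at 1, like "enum grades {A=1, ...}" above;
    # auto() continues counting from the previous value.
    A = 1
    B = auto()  # 2
    C = auto()  # 3
    D = auto()
    E = auto()
    F = auto()  # 6

print(Grades.B.value)  # 2
print(Grades.F.value)  # 6
```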
You can also iterate over an enum in code. For instance, the following code prints out all the elements in an enum:
using System;

public class EnumSample
{
    enum Holidays { Thanksgiving = 1, Christmas = 2, Easter = 3 };

    public static void Main()
    {
        foreach (var item in Enum.GetNames(typeof(Holidays)))
        {
            Console.WriteLine(item);
        }
    }
}
Adding Flags to C# Enum
There are two kinds of C# enums: simple and flag. A flag enum supports bitwise operations on its values.

The [Flags] attribute marks an enum as representing a set of possible values rather than a single value, which is why such enums are typically combined with bitwise operators.

The following C# enum example shows the effect of the [Flags] attribute when values are combined with the bitwise OR operator:

using System;

public class Program
{
    enum Roommates { Josh = 1, Carol = 2, Christine = 4, Paul = 8 }

    [Flags]
    enum RoommatesPav { Josh = 1, Carol = 2, Christine = 4, Paul = 8 }

    public static void Main()
    {
        var str1 = (Roommates.Josh | Roommates.Paul).ToString();
        var str2 = (RoommatesPav.Josh | RoommatesPav.Paul).ToString();
        Console.WriteLine(str1); // prints "9" - without [Flags] the combined value has no name
        Console.WriteLine(str2); // prints "Josh, Paul" - [Flags] lists the set members
    }
}
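Flag-style enums are not unique to C#. As a purely illustrative cross-language aside (the class and member names below are invented for the example), Python's enum.Flag composes values with bitwise OR in the same way:

```python
from enum import Flag

class Roommates(Flag):
    JOSH = 1
    CAROL = 2
    CHRISTINE = 4
    PAUL = 8

# Combining members with bitwise OR yields a composite member whose
# value is the sum of the set bits, just like a C# [Flags] enum.
pair = Roommates.JOSH | Roommates.PAUL
print(pair.value)               # 9
print(Roommates.JOSH in pair)   # True
print(Roommates.CAROL in pair)  # False
```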
C# enum: Useful Tips
- C# enums define integers more consistently and clearly. There are fewer chances that your code will contain invalid constants.
- It is usually better to represent items with enum when their order in an enumeration is important.
- Be careful not to assign the same value to different elements represented by enum. It can cause inconsistencies in your code and produce unwanted results.
Source: https://www.bitdegree.org/learn/c-sharp-enum
Build a Secure REST Application Using Jersey
REST is one of the most used architectural styles when it comes to developing web services. In Java, we have the JAX-RS specification that defines how to create a RESTful application. To show the power of the spec, Jersey, the reference implementation of JAX-RS, was created. Building JAX-RS endpoints only requires adding annotations to your code. Keep reading to see how easy it is!
In this tutorial you’ll create a TODO list service that will perform all four CRUD functions (Create, Retrieve, Update, and Delete), using the Jersey API. In the end, you’ll add security to make sure only authenticated users can call your services.
Prerequisites:
Java 8+ - Install with SDKMAN or directly from AdoptOpenJDK
An Okta account - More on that below
Create Your Jersey REST Application
You’ll use the Spring Initializr to create the application.
Go to the Spring Initializr site and fill in the following information:
Project: Maven Project
Language: Java
Group:
com.okta
Artifact:
jersey-rest
Dependencies:
Jersey
When you have filled in all the fields, click Generate. This action will download the project, which you can unzip to your preferred folder.
You can also use the command line to generate the project. If you prefer this method, just go to your terminal and execute the following command:
curl -d language=java \ -d dependencies=jersey \ -d packageName=com.okta \ -d name=jersey-rest \ -d type=maven-project \ -o jersey-rest.zip
That’s it! You created a Spring Boot application that imports Jersey as a dependency. With that application, you’ll be able to use Jersey to develop your REST endpoints.
Now that your Java project structure is created, you can start developing your app.
Configure Your REST App to Work with Jersey
Before you can start programming your REST application, you need to specify in which packages you’ll have endpoints.
Inside your project, create a src/main/java/com/okta/jerseyrest/configuration/ directory and add the following JerseyConfiguration class:
package com.okta.jerseyrest.configuration;

import org.glassfish.jersey.server.ResourceConfig;
import org.springframework.context.annotation.Configuration;

@Configuration // (1) registers this class as a Spring configuration bean
public class JerseyConfiguration extends ResourceConfig { // (2) ResourceConfig is Jersey's own configuration class
    public JerseyConfiguration() {
        packages("com.okta.jerseyrest.resource"); // (3) the package Jersey scans for resource classes
    }
}
When using Jersey, a class that contains REST endpoints is called a resource, hence the name resource in the package. A resource fulfills a role similar to a Controller in Spring.
Now that you configured Jersey to read your endpoints, let’s create the first one.
Create the First Entry in Your Jersey REST App
Let’s start creating your TODO app. The first endpoint you’re going to develop is to create a task.
In REST applications, you use the POST method to create an entity. Your goal is to make a POST request to the /tasks URI, passing the following payload:
{ "description" : "buy bread" }
The result of this request should be an identifier to the task you just created.
Create a class that represents the JSON above: create the src/main/java/com/okta/jerseyrest/request directory and add the following new TaskRequest class:
package com.okta.jerseyrest.request;

public class TaskRequest {

    private String description;

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}
The class above is a simple Plain Old Java Object (POJO) that will receive the JSON payload the user sends to the API. Jersey uses Jackson, a JSON library that can serialize and deserialize JSON objects to Java classes automatically. All you need to do is declare the class as a parameter in your endpoint and you’ll receive the information already deserialized.
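As a purely illustrative aside (the class name mirrors the Java POJO but the Python code is invented for the example), the JSON-to-object binding that Jackson performs behind the scenes is equivalent to this manual step:

```python
import json

class TaskRequest:
    """Python stand-in for the Java TaskRequest POJO (illustrative only)."""
    def __init__(self, description=""):
        self.description = description

# Jackson does this binding automatically for Jersey endpoints; here we
# perform the same JSON-to-object step by hand to show what it amounts to.
payload = '{ "description" : "buy bread" }'
request = TaskRequest(**json.loads(payload))
print(request.description)  # buy bread
```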
Next, create a class to represent the task inside your application model. Create a new src/main/java/com/okta/jerseyrest/model directory and add the following new Task class:
package com.okta.jerseyrest.model;

import java.util.UUID;

public class Task {

    private UUID id;
    private String description;

    public Task(UUID id, String description) {
        this.id = id;
        this.description = description;
    }

    public UUID getId() {
        return id;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    public String getDescription() {
        return description;
    }
}
In an advanced scenario, this class would represent data saved on a database, for instance. Here you have both the description of the task and the ID that you use to identify which task you’re referring to.
Now that you have both the model and the payload classes, you can start working on your endpoint to create the task itself.
Create the src/main/java/com/okta/jerseyrest/resource directory and add the following TaskResource class:
package com.okta.jerseyrest.resource;

import com.okta.jerseyrest.model.Task;
import com.okta.jerseyrest.request.TaskRequest;

import javax.inject.Singleton;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import java.util.*;

@Path("/tasks") // (1) base URI for every endpoint in this resource
@Singleton      // (2) a single instance serves all requests, so the task map is shared
public class TaskResource {

    private Map<UUID, Task> tasks = new LinkedHashMap<>();

    @POST                                 // (3) handles HTTP POST requests
    @Consumes(MediaType.APPLICATION_JSON) // (4) accepts a JSON request body
    public String createTask(TaskRequest request) {
        UUID taskId = UUID.randomUUID();
        tasks.put(taskId, new Task(taskId, request.getDescription()));
        return taskId.toString();
    }
}
You implemented all the code for your POST endpoint! Let’s call it and see if it works. Start your application by executing the following command inside your project folder:
./mvnw spring-boot:run
After your application starts, execute the following command in your terminal:
curl -X POST \ \ -H 'Content-Type: application/json' \ -d '{ "description" : "do the dishes" }'
The result of the request should be an id, such as the following one:
d7fc8d86-d7fe-47b9-a6ac-f5e8e28e2ea9
It worked! Now let’s create an endpoint to list all the tasks you already have created.
List All the Entries in Your Jersey REST App
Go inside the TaskResource class and add the following code:
@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Task> getTasks() {
    return new ArrayList<>(tasks.values());
}
This method is also simple. It is annotated with @GET, the HTTP method used to retrieve information from services.
Since you’re going to return a JSON response, you need to indicate this in the method as well. You do this by adding the @Produces annotation and specifying MediaType.APPLICATION_JSON as its value.
The last step is to define the return type of the method. Here you declared List<Task>. Jersey will automatically serialize this using Jackson, transforming the content into JSON, which is the type specified in the @Produces annotation.
Start your server with your latest changes. Since you’re not saving the tasks to disk (in a database, for instance), every time you restart your application the data is lost. Create a new task again, and keep track of the returned ID.
With the task created again, execute the following command in your terminal:
curl -X GET
Your response should be an array with all the tasks you created so far. In my case, the result was:
[{"id":"d7fc8d86-d7fe-47b9-a6ac-f5e8e28e2ea9","description":"do the dishes"}]
Now that you can both create and list all tasks, the next step is to update an existing task.
Update an Entry
To update a task you are going to make a PUT request to the tasks/<task_id> URI, where <task_id> is the ID of the task you want to update.
Inside the TaskResource, add the following method:
@PUT               // (1) handles HTTP PUT requests
@Path("/{taskId}") // (2) appends a path parameter to the base /tasks URI
public Response updateTask(@PathParam("taskId") UUID taskId, TaskRequest request) { // (3) binds the path parameter and the JSON body
    if (!tasks.containsKey(taskId)) {
        // return 404
        return Response.status(Response.Status.NOT_FOUND).build(); // (4) unknown ID
    }
    Task task = tasks.get(taskId);
    task.setDescription(request.getDescription());
    // return 204
    return Response.noContent().build();
}
It’s time to test if the endpoint is working. Start your application again and execute the following code:
curl -X POST \ \ -H 'Content-Type: application/json' \ -d '{ "description" : "do the dishes" }'
The command above will create a new task, just like you did before. Now that you have created a task again, you can update its description using the following command:
curl -X PUT \<task_id> \ -H 'Content-Type: application/json' \ -d '{ "description" : "clean the house" }'
Replace <task_id> with the ID of one of the tasks you created previously.
Great job! If you list your tasks again you’ll see that the description changed.
You implemented all the CRUD functions, except for the last one. Let’s finish it by implementing the delete endpoint.
Delete an Entry
To delete a task you’re going to make a DELETE request to the tasks/<task_id> URI. This is the same URI that is used to update a task; the only difference is the HTTP method used to perform the action.
Add the following method to the TaskResource class:
@DELETE            // (1) handles HTTP DELETE requests
@Path("/{taskId}")
public Response deleteTask(@PathParam("taskId") UUID taskId) { // (2) the task ID comes from the path
    tasks.remove(taskId);
    return Response.noContent().build();
}
To delete the task you’re just removing it from the map, by passing the task ID.
Let’s test it! Run the application with the latest changes, then go to your terminal and type the following command:
curl -X POST \ \ -H 'Content-Type: application/json' \ -d '{ "description" : "do the dishes" }'
The command above will create a new task for you, with the description "do the dishes". Copy the ID of the task you just created and replace <task_id> in the command below:
curl -X DELETE<task_id>
After you execute the command the task is going to be deleted. If you list your tasks again, you’ll notice that the task is not there anymore.
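At this point the resource's behavior boils down to operations on a keyed map. The sketch below restates the create/update/delete semantics in plain Python (an illustrative stand-in, not the Jersey code) to make the status-code logic easy to check:

```python
import uuid

class TaskStore:
    """In-memory stand-in for TaskResource's LinkedHashMap<UUID, Task>."""

    def __init__(self):
        self.tasks = {}

    def create(self, description):
        # POST /tasks: generate an id, store the task, return the id
        task_id = str(uuid.uuid4())
        self.tasks[task_id] = description
        return task_id

    def update(self, task_id, description):
        # PUT /tasks/<task_id>: 404 if unknown, else update and return 204
        if task_id not in self.tasks:
            return 404
        self.tasks[task_id] = description
        return 204

    def delete(self, task_id):
        # DELETE /tasks/<task_id>: removal is unconditional, always 204
        self.tasks.pop(task_id, None)
        return 204

store = TaskStore()
tid = store.create("do the dishes")
print(store.update(tid, "clean the house"))  # 204
print(store.update("missing-id", "x"))       # 404
print(store.delete(tid))                     # 204
print(tid in store.tasks)                    # False
```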
Now that you have a CRUD application up and running, the last step is to make sure only authenticated users can have access to it.
Secure Your Jersey REST Application
You’re going to use Okta to authenticate your users, so let’s start by creating an account.
If you don’t have an Okta account, go ahead and create one. After creating it, go through the following steps:
Log in to your account
Click on Applications
Click on Add Application. You will be redirected to the following page:
Select Web and click Next
Fill in the following options in the form:
Name: my-first-app
Base URIs:
Login redirect URL:
Grant Type allowed:
Authorization Code
Implicit(Hybrid)
Allow Access Token with implicit grant type
Click Done
Now that you have your Okta application you can use it to authenticate inside your app.
Secure Your Jersey Service
Let’s start by adding Okta’s library inside your project.
Go to the pom.xml and add the following dependency inside the <dependencies> tag:
<dependency>
    <groupId>com.okta.spring</groupId>
    <artifactId>okta-spring-boot-starter</artifactId>
    <version>1.3.0</version>
</dependency>
This library will integrate with the Okta app you just created. It will also add Spring Security to your current application.
Inside src/main/java/com/okta/jerseyrest/configuration create the following SecurityConfiguration class:
package com.okta.jerseyrest.configuration;

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.oauth2ResourceServer()
            .and()
            .authorizeRequests()
            .anyRequest()
            .authenticated();
    }
}
The configuration above will ensure all your requests are authenticated. If you were using Spring MVC you wouldn’t need to add this configuration, but since you’re developing with Jersey you need to make sure its endpoints are also included in the authentication process.
Now that you have added the library and the configuration, set the following property in src/main/resources/application.properties inside your project:
okta.oauth2.issuer: https://{yourOktaDomain}/oauth2/default
If you want to avoid adding this configuration to source control, you can use environment variables:
OKTA_OAUTH2_ISSUER=https://{yourOktaDomain}/oauth2/default
Your {yourOktaDomain} value is visible in your Okta dashboard: click Dashboard in the menu and you will see the Org URL in the upper right corner.
Now your application is secure!
Let’s try to make a request to one of your endpoints. Run your application with your latest changes, then go to your terminal and execute the following command:
curl -X GET -I
The result should be similar to this one:
HTTP/1.1 401 Set-Cookie: JSESSIONID=06775BFFBFDB74DA632CB6F4D973ADA4; Path=/; HttpOnly WWW-Authenticate: Bearer X-Content-Type-Options: nosniff X-XSS-Protection: 1; mode=block Cache-Control: no-cache, no-store, max-age=0, must-revalidate Pragma: no-cache Expires: 0 X-Frame-Options: DENY Content-Type: text/html;charset=utf-8 Content-Language: en Content-Length: 802 Date: Mon, 30 Dec 2019 12:52:52 GMT
The status code of the response is HTTP 401, which means the request was not authorized to execute. In other words, your application is now secure! You need a valid token to make a request to your endpoints.
Let’s see how you can generate a valid token and how to add it to your request.
Generate a Valid Token
To authorize your request you need to add the Authorization header. The header provides the type of authentication and the token, and will look like the snippet below:
-H 'Authorization: Bearer <token>'
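However you send the request, the header is just a string prefixed with "Bearer ". As a hedged Python sketch (the token value and the localhost URL are placeholders, not values from this tutorial), here is the same header attached with only the standard library:

```python
import urllib.request

# Hypothetical token and URL, purely for illustration.
token = "eyJraWQiOi-truncated-example-token"
req = urllib.request.Request("http://localhost:8080/tasks")
req.add_header("Authorization", "Bearer " + token)

# The request object now carries the bearer token; sending it would be
# urllib.request.urlopen(req), assuming the service is running locally.
print(req.get_header("Authorization").split()[0])  # Bearer
```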
To generate the token you can access the OIDC Debugger and provide the following information:
Authorize URI:
https://{yourOktaDomain}/oauth2/default/v1/authorize
Redirect URI:
Client ID:
{yourClientID}
Scope:
openid
State:
dev
Response type:
token
You can keep the already filled-in value for the "Nonce" field.
After you fill in all the fields, click on Send Request. You’ll be redirected to your Okta’s App login page:
Enter your username and password, and click Sign In. You’ll be redirected to the OIDC Debugger again, where you’ll see the generated token:
Copy the value and replace the <token> placeholder in the command below:
curl -X GET -o \ -H 'Authorization: Bearer <token>'
You’ll see that the command now executes successfully:
< HTTP/1.1 200 < X-Content-Type-Options: nosniff < X-XSS-Protection: 1; mode=block < Cache-Control: no-cache, no-store, max-age=0, must-revalidate < Pragma: no-cache < Expires: 0 < X-Frame-Options: DENY < Content-Type: application/json < Content-Length: 2 < Date: Mon, 30 Dec 2019 10:15:36 GMT < * Connection #1 to host localhost left intact []
Let’s register a task to make sure everything works as it should. Execute the following command in your terminal, replacing <token> with your token:
curl -X POST \ \ -H 'Authorization: Bearer <token>' \ -H 'Content-Type: application/json' \ -d '{ "description" : "Test my Jersey App!" }'
Now let’s execute the first command again:
curl -X GET \ -H 'Authorization: Bearer <token>'
It now returns the task you just created!
[{"id":"a44dba4f-d239-441a-925d-d9248aeb4925","description":"Test my Jersey App!"}]
Well done! You managed to create a CRUD service using Jersey! Even better, the service is secure and it took you minimal effort to make it happen.
You can view the source code for this tutorial by going to its GitHub repository.
Learn More About Jersey and REST!
Do you want to learn more about Java, REST, Jersey, and secure applications? Here are some links you might want to read:
For more posts like this one, follow @oktadev on Twitter, follow us on LinkedIn, or subscribe to our YouTube channel.
Source: https://developer.okta.com/blog/2019/12/30/java-jersey-jaxrs
- Dan Williams authored
NVDIMM namespaces, in addition to accepting "struct bio" based requests, also have the capability to perform byte-aligned accesses. By default only the bio/block interface is used. However, if another driver can make effective use of the byte-aligned capability, it can claim the namespace interface and use the byte-aligned ->rw_bytes() interface. The BTT driver is the first consumer of this mechanism, using it to add atomic sector update semantics to a pmem or blk namespace. This patch is the sysfs infrastructure to allow configuring a BTT instance for a namespace. Enabling that BTT and performing i/o is in a subsequent patch.

Cc: Greg KH <[email protected]>
Cc: Neil Brown <[email protected]>
Signed-off-by: Dan Williams <[email protected]>

8c2f7e86
Source: https://gitlab.com/post-factum/pf-kernel/-/blob/8c2f7e8658df1d3b7cbfa62706941d14c715823a/drivers/nvdimm/label.c
Provided by: allegro4-doc_4.4.2-10_all
NAME
fade_from_range - Gradually fades a part of the palette between two others. Allegro game programming library.
SYNOPSIS
#include <allegro.h>

void fade_from_range(const PALETTE source, const PALETTE dest, int speed, int from, int to);
DESCRIPTION
Gradually fades a part of the palette from the source palette to the dest palette.

SEE ALSO
fade_from(3alleg4)
Source: http://manpages.ubuntu.com/manpages/bionic/man3/fade_from_range.3alleg4.html
Containerization is defined as a form of operating system virtualization, through which applications are run in isolated user spaces called containers, all using the same shared operating system (OS). A container is essentially a fully packaged and portable computing environment.
Containers are highly efficient because there is less overhead during startup and no need to set up a separate guest OS for each application, since they all share the same OS kernel. Because of this high efficiency, containerization is commonly used for packaging up the many individual microservices that make up modern apps. This setup works because all containers run minimal, resource-isolated processes that others cannot access.
Think of a containerized application as the top layer of a multi-tier cake.
Containerization as we know it evolved from cgroups, a feature for isolating and controlling resource usage (e.g., how much CPU and RAM and how many threads a given process can access) within the Linux kernel. Cgroups became the basis of Linux containers (LXC), which added more advanced features for namespace isolation of components, such as routing tables and file systems. An LXC container is very lightweight and can be run in large numbers even on relatively limited machines.
LXC serves as the basis for Docker, which launched in 2013 and quickly became the most popular container technology – effectively an industry standard, although the specifications set by the Open Container Initiative (OCI) have since become central to containerization. Docker is a contributor to the OCI specs, which specify standards for the image formats and runtimes that container engines use.
Someone booting a container, Docker or otherwise, can expect an identical experience regardless of the computing environment. The same set of containers can be run and scaled whether the user is on a Linux distribution or even Microsoft Windows. This cross-platform compatibility is essential to today’s digital workspaces. Like virtual machines (VMs), containers run independently of each other using actual resources from the underlying infrastructure, but the differences between the two are more important.
Still, running multiple VMs from relatively powerful hardware is still a common paradigm in application development and deployment. Digital workspaces commonly feature both virtualization and containerization, toward the common goal of making applications as readily available and scalable as possible to employees.
Containerized apps can be readily delivered to users in a digital workspace. More specifically, containerizing a microservices-based application, a set of Citrix ADCs or a database (among other possibilities) has a broad spectrum of distinctive benefits, ranging from superior agility during software development to easier cost controls.
Compared to VMs, containers are simpler to set up, whether a team is using a UNIX-like OS or Windows. The necessary developer tools are universal and easy to use, allowing for the quick development, packaging and deployment of containerized applications across OSes. DevOps engineers and teams can (and do) leverage containerization technologies to accelerate their workflows.
A container doesn’t require a full guest OS or a hypervisor. That reduced overhead translates into more than just faster boot times, smaller memory footprints and generally better performance, though. It also helps trim costs, since organizations can reduce some of their server and licensing costs, which would have otherwise gone toward supporting a heavier deployment of multiple VMs. In this way, containers enable greater server efficiency and cost-effectiveness.
Containers make the ideal of “write once, run anywhere” a reality. Each container has been abstracted from the host OS and will run the same in any location. As such, it can be written for one host environment and then ported and deployed to another, as long as the new host supports the container technologies and OSes in question. Linux containers account for a big share of all deployed containers and can be ported across different Linux-based OSes whether they’re on-prem or in the cloud. On Windows, Linux containers can be reliably run inside a Linux VM or through Hyper-V isolation. Such compatibility supports digital workspaces, in which numerous clouds, devices and workflows intersect.
If one container fails, others sharing the OS kernel are not affected, thanks to the user-space isolation between them. That benefits microservices-based applications, in which potentially many different components support a larger program. Microservices within specific containers can be repaired, redeployed and scaled without causing downtime of the application.
Container orchestration via a solution such as Kubernetes platform makes it practical to manage containerized apps and services at scale. Using Kubernetes, it’s possible to automate rollouts and rollbacks, orchestrate storage systems, perform load balancing and restart any failing containers. Kubernetes is compatible with many container engines including Docker and OCI-compliant ones.
A container may support almost any type of application that in previous eras would have been traditionally virtualized or run natively on a machine. At the same time, there are several computing paradigms that are especially well-suited to containerization, including:
The microservices that comprise an application may be packaged and deployed in containers and managed on scalable cloud infrastructure. Key benefits of microservice containerization include minimal overhead, independently scaling, and easy management via a container orchestrator such as Kubernetes.
Citrix ADC can help with the transition from monolithic to microservices-based applications. More specifically, it assists admins, developers and site reliability engineers with networking issues such as traffic management when shifting from monolithic to microservices-based architectures.
Source: https://www.citrix.com/es-mx/glossary/what-is-containerization.html
A video frame as output by the VDP scanline conversion unit, before any postprocessing filters are applied. More...
#include <RawFrame.hh>
A video frame as output by the VDP scanline conversion unit, before any postprocessing filters are applied.
Definition at line 25 of file RawFrame.hh.
Definition at line 6 of file RawFrame.cc.
References openmsx::FrameSource::FIELD_NONINTERLACED, openmsx::DiskImageUtils::format(), openmsx::FrameSource::init(), openmsx::MemBuffer< T, ALIGNMENT >::resize(), setBlank(), and openmsx::FrameSource::setHeight().
Definition at line 57 of file RawFrame.hh.
Abstract implementation of getLinePtr().
Pixel type is unspecified (implementations that care about the exact type should get it via some other mechanism).
Implements openmsx::FrameSource.
Definition at line 40 of file RawFrame.cc.
References openmsx::MemBuffer< T, ALIGNMENT >::data(), and openmsx::FrameSource::getHeight().
Definition at line 31 of file RawFrame.hh.
References openmsx::MemBuffer< T, ALIGNMENT >::data().
Gets the number of display pixels on the given line.
Implements openmsx::FrameSource.
Definition at line 34 of file RawFrame.cc.
References openmsx::FrameSource::getHeight().
Definition at line 35 of file RawFrame.hh.
Returns the distance (in pixels) between two consecutive lines.
Is meant to be used in combination with getMultiLinePtr(). The result is only meaningful when hasContiguousStorage() returns true (also only in that case does getMultiLinePtr() return more than 1 line).
Reimplemented from openmsx::FrameSource.
Definition at line 49 of file RawFrame.cc.
Returns true when two consecutive rows are also consecutive in memory.
Reimplemented from openmsx::FrameSource.
Definition at line 54 of file RawFrame.cc.
Definition at line 46 of file RawFrame.hh.
References openmsx::FrameSource::getHeight().
Referenced by RawFrame().
Definition at line 39 of file RawFrame.hh.
References openmsx::FrameSource::getHeight().
Source: http://openmsx.org/doxygen/classopenmsx_1_1RawFrame.html
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    AudioSource audioSource;

    void Start()
    {
        audioSource = GetComponent<AudioSource>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // Debug-draw all contact points and normals
        foreach (ContactPoint contact in collision.contacts)
        {
            Debug.DrawRay(contact.point, contact.normal, Color.white);
        }

        // Play a sound if the colliding objects had a big impact.
        if (collision.relativeVelocity.magnitude > 2)
            audioSource.Play();
    }
}
Another example:
// A grenade
// instantiates an explosion prefab when hitting a surface
// then destroys itself
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    public Transform explosionPrefab;

    void OnCollisionEnter(Collision collision)
    {
        ContactPoint contact = collision.contacts[0];

        // Rotate the object so that the y-axis faces along the normal of the surface
        Quaternion rot = Quaternion.FromToRotation(Vector3.up, contact.normal);
        Vector3 pos = contact.point;
        Instantiate(explosionPrefab, pos, rot);
        Destroy(gameObject);
    }
}
|
https://docs.unity3d.com/2017.4/Documentation/ScriptReference/Rigidbody.OnCollisionEnter.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Let's first look at what state and components are before we talk about stateful vs. stateless.
State
State is an object, initialized in the constructor of a class, that every stateful component must have. It is used for internal communication inside a component and allows you to create components that are interactive and reusable. It is mutable, but can only be changed by using the setState() method.
Simple Example of Stateful Component which has two properties.
import React, { Component } from 'react';

class StateExample extends Component {
  constructor() {
    super();
    this.state = {
      first_name: '',
      last_name: ''
    };
  }
  render() {
    return (
      <div>
        <p>State Component</p>
      </div>
    );
  }
}

export default StateExample;
Components
A React application is divided into smaller molecules, and each molecule represents a component. In other words, a component is the basic building of a React application. It can be either a class component or a functional component.
React components are independent and reusable, and contain JSX (JavaScript XML syntax), which is a combination of JS + HTML. A component may take props as a parameter and returns a Document Object Model (DOM) element that describes how the user interface (UI) should appear.
Class Component(Stateful)
import React, { Component } from 'react';

class StateExample extends Component {
  constructor() {
    super();
    this.state = {
      first_name: 'Shruti',
      last_name: 'Priya'
    };
  }
  render() {
    return (
      <div>
        <p>Class Component</p>
        <p>{this.state.first_name}</p>
        <p>{this.state.last_name}</p>
      </div>
    );
  }
}

export default StateExample;
Functional Component(Stateless)
import React from 'react';

function Example(props) {
  return (
    <div>
      <p>{props.first_name}</p>
      <p>{props.last_name}</p>
    </div>
  );
}

export default Example;
Stateful Components
Stateful components are components that have a state. The state is initialized in the constructor and stores, in memory, information about the component that may change, depending upon actions of the component or its child components.
Stateless Components
Stateless components are components that don't have any state at all, which means you can't use this.setState inside them. A stateless component is like a normal function with no render method. It has no lifecycle, so lifecycle methods such as componentDidMount and other hooks cannot be used. When React renders a stateless component, all it needs to do is call the component and pass down the props.
Stateful vs. Stateless
A stateless component can render props, whereas a stateful component can render both props and state. A significant thing to note here is the syntax distinction: in stateless components, props are displayed like {props.name}, but in stateful components, props and state are rendered as {this.props.name} and {this.state.name} respectively. A stateless component's output depends only on the props it receives, while a stateful component's output also depends on the value of its state. A functional component is always a stateless component, but a class component can be stateless or stateful.
There are many distinct names to stateful and stateless components.
– Container components vs Presentational components
– Smart components vs Dumb components
I'm sure you guys must have guessed, just by looking at the names, which is which. Haven't you?
State and Props used in stateful component
import React, { Component } from 'react';

class StateAndProps extends Component {
  constructor(props) {
    super(props);
    this.state = {
      value: '50'
    };
  }
  render() {
    return (
      <div>
        <p>{this.state.value}</p>
        <p>{this.props.value}</p>
      </div>
    );
  }
}

export default StateAndProps;
When should I make a component stateful or stateless?
It's pretty straightforward: make your component stateful whenever you want dynamic output (meaning the output changes whenever the state changes), or when you want to share properties of the parent component with its child components. On the other side, if there is no need for state, make the component stateless.
Conclusion
Stateless components are more elegant and usually the right choice for building presentational components: because they are just functions, they are straightforward to write, understand, and test, and there is no need for the 'this' keyword, which has always been a significant cause of confusion. Stateful components, by contrast, are harder to test, and they tend to combine logic and presentation in one single class, which is again a poor choice from a separation-of-concerns standpoint.
Keep reading and Keep learning!!!
If you’ve any doubts, please let us know through comment!!
Thank you !!!
Stay connected
|
https://www.cronj.com/blog/learn-stateful-and-stateless-components-in-reactjs/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
A typical text preprocessing pipeline:
Loads texts as strings into memory.
Splits strings into tokens, a token could be a word or a character.
Builds a vocabulary for these tokens to map them into numerical indices.
Maps all tokens in the data into indices to facilitate to feed into models.
8.2.1. Data Loading¶

We read the dataset into a list of sentences; each sentence is a string. Here we ignore punctuation and capitalization.
String tokens are inconvenient to be used by models, which take numerical inputs. Now let's build a dictionary, often called a vocabulary as well, to map string tokens into numerical indices starting from 0. To do so, we first count the unique tokens in all the documents, together called the corpus, and then assign a numerical index to each unique token according to its frequency. Rarely appearing tokens are often removed to reduce the complexity. A token that doesn't exist in the corpus, or that has been removed, is mapped into a special unknown ("<unk>") token. We optionally add another three special tokens: "<pad>", a token for padding; "<bos>", to mark the beginning of a sentence; and "<eos>", for the end of a sentence.
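As a rough sketch of that counting-and-indexing idea, a toy vocabulary might look like this (a simplified stand-in, not the actual d2l implementation; `build_vocab` and its defaults are invented here):

```python
import collections

def build_vocab(tokens, min_freq=1,
                reserved=("<unk>", "<pad>", "<bos>", "<eos>")):
    # Count every token in the corpus, then hand out indices by frequency,
    # keeping the reserved special tokens at the front (so "<unk>" is 0).
    counter = collections.Counter(tokens)
    idx_to_token = list(reserved)
    for token, freq in counter.most_common():
        if freq >= min_freq:
            idx_to_token.append(token)
    return {tok: i for i, tok in enumerate(idx_to_token)}

vocab = build_vocab("the time machine the time".split())
# Unknown tokens fall back to the "<unk>" index.
print(vocab["the"], vocab.get("traveller", vocab["<unk>"]))
```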
Put All Things Together¶
We packaged the above code in the
load_corpus_time_machine function,
which returns
corpus, a list of token indices, and
vocab, the
vocabulary. The modification we did here is that
corpus is a single
list, not a list of token lists, since we do not need the sequence
information in the following models. Besides, we use character tokens to
simplify the training in later sections.
Documents are preprocessed by tokenizing the words or characters and mapping them into indices.
8.2.6. Exercises¶
Tokenization is a key preprocessing step, and it varies for different languages. Try to find three other commonly used methods to tokenize sentences.
|
http://classic.d2l.ai/chapter_recurrent-neural-networks/text-preprocessing.html
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
Yet Another Unsolicited "Intro to Data Analysis in Python Using Pandas" Post
Todd Birchard
Originally published at
hackersandslackers.com
on
・7 min read
Let’s face it: the last thing the world needs is another “ Intro to Pandas ” post. Anybody strange enough to read this blog surely had the same reaction to discovering Pandas as I did: a manic euphoria that can only be described as love at first sight. We wanted to tell the world, and that we did. A lot. Yet here I am, about to helplessly sing cliche praises one more time.
I'm a prisoner of circumstance here. As it turns out, the vast (and I mean _ vast _) majority of our fans have a raging Pandas addiction. They come to our humble mom-and-pop shop here at Hackers and Slackers foaming at the mouth, falling into absolute raging benders for all Pandas-related content. If I had half a brain, I'd rename this site Pandas and Pandas and delete all non-Pandas-related content. Talk about cash money.
As a middle-ground, I’ve decided to do a bit of housekeeping. My previous “Intro to Pandas” post was an unflattering belligerent mess jotted into a Confluence instance long ago during a Friday night pregame. That mess snuck its way on to this blog, and has gone virtually unnoticed for a year now. I've decided that this probably wasn't the best way to open up a series about the most influential Python library of all time. We're going to try this one more time. For the Pandas.
Intro to Pandas Released: in IMAX 8k 4D
Pandas is used to analyze and modify tabular data in Python. When we say “tabular data,” we mean any instance in life where data is represented in a table format. Excel, SQL databases, shitty HTML tables.... they’ve all been the same thing with different syntax this whole time. Pandas can achieve anything that any other table can.
If you’re reasonably green to data analysis and are experiencing the “oh-my-God-all-data-professions-are-kinda-just-Excel” realization as we speak, feel free to take a moment. Great, that’s behind us now.
The Anatomy of a DataFrame
Tabular data in Pandas is referred to as a “DataFrame.” We can’t call everything “tables-” otherwise, our choice of vague terminology would grow horribly confusing when we refer to data in different systems. Between you and me though, DataFrames are basically tables.
So how do we represent two-dimensional data via command line: a concept which inherently interprets and displays information one-dimensionally?
DataFrames consist of individual parts which are easy-to-understand at face value. It’s the complexity of these things together, creating a sum greater than the whole of its parts, which fuels the seemingly endless power of DataFrames. If we want any hope of contributing to the field of Data Science, we need to not only understand the terminology but at least be aware of core concepts of what a DataFrame is beneath the hood. This understanding is what separates engineers from Excel monkeys.
Engineers 1 Win
Excel 0 Lose
Parts of a DataFrame
Were you expecting this post just to be a bunch of one-liners in Pandas? Good, I hope you're disappointed. Strap yourself in, we might actually learn something today. Class is now in session, baby. Let's break apart what makes a DataFrame, piece-by-piece:
The most basic description of any table would be a collection of columns and rows. Thinking abstractly, we can use the same definition for columns as we do for rows: a sequence of values separated by cells. The only distinction between the two is the direction (horizontal or vertical). Considering we can flip any table on its side and retain the same meaning, we could almost argue that rows and columns are actually the same thing. In Pandas, that's exactly what's happening: both rows and columns are considered to be a Series.
- Series' are objects native to Pandas (and Numpy) which refer to one-dimensional sequences of data. Another example of a one-dimensional sequence of data could be an array, but series' are much more than arrays: they're a class of their own for many powerful reasons, which we'll see in a moment.
- Axis refers to the 'direction' of a series, or in other words, whether a series is a column or a row. A series with an axis of 0 is a row, whereas a series with an axis of 1 is a column. This should help break the conception that columns are separate entities from rows: instead, they're the same object with a different attribute.
- A series contains labels, which are the given visual names for a row/column. Specifying labels allows us to call upon any labeled series in the same way we would access a value in a Python dictionary. For instance, accessing dataframe['awayTeamName'] returns the entire column matching the header "awayTeamName".
- Every row and column has a numerical index. Most of the time, a row's label will be equivalent to the row's index. While it's common practice to define headers for columns, columns have indexes as well, which simply aren't shown. In this regard, series share an attribute with lists/arrays, in that they are a collection of indexed values.
Consider the last two points: we just described a series to work the same way as a Python dictionary, but also the same way as a Python list. That's right: series' objects are like the biracial offspring of lists and dicts. We can access any column by either its name or its index, and the same goes for rows. Even if we rip a column out from a DataFrame, each cell in that series will still retain the row labels for every cell. This means we can say things like get me column #3, and then find me the value for whatever was in the row labeled "Y". Of course, this works in the reverse as well. It's crazy how things get exponentially more powerful and complicated when we add entire dimensions, isn't it?
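To make that list/dict hybrid behaviour concrete, here is a small sketch (the frame, labels and values are invented for illustration): grab column #2 by its numerical index, then read the value for the row labeled "Y" by name.

```python
import pandas as pd

df = pd.DataFrame(
    {"homeTeamName": ["Cubs", "Indians"],
     "awayTeamName": ["Reds", "Astros"]},
    index=["X", "Y"],
)
col = df.iloc[:, 1]   # column #2, fetched like a list, by position
print(col.loc["Y"])   # its cell for row "Y", fetched like a dict, by label
```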
If you've made it this far, you've earned the right to start getting hands-on. Luckily, Pandas has plenty of methods to load tabular data into DataFrames, regardless if you're using static files, SQL, or quirkier methods, Pandas has you covered. Here are some of my favorite examples:
import pandas as pd

# Reads a local CSV file.
csv_df = pd.read_csv('data.csv')
# Similar to above
excel_df = pd.read_excel('data.xlsx')
# Creating tabular data from non-tabular JSON
json_df = pd.read_json('data.json')
# Direct db access utilizing SQLAlchemy
sql_df = pd.read_sql('SELECT * FROM blah', con=sqlalchemy_engine)
# My personal ridiculous favorite: HTML table to DataFrame.
html_dfs = pd.read_html('examplePageWithTable.html')
# The strength of Google BigQuery: already officially supported by Pandas
gbq_df = pd.read_gbq('SELECT * FROM test_dataset.test_table', project_id)
All of these achieve the same result of creating a DataFrame. No matter what horrible data sources you may have been forced to inherit, Pandas is here to help. Pandas knows our pain. Pandas is love. Pandas is life.
With data loaded, let's see how we can apply our new knowledge of series' to interact with our data.
Finding Data in Our Dataframe
Pandas has a method for finding a series by label, as well as a separate method for finding a series by index. These methods are
.loc and
.iloc, respectively. Let's say our DataFrame from the example above is stored as a variable named
baseball_df. To get the values of a column by name, we would do the following:
home_teams = baseball_df.loc[:, 'homeTeamName']
print(home_teams)
This would return the following:
0 Cubs 1 Indians 2 Padres 3 Diamondbacks 4 Giants 5 Blue Jays 6 Reds 7 Cubs 8 Rockies 9 Yankees Name: homeTeamName, dtype: object
That's our column! We can see the row labels being listed alongside each row's value. Told ya so. Getting a column will also return the column's dtype, or data type. Data types can be set on columns explicitly. If they aren't, Pandas will generally either default to detecting that the data in the column is a float (returned for any column which only holds numerical values, regardless of the number of decimal points) or an 'object', which is a fancy catch-all meaning "fuck if I know, there's letters and shit in there, it could be anything probably." Pandas doesn't try hard on its own to discern the types of data in each field.
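A quick illustration of that default dtype detection, on a couple of toy series (invented values):

```python
import pandas as pd

# Purely numerical values are detected as a float column...
numeric = pd.Series([1, 2.5, 3])
# ...while mixed content falls back to the catch-all "object".
mixed = pd.Series(["Cubs", 188])
print(numeric.dtype, mixed.dtype)
```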
If you're thinking ahead, you might see a looming question with
.loc. Since we've established that columns and rows are the same, and we're accessing series' based on criteria that is met by both columns and rows (every table has a first row and a first column), what happens when we hand
.loc a slice of row labels? It returns every row in the range, with all the columns along for the ride:

first_rows = baseball_df.loc[0:3]
print(first_rows)

  homeTeamName awayTeamName duration_minutes
0 Cubs Reds 188
1 Indians Astros 194
2 Padres Giants 185
3 Diamondbacks Brewers 211
Ahhh, a neat little grid! This does, in fact, satisfy what we asked for, albeit in a clever, intentional way. "Clever and intentional" is actually a great way to describe Pandas as a library. This combination of ease and power is what makes Pandas so magnetic to curious newcomers.
Want another example? How about leveraging the unique attributes of series' to slice DataFrames as though they were arrays?
sliced_df = df.loc[:, 'homeTeamName':'awayTeamName']
print(sliced_df)

  homeTeamName awayTeamName
0 Cubs Reds
1 Indians Astros
2 Padres Giants
3 Diamondbacks Brewers
...Did we just do that? We totally did. We were able to slice a two-dimensional set of data by using the same syntax that we'd used to slice arrays, thanks to the power of the series object.
Welcome to the Club
There are a lot more entertaining, mind-blowing ways to introduce people to Pandas. If our goal had been sheer amusement, we would have leveraged the cookie-cutter route to Pandas tutorials: overloading readers with Pandas "tricks" displaying immense power in minimal effort. Unfortunately, we took the applicable approach to actually retaining information. Surely this model of "informational and time consuming" will beat out "useless but instantly gratifying," right? RIGHT?
Fuck it, let's just rebrand to become Pandas and Pandas next week. From now on when people want that quick fix, you can call me Pablo Escobar. Join us next time when we use Pandas data analysis to determine which private Caribbean island offers the best return on investment with all the filthy money we'll make.
And in case you were wondering: it's definitely not the Fyre festival one.
after 3 symbol add 'python' to color
u da real mvp <3
|
https://dev.to/hackersandslackers/yet-another-unsolicited-intro-to-data-analysis-in-python-using-pandas-post-2f69
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
SDN is a revolutionary idea in computer networking that ensures significant flexibility and simplicity in network control and management, apart from giving a broader scope for innovation through programmability. SDN is all about controlling the network through software or, in other words, making the network programmable.
Internet usage has increased dramatically over the past few years, while networking technology has not advanced at the same pace. SDN has been introduced as a replacement for conventional networking, to meet market requirements. Fortunately, SDN couldn't have evolved at a better time: since the use of cloud computing is increasing, it is necessary to automate configurations as much as possible.
According to the ONF (Open Networking Foundation) definition, SDN is a decoupling of the network control plane and the forwarding plane. With the demand for cloud computing increasing, SDN has evolved as the most efficient way of controlling a network, using a high-level language to make programming as flexible as possible.
Traditional networking versus SDN
The main difference between traditional networking and SDN is the way in which data is handled and forwarded. Unlike traditional networking, SDN has separate devices called SDN controllers, which control the data path. These control the path of packets arriving at OpenFlow-enabled switches such as Open vSwitch (OVS). So there has to be an interface between the forwarding plane and the control plane, which is provided by the OF (OpenFlow) protocol. SDN enables admins to control the way switches handle data, provide QoS (Quality of Service), and automate the process to make it less tedious and error-prone.
Advantages of SDN over traditional networking
SDN has many advantages over traditional networking:
a. Due to the introduction of some automation in the process of networking through SDN, scalability has been increased significantly, which is also a critical requirement of the current market.
b. Unlike conventional networking, SDN only requires one centralised control plane which offsets the cost of the forwarding plane.
c. VM migration becomes easier.
d. Automating the configuration is possible.
e. Quality of Service can be provided in a more efficient way.
As shown in Figure 2, the SDN controller provides a programmable interface to the OF switches. With the help of this interface, different network applications can be written to control, manage and offer new functionalities. A recent study of several OpenFlow implementations, conducted on a large emulated network with 100,000 hosts and 256 switches, revealed that all the controllers were able to handle about 50,000 new flow requests per second. Besides, according to research going on at Stanford University, the new architecture will support around 20 million requests per second with about 5,000 switches. Multiple controllers can be used for scalability purposes; these also allow backup controllers to overcome failures and provide almost 100 per cent uptime.
Configuring an existing basic controller (POX controller) in Python
POX is an open source development platform for Python-based software defined networking (SDN) control applications, such as OpenFlow SDN controllers. POX, which enables rapid development and prototyping, is now being more commonly used than NOX, a sister project which is in C/C++.
In this article, we'll configure a basic SDN controller (POX) in Python, and emulate it using an open source emulator called Mininet, which provides the functionality to create and operate a virtual network and control it using a controller. To set up the environment for the Mininet emulator, follow the steps described in the article that appeared in the September 2015 issue of OSFY. Quick commands to run on the terminal are given below for installation purposes (these commands have been tested on Ubuntu 14.04 and might change for other versions or distributions):
sudo apt-get install mininet
For installation from the source, type:
git clone git://github.com/mininet/mininet cd mininet mininet/util/install.sh [options]
where options include the installation of various packages along with Mininet, which can be seen using the Help option.
After the successful installation of Mininet (along with the POX controller), we can start POX using the following command:
./pox.py log.level --DEBUG misc.of_tutorial
The above command tells the POX controller to enable verbose logging and to start the of_tutorial file, which acts as an Ethernet hub right now.
Now, start the Mininet openflow tool to perform experiments using the following command line:
sudo mn --topo single,3 --mac --switch ovsk --controller remote
After successfully starting, it will show the message that switches have been connected with the {MAC address}. To verify the connections established by default, use the following command:
pingall
Here, all the hosts will be unreachable to one another. This is because we have set the remote controller, but have not initiated it or, in other words, there is no entry in the controller for how to handle incoming packets at the switches. Thus, when OF-enabled switches ask for the decision from the controller, it simply gives the command to drop the packets. Figure 3 shows a snapshot of the following command:
h1 ping -c3 h2
We can now control each and every host of the network in the emulator, virtually. For that, we can use the following command:
xterm h1 h2 h3
Three small terminal-like windows will pop up. To check that everything is working fine, we can just capture the packages. Assuming that support for Mininet in Wireshark is also installed, run the following commands:
tcpdump -XX -n -i h1-eth0
tcpdump -XX -n -i h2-eth0
Now, we are monitoring the traffic on hosts h1 and h2. Let's run the ping command in the xterm of h3:
ping -c1 10.0.0.1
After running this command, we can see the ARP requests in the h1 terminal.
Customising an existing POX controller to act as a firewall and load balancer
Till now, we were making the controller work like a hub. To make it work like a switch (a learning switch), we can edit the file of_tutorial and replace the call to act_like_hub() with act_like_switch().
An SDN based firewall
For a firewall kind of application, we need a mediator that can filter out packets based on some conditions. Let us consider an example with MAC address filtering. For that, we need to build a look-up table that holds the MAC addresses of all the devices allowed to communicate or transfer data.
To make the controller communicate with switches, the former needs to send messages to the latter. As and when a switch initiates a connection, a ConnectionUp event is fired. The tutorial code creates a Connection object, which can later be used to send messages to the switch using the function connection.send(). At that point, we can decide the default flow using the ofp_action_output function, as shown below:
out_action = of.ofp_action_output (port = of.OFPP_FLOOD)
Here, OFPP_FLOOD selects flooding, which means that packets will be forwarded out of every port except the one on which the packet arrived.
A firewall needs to restrict packets on the basis of several other criteria. For that, we need to create an object of the ofp_match class. Some important fields of this class are dl_src, dl_dst and in_port. Here, dl represents the data link layer, which means dl_src and dl_dst are the MAC addresses of the source and the destination, respectively. To send a packet, we need to send a message using ofp_packet_out. The send_packet() method can be used in of_tutorial.
Here is an example:
def send_packet(self, buffer_id, raw_data, out_port, in_port):
    """Sends the packet out of the specified port.

    If buffer_id is a valid buffer on the switch, use it; otherwise,
    send the raw data in raw_data.

    The in_port is the port number that the packet arrived on. Use
    OFPP_NONE if the packet is generated by you.
    """
    msg = of.ofp_packet_out()
    msg.in_port = in_port
    if buffer_id != -1 and buffer_id is not None:
        msg.buffer_id = buffer_id
    else:
        if raw_data is None:
            return
        msg.raw_data = raw_data
    action = of.ofp_action_output(port=out_port)
    msg.actions.append(action)
    self.connection.send(msg)
This function gives the switch the command to enter a flow table entry. Now, to take the appropriate action at the switch, we need to create an entry for the packet that we want to route. Let's say we want to route a packet coming in on port 2. We need to create a matching object for that, which is given by ofp_match. To create the entry in the switch, write the following code:
fm = of.ofp_flow_mod()
fm.match.in_port = 2
fm.actions.append(of.ofp_action_output(port = 4))
Thus, as and when a packet arrives on port 2, it will be redirected to port 4. We can definitely add more parameters as per the requirement, like idle_timeout, hard_timeout, priority and buffer_id, but we have not included all of them here, just to avoid complexity.
An SDN based load balancer using round robin scheduling
As writing a load balancer could be a tedious task, we can use a template of an existing controller which forwards a request to the available servers randomly. The template can be found at
The main role of the controller is to select the server to which incoming requests are forwarded. This part is coded in a function of the template called _pick_server, found at Line No. 190 in the standard template. Thus, we can modify that procedure to customise the default policy.
The code below refers to Line No. 190 of the file ip_loadbalancer.py:
def _pick_server (self, key, inport):
    """Pick a server for a (hopefully) new connection."""
    return random.choice(self.live_servers.keys())
The above lines show the random selection of a server for forwarding. Now we are going to change this method to remove the randomness and give the load balancer a round robin algorithm. First, we will add a new variable called selected_server to the class iplb (the class containing the above method, i.e., _pick_server()). selected_server will be an instance variable defaulting to 0, which keeps track of which server was allocated to fulfil the immediately preceding request.
So, a modified class definition of iplb will be as shown below:
class iplb:
    def __init__(self, ...):
        ...
        self.selected_server = 0
Here, __init__ is the constructor, and self inside the method refers to the instance of the object; anything prefixed with self. can be used as an instance variable (similar to instance fields in other object-oriented languages such as Java). Now, change the _pick_server method as follows:
def _pick_server (self, key, inport):
    """Pick a server for a (hopefully) new connection, round robin based."""
    all_keys = self.live_servers.keys()
    if self.selected_server == len(self.live_servers):
        self.selected_server = 0
    redirected_s = all_keys[self.selected_server]
    self.selected_server += 1
    return redirected_s
The above code first gets all the keys from self.live_servers (which is a Python dictionary, i.e., a hash map). The next line makes sure that selected_server isn't out of range with respect to the total number of servers. Finally, it gets the key of the selected server and then increments the variable for the next call.
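Stripped of the POX plumbing, the round robin selection can be exercised on its own. RoundRobinPicker below is a made-up stand-in for iplb, with a plain dict of server addresses playing the role of live_servers:

```python
class RoundRobinPicker:
    def __init__(self, live_servers):
        self.live_servers = live_servers
        self.selected_server = 0  # index of the next server to hand out

    def pick_server(self):
        all_keys = list(self.live_servers.keys())
        # Wrap around once every server has taken a turn.
        if self.selected_server >= len(all_keys):
            self.selected_server = 0
        chosen = all_keys[self.selected_server]
        self.selected_server += 1
        return chosen

picker = RoundRobinPicker({"10.0.0.1": 1, "10.0.0.2": 2, "10.0.0.3": 3})
print([picker.pick_server() for _ in range(5)])
```

Each call advances the index by one, so successive requests cycle through the servers in order instead of hitting a random one.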
|
https://opensourceforu.com/2016/01/configuring-an-sdn-controller-in-open-source-mininet-emulator/
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
From: Victor A. Wagner, Jr. (vawjr_at_[hidden])
Date: 2003-03-14 09:30:33
I appreciate the difficulties in getting a release out.
I _am_ puzzled about what behavior you wish people (who use (all | any) of
boost regularly) to use in validating problems they may encounter (most of
the time boost is in a constant state of improvement).
During these times, the usual advice is check the latest out of CVS and
verify the problem exists there.
Now we appear to be told that the "latest" isn't to be used (for filesystem
at any rate) until 1.30 is released.
Is this a rule for ALL of boost for the duration?
Enquiring minds want to know.
_Surely_ you don't want people to _quit_ testing during this pre-release
phase. That would make the whole phase irrelevant.
I suggest further, that perhaps the release mechanism be changed such that
the "how to check the latest" NEVER changes from the point of view of the
user/tester i.e. "cvs update -A -P -d" would ALWAYS get the latest believed
to be working copy.
At Friday 2003/03/14 04:53, you wrote:
>At 11:00 PM 3/13/2003, Victor A. Wagner, Jr. wrote:
>
> ".
>
>I don't know why your run hung, but the '"filesystem" isn't part of
>namespace "boost"' error is a known namespace alias bug in VC++ 7.1 final
>beta. A workaround has been checked into Boost's RC_1_30_0. I'm not
>worrying about the main trunk until 1.30.0 ships.
>
>--Beman
>
>
>_______________________________________________
|
https://lists.boost.org/Archives/boost/2003/03/45698.php
|
CC-MAIN-2020-16
|
en
|
refinedweb
|
I have the following simple task in my build:
task generateFile << {
def file = new File("$buildDir/setclasspath.sh")
file.text = "sample"
outputs.file(file)
}
task createDistro(type: Zip, dependsOn: ['copyDependencies','packageEnvironments','jar', 'generateFile']) <<{
from generateClasspathScript {
fileMode = 0755
into 'bin'
}
}
Running gradle clean build fails with:

Cannot call TaskOutputs.file(Object) on task ':generateFile' after task has started execution. Check the configuration of task ':generateFile' as you may have misused '<<' at task declaration
It's the opposite of what is being suggested in the comments: you are trying to set the outputs in the execution phase. The correct way to do what you are probably trying to do is, for example:
task generateFile {
    def file = new File("$buildDir/setclasspath.sh")
    outputs.file(file)
    doLast {
        file.text = "sample"
    }
}
|
https://codedump.io/share/J12QS4jqczPI/1/gradle-clean-erasing-my-file-prior-to-zip-task-execution
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
I would like to fill N/A values in a DataFrame in a selective manner. In particular, if there is a sequence of consecutive nans within a column, I want them to be filled by the preceding non-nan value, but only if the length of the nan sequence is below a specified threshold. For example, if the threshold is 3, then a within-column sequence of 3 or fewer nans will be filled with the preceding non-nan value, whereas a sequence of 4 or more nans will be left as is.
That is, if the input DataFrame is
2 5 4
nan nan nan
nan nan nan
5 nan nan
9 3 nan
7 9 1
then the desired output is:
2 5 4
2 5 nan
2 5 nan
5 5 nan
9 3 nan
7 9 1
Note that fillna with method='ffill' and limit=3 does not do this: the limit only caps how many values are filled in each gap, so the first 3 nans of a longer sequence would still be filled.
Working with contiguous groups is still a little awkward in pandas.. or at least I don't know of a slick way to do this, which isn't at all the same thing. :-)
One way to get what you want would be to use the compare-cumsum-groupby pattern:
In [68]: nulls = df.isnull()
    ...: groups = (nulls != nulls.shift()).cumsum()
    ...: to_fill = groups.apply(lambda x: x.groupby(x).transform(len) <= 3)
    ...: df.where(~to_fill, df.ffill())
    ...:
Out[68]:
     0    1    2
0  2.0  5.0  4.0
1  2.0  5.0  NaN
2  2.0  5.0  NaN
3  5.0  5.0  NaN
4  9.0  3.0  NaN
5  7.0  9.0  1.0
Okay, another alternative which I don't like because it's too tricky:
def method_2(df):
    nulls = df.isnull()
    filled = df.ffill(limit=3)
    unfilled = nulls & (~filled.notnull())
    nf = nulls.replace({False: 2.0, True: np.nan})
    do_not_fill = nf.combine_first(unfilled.replace(False, np.nan)).bfill() == 1
    return df.where(do_not_fill, df.ffill())
This doesn't use any
groupby tools and so should be faster. Note that a different approach would be to manually (using shifts) determine which elements are to be filled because they're a group of length 1, 2, or 3.
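Putting the accepted compare-cumsum-groupby pattern together as a runnable sketch (the frame below reproduces the question's example; `transform("size")` stands in for the answer's `transform(len)`):

```python
import numpy as np
import pandas as pd

# The example frame: NaN runs of length 2, 3 and 4 in columns 0, 1 and 2
df = pd.DataFrame([[2, 5, 4],
                   [np.nan, np.nan, np.nan],
                   [np.nan, np.nan, np.nan],
                   [5, np.nan, np.nan],
                   [9, 3, np.nan],
                   [7, 9, 1]], dtype=float)

nulls = df.isnull()
# Each switch between null/non-null starts a new group; cumsum labels the runs
groups = (nulls != nulls.shift()).cumsum()
# Length of the run each cell belongs to
run_len = groups.apply(lambda col: col.groupby(col).transform("size"))
# Fill only cells that are NaN *and* sit in a run of 3 or fewer
to_fill = nulls & (run_len <= 3)
result = df.where(~to_fill, df.ffill())
```

With a threshold of 3, the length-2 and length-3 runs get forward-filled while the length-4 run in the last column stays NaN, matching the desired output above.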
|
https://codedump.io/share/Jhn2NpZzNL6Z/1/using-fillna-selectively-in-pandas
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Sasa is shaping up to be pretty amazing. You should consider writing an OOPSLA practitioner's report next year about it. I think it would be the first time in history that I've read a practitioner's report and didn't fall asleep.
Thanks for the kind words! I'll consider that when Sasa gets a little closer to 1.0. Some of these features don't enjoy the degree of test coverage I'd like since they were just fun hacks at the time.
I'm not even sure what abstractions people find interesting. The ones I described in this post are the most innovative or unusual for .NET libraries that I've encountered, but I haven't received much feedback beyond the MIME library.
I have demo projects in mind for parsing (a small programming language), and serialization will be a good demonstration of dynamics, but I don't have a good demo in mind for the event functions. A simple concurrent program would be nice, so if you have any thoughts please let me know!
Maybe I'll contribute some documentation to Sasa, then.
I've got a few projects I am working on now as-is: (1) SQLPSX PowerShell project's monitoring API and dashboard (2) an ORM for Haskell (3) Porting Google Refine to .NET [non-trivial] (4) Porting Gource from C++ and SDL to F# with the help of the C# SDL .NET library.
I just finished another port recently from a WPF app to Silverlight (with improved abstractions and code re-use). I don't know why, but huge ports have been a fun hobby for me lately.
If you have anything you want help with, just make a list.
Cheers,
Z-Bo
Actually, maybe I could start with a tutorial on the Enum stuff in Sasa. I did some research a few months ago on all the available tools for manipulating Enum's in .NET when I wanted to add an EnumFlagsCSharpRepresentationVisualizer for Visual Studio 2008, similar to the same feature available in VS 2005+ for C++. I kind of went overboard with the research since all I needed to do was a coercive cast, but I just got really curious with the design options.
Have you seen Jon Skeet's Enum library?
Re: unconstrained-melody, yes, some of the features I implemented were inspired by Skeet's blog posts. I just took another quick look at the source there, and there is some odd code there for the enums. He builds some dictionaries for descriptions, and returns IList instead of IEnumerable, so it's definitely not as efficient or minimal as the interface Sasa provides.
As for what Sasa is missing, I think just tutorials and tests, mainly for the concurrent abstractions really. I briefly looked into using MS's CHESS concurrency checker to more rigorously verify the futures and event extensions, but haven't had time to really dig into it. That's definitely an interesting project though if you're looking for something cool to do.
Each assembly has a TODO.txt file detailing some of the plans I have, or incomplete/unsatisfactory abstractions, with a rough timeline I'm trying to stick to. You might be interested in the IL rewriting extensions I plan to add, such as pattern matching inlining, and an analysis to ensure a struct's default constructor is never called (mainly to make NonNull more useful). But anything you'd like to do is welcome!
Give me some time and space to think about the pattern matching tricks.
I'll just focus on writing tutorials for the existing stuff, since it will also push me to read more of the code.
With regards to IL Rewriting, I consider that to be a very big topic. Have you seen the CCI-Metadata project on Codeplex? It is written by the same dude @ MSFT who wrote ILMerge. He pretty much just wanted a less crappy ILMerge. He should've used F# for CCI-Metadata though. I imagine the code being simpler and executing faster.
Also, as my friend jokes, "Nemerle, C# 5.0, Today!"
Re: CCI-Metadata, thanks for the ref, it's new to me. I'm just using Mono.Cecil 2.0 for rewriting, which sounds similar though less ambitious.
I was excited about Nemerle for awhile, but it's really difficult to gauge its status, and the devs aren't very interested in developing a community.
Re: pattern matching, I wasn't looking at anything fancy, just to inline the compiler-generated delegates for matching on Either types. It certainly could get more interesting considering runtime tests and casts are more efficient than vtable dispatch, but that's further down the road.
Anyhow, let me know how the tutorials go! Feel free to submit bugs, suggestions or feature requests.
I see CCI-Metadata as fitting into the PHX (Phoenix) compiler stack...
I found a small bug in PrattParser.cs (Sasa 0.9.3); the line counts are not updated for skipped tokens (see lines 567-573).
For my purposes I also had to add a "CurrentToken" property to PrattParser ('var t' from the main loop), else I couldn't see a way to access the Nud for a token during evaluation of its Led ('Token' has already advanced). I hope that explanation makes sense. It appears to not be an issue in the examples I've seen due to the operators tending to be fixed strings. Perhaps I have missed something.
Finally, can I propose the following addition, which I found made my life a lot easier when producing scanners for my parser:
protected static Scanner ScanRegex(string regex)
{
Regex r = new Regex(@"\G" + regex);
return (input, start) =>
{
Match m = r.Match(input, start);
return m.Success ? start + m.Length : start;
};
}
Hey Peter,
Re: line counts, I believe you're correct. I'll implement a fix tonight or tomorrow.
Re: CurrentToken, can you provide a simple example? Your mention of fixed operators suggests you might be extending the symbol table dynamically with user-defined operators. I haven't applied PrattParser in this domain yet, so it might indeed be missing something important.
Re: ScanRegex, that's eminently doable. I generally try to avoid using control strings, so I'd prefer to actually add combinators for the regex matching you've found useful, but that's certainly a useful interim solution. Thanks!
I've fixed the line count bug, and also found another bug with the Grammar.Match declaration, so thanks Peter!
I've added the Regex scanner as well, with an optional argument for RegexOptions.
I haven't done anything regarding the previous token, which Peter named "CurrentToken". I know which token this refers to, but I'd like to better understand the specific circumstances where it's needed before making this change.
Assuming it's needed, I think the best solution would be to extend Token with a property designating the previous Token, so you can look back an arbitrary number of tokens.
Hi Sandro, thanks for your response.
The language I was parsing is GML, from the ICFP 2000 ray tracer task. I believe that falls into the category of "dynamic, functional programming language", which according to Doug Crockford's article should be easy with a Pratt parser.
It's really just lists of tokens (including nested lists for functions, arrays). I ended up adding a Led to every symbol:
protected Symbol ListSymbol(string id)
{
var lex = Symbol(id, 50);
lex.Led =
(parser, left) =>
list(left, parser.CurrentToken.Nud(parser));
return lex;
}
"list" is essentially 'cons' here. Notice though that I need to parse the CurrentToken, and because I need the match string, essentially, I have to call Token.Nud.
(By the way, the parser worked fine with those minor changes, and I went on to get a working ray tracer, so thanks!)
I expect I'm missing something obvious, as I'm new to Pratt parsers. Possibly, there could just be an alternate Parse method for returning a list of tokens (where there's a sort of default Led of list append).
I haven't needed this, but don't some languages allow you to apply a function f in infix position by writing 'f or `f or something? e.g. "1 'plus 2" Perhaps that would be an interesting case to prototype?
Incidentally, I think the \G I was prepending to the regex expression is important, otherwise the regex may match past the start location. It appears to be missing from the SVN version.
I added a few more convenience declarations to handle lists of values, and I committed a trivial list grammar to the test suite. It's basically just integer values and lists of integers.
I used a temporary hack to check token membership, but the essence of list parsing is there. I don't specify a Led anywhere, just a Nud on the list symbol, and no dependency on previous tokens. I basically just took the implementation of the Group declaration, and instead of just returning a single value between the open and close delimiters, I process a list of values.
It's not purely functional like your version, but the Pratt parsing algorithm is inherently imperative, so no great loss there IMO. I think a purely functional version is possible, it's just too late to think about it now. :-)
Re: regular expression and \G, I don't use regexes much, and I couldn't find much documentation on \G. Can you provide a simple test case that I can add to the test suite?
About the \G in RegEx strings, see e.g.
In short, it ensures the scanner will only match at the 'start' position given.
Given the current Grammer.RegEx method, a test would be something like:
Scanner s = RegEx("abc");
Assert(s("123abc", 2) == 2); // no match
Assert(s("123abc", 3) == 6); // match
I think the first assertion would fail as the code is right now. I suppose alternatively it could be left to the caller to add the @"\G" themselves, but that seems like an avoidable source of bugs.
Thanks for the references Peter, I just couldn't seem to find any discussion of \G. Why Match would take an index but seemingly ignore the parameter entirely unless you specify a cryptic regex option is beyond me.
I've committed the addition to Sasa.
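As an aside, the anchoring behaviour `\G` provides can be illustrated outside .NET. This is a sketch in Python (not part of Sasa): a compiled pattern's `match(string, pos)` anchors at `pos`, while `search` scans forward from it, which is exactly the discrepancy Peter's test cases describe:

```python
import re

p = re.compile("abc")

# Anchored at the start position, like Match() with a \G-prefixed pattern:
assert p.match("123abc", 2) is None      # no match anchored at index 2
assert p.match("123abc", 3).end() == 6   # match starting at index 3, ends at 6

# Unanchored scanning may match *past* the start position -- the bug \G avoids:
assert p.search("123abc", 2).start() == 3
```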
|
https://higherlogics.blogspot.com/2010/12/sasa-v093-released.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
ES6 for Django Lovers!
The Django community is not one to fall to bitrot. Django supports every new release of Python at an impressive pace. Active Django websites are commonly updated to new releases quickly and we take pride in providing stable, predictable upgrade paths.
We should be as adamant about keeping up that pace with our frontends as we are with all the support Django and Python put into the backend. I think I can make the case that ES6 is both a part of that natural forward pace for us, and help you get started upgrading the frontend half of your projects today.
The Case for ES6
As a Django developer and likely someone who prefers command lines, databases, and backends you might not be convinced that ES6 and other Javascript language changes matter much.
If you enjoy the concise expressiveness of Python, then ES6's improvements over Javascript should matter a lot to you. If you appreciate the organization and structure Django's common layouts for projects and applications provides, then ES6's module and import system is something you'll want to take advantage of. If you benefit from the wide variety of third-party packages the Python Package index makes available to you just a pip install away, then you should be reaching out to the rich ecosystem of packages NPM has available for frontend code, as well.
For all the reasons you love Python and Django, you should love ES6, too!
Well Structured Code for Your Whole Project
In any Python project, you take advantage of modules and packages to break up a larger body of code into sensible pieces. It makes your project easier to understand and maintain, both for yourself and other developers trying to find their way around a new codebase.
If you're like many Python web developers, the lack of structure between your clean, organized Python code and your messy, spaghetti Javascript code is something that bothers you. ES6 introduces a native module and import system, with a lot of similarities to Python's own modules.
import React from 'react';
import Dispatcher from './dispatcher.jsx';
import NoteStore from './store.jsx';
import Actions from './actions.jsx';
import {Note, NoteEntry} from './components.jsx';
import AutoComponent from './utils.jsx';
We don't benefit only from organizing our own code, of course. We derive an untold value from a huge and growing collection of third-party libraries available in Python and often specifically for Django. Django itself is distributed in concise releases through PyPI and available to your project thanks to the well-organized structure and the distribution service provided by PyPI.
Now you can take advantage of the same thing on the frontend. If you prefer to trust a stable package distribution for Django and other dependencies of your project, then it is a safe bet to guess that you are frustrated when you have to "install" a Javascript library by just unzipping it and committing the whole thing into your repository. Our Javascript code can feel unmanaged and fragile by comparison to the rest of our projects.
NPM has grown into the de facto home of Javascript libraries and grows at an incredible pace. Consider it a PyPI for your frontend code. With tools like Browserify and Webpack, you can wrap all the NPM installed dependencies for your project, along with your own organized tree of modules, into a single bundle to ship with your pages. These work in combination with ES6 modules to give you the scaffolding of modules and package management to organize your code better.
A Higher Baseline
This new pipeline allows us to take advantage of the language changes in ES6. It exposes the wealth of packages available through NPM. We hope it will raise the standard of quality within our front-end code.
This raised bar puts us in a better position to continue pushing our setup forward.
How Caktus Integrates ES6 With Django
Combining a Gulp-based pipeline for frontend assets with Django's runserver development web server turned out to be straightforward when we inverted the usual setup. Instead of teaching Django to trigger the asset pipeline, we embedded Django into our default gulp task.
Now, we set up livereload, which reloads the page when CSS or JS has been changed. We build our styles and scripts, transforming our Less and ES6 into CSS and Javascript. The task will launch Django's own runserver for you, passing along --port and --host parameters. The rebuild() task delegated to below will continue to monitor all our frontend source files for changes to automatically rebuild them when necessary.
// Starts our development workflow
gulp.task('default', function (cb) {
  livereload.listen();
  rebuild({
    development: true,
  });
  console.log("Starting Django runserver http://" + argv.address + ":" + argv.port + "/");
  var args = ["manage.py", "runserver", argv.address + ":" + argv.port];
  var runserver = spawn("python", args, {
    stdio: "inherit",
  });
  runserver.on('close', function (code) {
    if (code !== 0) {
      console.error('Django runserver exited with error code: ' + code);
    } else {
      console.log('Django runserver exited normally.');
    }
  });
});
Integration with Django's collectstatic for Deployments
Options like Django Compressor make integration with common Django deployment pipelines a breeze, but you may need to consider how to combine ES6 pipelines more carefully. By running our Gulp build task before collectstatic and including the resulting bundled assets — both Less and ES6 — in the collected assets, we can make our existing Gulp builds and Django work together very seamlessly.
References
- GulpJS ()
- ES6 Features ()
- Django Project Template (), maintained by Caktus
|
https://www.caktusgroup.com/blog/2016/05/02/es6-django-lovers/
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
README: Angular 2 has changed significantly since this post was written. As such, please do not use this code verbatim. Instead, focus on the concepts below and then map them to the new syntax and API of Angular 2.0.0..
For many, Angular 2 represents a massive change to the framework they love (or love to hate). Everyone spent so much time learning the odd language of v1 (
directive, anyone?), understanding how scopes and digest cycles work, spending time debugging
ngModel issues, and trying to figure out the perfect folder structure, and now almost all of that is changing.
Trust me, it’s for the best.
Angular 2 Series
The Ionic team is one of the earliest adopters of Angular 2 for a large project. Because of that, we are learning a lot about the intricacies, limitations, and power of Angular 2. We know that in order to get the community to embrace Angular 2, we need to start sharing our experiences and educating frontend developers on Angular 2.
Starting this week, we are going to do a series of short posts on Angular 2. These posts will cover various parts of the framework, how to use it, and where to get help.
Today, we will start with an intro to the framework, getting everything installed, and trying out the samples.
Getting started
Let’s follow the quickstart guide on the official Angular 2 docs, but with some added color and commentary.
First, create a project folder, and clone the quickstart repo into it:
mkdir myApp
cd myApp
git clone git@github.com:angular/quickstart.git
HTML
Create a new
index.html file with this:
<!-- index.html -->
<html>
  <head>
    <title>Angular 2 Quickstart</title>
    <script src="/quickstart/dist/es6-shim.js"></script>
  </head>
  <body>
    <!-- The app component created in app.es6 -->
    <my-app></my-app>
    <script>
      // Rewrite the paths to load the files
      System.paths = {
        'angular2/*': '/quickstart/angular2/*.js',       // Angular
        'rtts_assert/*': '/quickstart/rtts_assert/*.js', // Runtime assertions
        'app': 'app.js'                                  // The my-app component
      };
      // Kick off the application
      System.import('app');
    </script>
  </body>
</html>
Your first response should be, “What’s with all these weird
System.* things?” System just adds ES6 module loading support to the browser. It’s worth noting that, like much of the boilerplate in Angular 2 right now, this is going to go away (or at least we will abstract it out in Ionic 2, so you don’t ever have to see it). Learned, and promptly forgotten.
Everything else should look pretty familiar, minus the fact that we don’t have an
ng-app anywhere.
Javascript
Next, we need to write some ES6!
So, create a file called
app.js with this code in it. (the docs use
.es6, but I don’t recommend that extension, and it doesn’t seem to be catching on).
import {Component, View, bootstrap} from 'angular2/angular2';

// Annotation section
@Component({
  selector: 'my-app'
})
@View({
  inline: '<h1>Hello {{ name }}</h1>'
})
// Component controller
class MyAppComponent {
  constructor() {
    this.name = 'Alice';
  }
}

bootstrap(MyAppComponent);
The interesting thing about this is how we specify the app component. The
bootstrap(MyAppComponent) call tells the app to start, much like
ng-app did. Except, in this case, we provided the actual component that starts the app.
Let’s test this!
If you don’t have a local HTTP server installed, we can use
npm install -g http-server or
python -m SimpleHTTPServer. It’s up to you, but I recommend getting one and learning how to use it.
http-server
Open in your browser, and you should see this:
That’s it!
TypeScript?
For the sake of simplicity, the starter project uses a pre-built version of Angular 2 from Traceur.
However, the project is moving to TypeScript right as we speak, which will make the whole toolchain a lot simpler. Suffice to say, you don’t need to learn Traceur or even remember the name for much longer.
Next up: Components
In the file above, we created our first Component. Components are the core of Angular 2, and replace the delicate mess of Controllers, Scopes, and Directives as we knew them from v1.
See the next post in the series, Intro to Angular 2 Components to learn more about the new component system!
|
https://blog.ionic.io/angular-2-series-introduction/
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
08-21-2012 12:39 PM - edited 08-21-2012 05:08 PM
I have created a .aspx page and I have added C# code from the first example in the SDK to it - "Getting a table name" - to try and make a connection to the database and log in.
This is the code that I have right now:
<%@ Page Language="C#" %>
<%@ Import Namespace="Act.Framework" %>
<script runat="server">
ActFramework ACTFM = new ActFramework();
ACTFM.LogOn("D:\\ACTDB\\SchillerGroundsCareDev.pad","username","password");
</script>
I have been going through forum posts and found a few that said to copy certain .dll files over from the /Global Cache/ folder to the /Program Files/ ACT/ directory and I did this. I found that it was necessary to use this line for referencing the ACT.Framework assembly or it would throw an error.
<%@ Import Namespace="Act.Framework" %>
I'm' now getting an error on this line of code:
ACTFM.LogOn("D:\\ACTDB\\SchillerGroundsCareDev.pad","username","password);
The error that I'm getting is as follows:
CS1519: Invalid token '(' in class, struct, or interface member declaration
Is there something wrong with the way that I am passing the path to the .pad file? What else could be wrong that would cause this line to throw an error? Could I be missing a reference to an assembly?
I am able to connect to the same data source using Visual Studio using this information (from my .udl file):
Provider=ACTOLEDB2.1;Data Source=D:\ACTDB\SchillerGroundsCareDev.pad;User ID=ACTService;Password=!act2012sgc!#;Persist Security Info=True
Any help that might get me pointed in the right direction is appreciated.
Thanks!
08-22-2012 06:59 AM
I could be wrong, but just at a glance it appears there's a quotation mark missing after password.
08-22-2012 11:23 AM
That was just me not copying the entire line when I pasted it into my post.
Here is the same code, simplified even further to try to catch any syntax errors:
<%@ Page Language="C#" %>
<%@ Import Namespace="Act.Framework" %>
<%@ Import Namespace="Act.Shared.Collections" %>
<script runat="server">
ActFramework ACTFM = new ActFramework();
string pathDB = "D:\\ACTDB\\SchillerGroundsCareDev.pad";
string userID = "username";
string pwd = "password";
ACTFM.LogOn(pathDB, userID, pwd);
</script>
I am getting an error for this line:
ACTFM.LogOn(pathDB, userID, pwd);
What could I be doing wrong? Could there be a problem with how I am passing the path to the database?
08-22-2012 12:30 PM
Check out this page. It talks about the generic "Invalid Token" error...
-- Jim
05-23-2014 07:29
05-23-2014 11:26 AM
05-23-2014 11:28 AM
You can get the DLL from the ACT extracted installation files folder:
C:\<wherever you extracted the files to>\Sage ACT! 2013 Premium\ACT_Premium_2013\ACTWG\GlobalAssemblyCache
Then you can put the file in a folder called ActReferences or something easy to find.
Then you just have to add the reference to the DLL in your project.
Once you have that reference to that DLL you can add the using statements:
using Act.Framework;
using Act.Framework.Contacts;
using Act.Framework.Groups;
and so on.
05-26-2014 01:30 AM
Thanks a lot. Now I found them
|
https://community.act.com/t5/Act-Developer-s-Forum/Beginner-help-error-connecting-with-SDK/td-p/215992
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
namespace boost { namespace math {

template <class T1, class T2>
calculated-result-type beta(T1 a, T2 b);

template <class T1, class T2, class Policy>
calculated-result-type beta(T1 a, T2 b, const Policy&);

}} // namespaces
The beta function is defined by B(a, b) = Γ(a)Γ(b) / Γ(a+b). The return type of these functions is computed using the result type calculation rules when T1 and T2 are different types.
The following table shows peak errors for various domains of input arguments, along with comparisons to the GSL-1.9 and Cephes libraries. Note that only results for the widest floating point type on the system are given as narrower types have effectively zero error.
Note that the worst errors occur when a or b are large, and that when this is the case the result is very close to zero, so absolute errors will be very small.
A mixture of spot tests of exact values, and randomly generated test data are used: the test data was computed using NTL::RR at 1000-bit precision.
Traditional methods of evaluating the beta function either involve evaluating the gamma functions directly, or taking logarithms and then exponentiating the result. However, the former is prone to overflows for even very modest arguments, while the latter is prone to cancellation errors. As an alternative, if we regard the gamma function as a white-box containing the Lanczos approximation, then we can combine the power terms:
which is almost the ideal solution, however almost all of the error occurs in evaluating the power terms when a or b are large. If we assume that a > b then the larger of the two power terms can be reduced by a factor of b, which immediately cuts the maximum error in half:
This may not be the final solution, but it is very competitive compared to other implementation methods.
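The overflow/cancellation trade-off described above is easy to demonstrate numerically. The sketch below is in Python, not the Boost implementation: direct gamma evaluation overflows for quite modest arguments, while the log-then-exponentiate route survives but concentrates all its error in the subtraction of large logarithms:

```python
import math

def beta_direct(a, b):
    # Overflows once gamma(a) exceeds the double range (a little above 171)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_log(a, b):
    # Log-space form avoids overflow, at the cost of cancellation error
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

assert abs(beta_direct(2, 3) - 1 / 12) < 1e-12
assert beta_log(200, 200) > 0.0  # beta_direct(200, 200) would raise OverflowError
```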
The generic implementation - where no Lanczos approximation is available - is implemented in a very similar way to the generic version of the gamma function. Again in order to avoid numerical overflow the power terms that prefix the series and continued fraction parts are collected together into:
where la, lb and lc are the integration limits used for a, b, and a+b.
There are a few special cases worth mentioning:
When a or b are less than one, we can use the recurrence relations:
to move to a more favorable region where they are both greater than 1.
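The formula images for the recurrences did not survive extraction; presumably they are the standard beta identities B(a, b) = B(a+1, b)·(a+b)/a and B(a, b) = B(a, b+1)·(a+b)/b, which shift a small argument above 1. A quick numerical check in Python:

```python
import math

def beta(a, b):
    # Reference value via log-gamma
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

a, b = 0.5, 0.25
# B(a, b) = B(a+1, b) * (a + b) / a  -- moves a above 1
assert abs(beta(a, b) - beta(a + 1, b) * (a + b) / a) < 1e-9
# B(a, b) = B(a, b+1) * (a + b) / b  -- moves b above 1
assert abs(beta(a, b) - beta(a, b + 1) * (a + b) / b) < 1e-9
```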
In addition:
|
http://www.boost.org/doc/libs/1_58_0/libs/math/doc/html/math_toolkit/sf_beta/beta_function.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Hello,
I am trying to familiarize myself with the Visual C++ debugger.
In the code below, when I step into the strlen() function a small window opens up saying please enter the path for STRLEN.ASM.
I have tried looking for the file but it's not there.
How do I debug programs which contain functions from a different library like string.h?
Thanks.

Code:
// debug experiment
#include <iostream>
#include <string.h>
using namespace std;

int main(void)
{
    int num = 1;
    char letter = 't';
    char name[] = "Mark";
    int length = strlen(name);
    int age = 29;
    return (0);
}
|
https://cboard.cprogramming.com/cplusplus-programming/65674-help-debugging.html
|
CC-MAIN-2017-47
|
en
|
refinedweb
|
Copyright © 2001 W3C. This document, dated 4 October 2001, is released as a W3C Working Draft to gather public feedback before its final release as a W3C Recommendation. This document should not be used as reference material or cited as a normative reference from another document. The review period for this Working Draft is 4 weeks, ending 1 November 2001.
Please report errors in this document to www-html-editor@w3.org (archive).
This document has been produced as part of the W3C HTML Activity. The goals of the HTML Working Group (members only) are discussed in the HTML Working Group charter.
A list of current W3C Recommendations and other technical documents can be found at.
Public discussion on HTML features takes place on the mailing list www-html@w3.org (archive).
XHTML is a family of current and future document types and modules that reproduce, subset, and extend HTML 4 [HTML]. XHTML family document types are XML based, and ultimately are designed to work in conjunction with XML-based user agents. The details of this family and its evolution are discussed in more detail in the section on Future Directions.
XML™ is the shorthand for Extensible Markup Language.
The following terms are used in this specification. These terms extend the definitions in [RFC2119] in ways based upon similar definitions in ISO/IEC 9945-1:1990 [POSIX.1]:
This version of XHTML provides a definition of strictly conforming XHTML documents, which are restricted to tags and attributes from the XML and XHTML namespaces. See Section 3.1.2 for information on using XHTML with other namespaces, for instance, to include metadata expressed in RDF within XHTML documents.
A Strictly Conforming XHTML Document is a document that requires only the facilities described as mandatory in this specification. Such a document must meet all of the following criteria:
It must conform to the constraints expressed in.
Here is an example of a minimal XHTML document:
(e.g. the id attribute on most XHTML elements) as fragment identifiers.
White space is handled according to the following rules. The following characters are defined in [XML] as white space characters:
The XML processor normalizes different systems' line end codes into one single LINE FEED character, that is passed up to the application.
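As a rough, non-normative sketch of that line-end normalization step:

```python
def normalize_line_ends(text):
    # XML processors pass CRLF and bare CR to the application as a single LINE FEED
    return text.replace("\r\n", "\n").replace("\r", "\n")

assert normalize_line_ends("a\r\nb\rc\nd") == "a\nb\nc\nd"
```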
The user agent must process white space characters in the data received from the XML processor as follows:
Where the 'xml:space' attribute is set to 'preserve', white space characters must be preserved and consequently LINE FEED characters within a block must not be converted.
Where the 'xml:space' attribute is not set to 'preserve':
White space in attribute values is processed according to [XML].
In determining how to convert a LINE FEED character a user agent must meet the following rules, whereby the script of characters on either side of the LINE FEED determines the choice of the replacement. The assignment of script names to all characters is done in accordance to the Unicode [UNICODE] technical report TR#24 (Script Names).
Note (informative): Some scripts, such as HAN, HIRAGANA, KATAKANA, KHMER, LAO, MYANMAR, and THAI, do not use space characters for word boundary delimitation, but may still use these space characters for delimitation of sentences or fragments of sentences. If such a character occurs as the last character before a LINE FEED character, or as a character following a LINE FEED character, it may be eliminated by the white space processing described above. Several solutions are possible:
In attribute values, user agents will strip leading and trailing white space from attribute values and map backwards", as they are compatible with most HTML browsers. This document makes no recommendation about MIME labeling of other XHTML documents. sub-setting are defined in "Modularization of XHTML" [XHTMLMOD].
Modularization brings with it several advantages:
It provides a formal mechanism for sub-setting XHTML.
It provides a formal mechanism for extending XHTML.
It simplifies the transformation between document types.
It promotes the reuse of modules in new document types. It is likely that when the DTDs are modularized, a method of DTD construction will be employed that corresponds more closely to HTML 4.
The XHTML entity sets are the same as for HTML.
Be aware that processing instructions are rendered on some user agents. See also [HTML] Section 6.2 for more information.
Finally, note that XHTML 1.0 has deprecated the name attribute of the a, applet, form, frame, iframe, img, and map elements, and it will be removed from XHTML in subsequent versions. Applications need to adapt to this accordingly.
When an attribute value contains an ampersand, it must be expressed as a character"?> <html xmlns="" xml: <head> <title>An internal stylesheet example</title> <style.
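As a companion sketch of the ampersand rule (Python; names are ours), escaping bare ampersands without double-escaping existing references might look like:

```python
import re

# Matches an ampersand that does NOT already start an entity or
# character reference such as &amp; or &#38; or &#x26;
BARE_AMP = re.compile(r"&(?![A-Za-z]+;|#[0-9]+;|#x[0-9A-Fa-f]+;)")

def escape_attr_ampersands(value):
    # Express every bare ampersand as the character entity reference
    # "&amp;", leaving existing references intact.
    return BARE_AMP.sub("&amp;", value)
```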
https://www.talisman.org/web/xhtml-1ed2/
full fix for disconnected path (paths)
Bug Description
With the apparmor 3 RC1 upload, there is an incomplete bug fix for disconnected paths. This bug is to track that work.
This denial may be related:
Sep 23 10:10:50 localhost kernel: [40262.517799] audit: type=1400 audit(141148505
This is related to bug 1375410
I'm going to need to add attach_disconnected to the cups profile as a temporary workaround. When this bug is fixed, we need to undo that.
This bug was fixed in the package cups - 1.7.5-3ubuntu1
---------------
cups (1.7.5-3ubuntu1) utopic; urgency=medium
* debian/
- fix peer on signal rule to use /usr/sbin/
(LP: #1376611)
- temporarily use attach_disconnected to work around LP: #1373070. This
should be undone once 1373070 is properly fixed
-- Jamie Strandboge <email address hidden> Thu, 02 Oct 2014 08:22:36 -0500
To add one more data point, my Trusty server using the Utopic HWE kernel also exhibits the problem:
May 21 12:27:28 xeon kernel: [95104.918686] audit: type=1400 audit(143222564
$ apt-cache policy apparmor linux-image-
apparmor:
Installed: 2.8.95~
Candidate: 2.8.95~
Version table:
*** 2.8.95~
500 http://
100 /var/lib/
2.
500 http://
500 http://
2.
500 http://
linux-image-
Installed: 3.16.0-
Candidate: 3.16.0-
Version table:
*** 3.16.0-
500 http://
500 http://
500 http://
100 /var/lib/
rsyslog:
Installed: 7.4.4-1ubuntu2.6
Candidate: 7.4.4-1ubuntu2.6
Version table:
*** 7.4.4-1ubuntu2.6 0
500 http://
100 /var/lib/
7.
500 http://
7.4.4-1ubuntu2 0
500 http://
I'm affected by this bug too at Trusty + Vivid HWE
# lsb_release -rd
Description: Ubuntu 14.04.3 LTS
Release: 14.04
# uname -a
Linux amanda 3.19.0-42-generic #48~14.04.1-Ubuntu SMP Fri Dec 18 10:25:23 UTC 2015 i686 i686 i686 GNU/Linux
# dpkg -l | grep linux-image-generic
ii linux-image-generic 3.13.0.74.80 i386 Generic Linux kernel image
ii linux-image-
# dpkg -l | grep -e rsyslog -e apparmor
ii apparmor 2.8.95~
ii apparmor-profiles 2.8.95~
...
ii rsyslog 7.4.4-1ubuntu2.6 i386 reliable system and kernel logging daemon
# grep 'audit:' /var/log/syslog | grep DENIED
Dec 26 09:39:48 amanda kernel: [11627.614510] audit: type=1400 audit(145111198
Pavel, Déziel,
I'm reproducing the same issue with dnsmasq + openstack + neutron:
Feb 16 18:35:01 juju-inaddy-
AND when using :
/usr/sbin/dnsmasq flags=(
in usr.sbin.dnsmasq profile, I'm mitigating the problem (just as the cups patch).
I'll try reproducing using rsyslog so I can have a simple reproducer in order to bisect kernel 3.13 -> 3.19 and check what caused apparmor's regression (likely related to apparmor's filesystem labeling mechanism).
Thank you
-inaddy
I am able to reproduce this just by having the apparmor.d profile usr.sbin.rsyslogd removed from the disable/ directory.
[ 674.165128] audit: type=1400 audit(145649188
[ 674.165178] audit: type=1400 audit(145649188
OR
[ 522.429097] audit: type=1400 audit(145649172
[ 527.268883] audit: type=1400 audit(145649173
As expected, that's a totally different issue.
Please add
/dev/log r,
to your rsyslogd profile.
Yep, you're right. It was getting /dev/log from abstractions/base for write only. My bad.
Though,
https:/
Shows same issue.
Though,
For comments:
https:/
If you remove /dev/log rwx from /etc/apparmor.
Using kernel Ubuntu-3.13.x DOES NOT show any DENIALS (the Ubuntu-3.16, Ubuntu-3.19 and Ubuntu-4.2 HWE kernels do).
Using upstream kernels 3.13, 3.16, 3.19 and 4.2 DOES NOT show any DENIALS.
I wonder why only Ubuntu >= 3.16 kernels show the denials.
Okay, so, I had more time to dig a bit into this and, after some analysis, I got:
Errors being reproduced:
[1668392.078137] audit: type=1400 audit(145931178
And apparmor dnsmasq profile:
#/usr/sbin/dnsmasq flags=(
#/usr/sbin/dnsmasq flags=(complain) {
/usr/sbin/dnsmasq {
Without any flags.
And the command causing the apparmor errors:
root 16877 0.0 0.2 66416 3648 ? S 13:23 0:00 sudo /usr/bin/
It is a "sudo-like" approach from openstack (rootwrap) to execute dnsmasq in a new network namespace with different privileges.
Ubuntu kernel 3.13.X has apparmor 3 alpha 6 code: https:/
Ubuntu kernel 3.16 and 3.19 has apparmor 3 rc 1 code: https:/
From apparmor I could see that the error comes from "aa_path_name" called by either:
- path_name *
- aa_remount
- aa_bind_mount
- aa_mount_
- aa_move_mount
- aa_new_mount
- aa_unmount
- aa_pivotroot
So, since the job is being restarted by neutron (or at least it is trying to re-start it, causing the apparmor to block the access), I created a systemtap script to monitor path_name and check for dnsmasq trying to open "log" (allegedly /dev/log) file.
probe kernel.
funcname = execname();
if (funcname == "dnsmasq") {
filename = reverse_
if (filename == "log") {
printf("(%s) %s\n", execname(), filename);
print_
}
}
}
And got the backtrace from the denials:
(dnsmasq) log
0xffffffff8132deb0 : path_name+0x0/0x140 [kernel]
0xffffffff8132e413 : aa_path_
0xffffffff81337e26 : aa_unix_
0xffffffff8132c653 : apparmor_
0xffffffff812eb8a6 : security_
0xffffffff817019db : unix_dgram_
0xffffffff8164a987 : SYSC_connect+
0xffffffff8164b68e : sys_connect+
0xffffffff817700cd : system_
When trying to check if "log" could be converted to "fullpath" by using systemtap function:
return task_den...
Correct.
There are actually several ways to get disconnected paths, and this specific one is being caused by the new file namespace. The proper fix for this is delegating access to the object that would not normally be accessible; however, delegation is not available in the current releases of apparmor, and the hack of attach_disconnected is being used to work around this.
As for apparmor not complaining about disconnected path failures, it should complain unless attach_disconnected is specified. The info field in the apparmor audit message will be
info="Failed name lookup - disconnected path"
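To spot which profiles on a given machine are hitting these denials, a small log-scanning sketch can help (Python; the function and constant names are ours, not part of AppArmor's tooling):

```python
import re

DISCONNECTED = 'info="Failed name lookup - disconnected path"'

def disconnected_profiles(log_lines):
    # Collect the AppArmor profiles that log disconnected-path denials;
    # these are the candidates for the attach_disconnected workaround.
    profiles = set()
    for line in log_lines:
        if 'apparmor="DENIED"' in line and DISCONNECTED in line:
            match = re.search(r'profile="([^"]+)"', line)
            if match:
                profiles.add(match.group(1))
    return profiles
```

Feeding it the lines of /var/log/syslog (or the output of the grep commands shown earlier in this thread) yields the set of affected profiles.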
Hi,
I think bug 1594202 is another data point for this:
Jun 20 01:49:24 omicron kernel: [ 962.491873] audit: type=1400 audit(146638016
But before I close-as-dup and open a dovecot task here I'd ask if one that has worked on this issue take a look if that is true?
If so are we still supposed to add workarounds like the attach_disconnected or were there updates to this issue which didn't make it to the bug yet?
Actually the dovecot profiles are in apparmor and not dovecot source packages - so it would be an apparmor task then.
possibly. There isn't actually enough information in that bug to be sure if it is an actual namespacing issue or it is a separate bug to do with unix domain sockets.
Unfortunately the workaround of attach_disconnect is still required to deal with these issues.
Status changed to 'Confirmed' because the bug affects multiple users.
Same problem with powerdns: I can't run it with its apparmor profile, because it complains:
operation="sendmsg" info="Failed name lookup - disconnected path" error=-13 profile=
I am not an expert, but I tried to put run/systemd/
Note: I have: /usr/sbin/
But still ... (now I have only complain mode).
If I exclude pdns from systemd it works, by the way, and no wonder, as the problem seems somehow connected to systemd's journal, so it's better not to use systemd if possible, since it renders apparmor unusable in my experience :( But for sure, I would be more than happy to have a better option, rather than deleting systemd's unit file each time after upgrading pdns.
this is up-to-date Ubuntu 16.04.3 LTS 64 bit, fresh install, but I have about a dozen of servers with this problem with different daemons as well, not only powerdns.
Gábor, systemd is well-meaning in providing namespacing features so the thousands of daemons that are in the world don't have to re-implement something similar. But of course the kernel hook points used by AppArmor don't provide sufficient information to know what pathname to reconstruct when the named object isn't visible in the namespace where it was used.
Add /run/systemd/
Thanks
Here is another:
Sep 10 09:06:00 callisto kernel: audit: type=1400 audit(141033276 0.203:112): apparmor="DENIED" operation="connect" info="Failed name lookup - disconnected path" error=-13 profile="/usr/sbin/cupsd" name="run/dbus/system_bus_socket" pid=3608 comm="cupsd" requested_mask="rw" denied_mask="rw" fsuid=0 ouid=0
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1373070
In this post we will see how to add Web API support to an existing MVC project. Sometimes we have a project that started as simple ASP.NET MVC site, maybe even as one single page project, with no more functionality than to display some simple information in the screen. As time goes by new needs arise, new features are requested, and it becomes convenient to develop a Rest API that will be used by the client to query the server for more information.
First we will create a basic MVC project that represents our already-deployed MVC application; then we will see how to add Web API support to the project, and how to query the REST API from our original application using Ajax. Finally, we will look at some of the common mistakes made when adding a Web API controller to an existing, non-Web-API project.
Here are the four parts in which this tutorial is divided:
- Creating our sample MVC Project
- Adding WebAPI support to our MVC Application
- Consuming the RESTful web service
- Things that can go wrong
1.Creating our sample MVC Project
1.1.Create a new project of type ASP.NET web Application
For this example an empty ASP.NET web project will suffice. We will just check the MVC option so that the basic folder structure corresponding to an ASP.NET MVC project is created.
1.2. Adding a Controller for our Home Page
Next we need to create a controller for our Home Page in the Controllers folder. We can do that selecting Add >> Controller … …
… and choosing MVC 5 Controller Empty from the available options in the Scaffold Dialog (note that the controller name must end with “Controller”, for example HomeController)
Our Home Controller will not have much to do besides displaying the view:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace WebApplication1.Controllers
{
    public class HomeController : Controller
    {
        // GET: Home
        public ActionResult Index()
        {
            return View();
        }
    }
}
1.3. Adding the Home Page View
Now we can create a View for our Home Page; we will select Add >> View … …
… and write some simple HTML. In this case, when the browser opens our page it will display a classical "Hello World" message.
@{ Layout = null; }
<meta name="viewport" content="width=device-width">
<title>Index</title>
<h1>Hello World</h1>
1.4. Adding JQuery Support
Later in the game we want to add REST API support to our web application, and consume the RESTful webservice from the client (the end user browser). To simplify the development necessary in the client side, we will add the JQuery library to the project. We will use JQuery to do the GET/POST requests to the RESTful web service. We can either download the JQuery library from its official page, or simply include it from a CDN:
<script src=""></script>
With this we finally have a working MVC application, and we can move on to the next step: how to add WebAPI support to the MVC project that we have just created.
2. Adding WebAPI support to our MVC Application
2.1. Adding the necessary WebAPI packages
In order to add WebAPI support we need to add some extra packages to our project. That can be done using the NuGet Manager:
The bare minimum of packages that we need to install are the Microsoft WebApi package, and the Microsoft WebHost package.
Once we have accepted the License Agreement the NuGet Manager will install the required packages along with their dependencies.
2.2 Configuring the routing
We need to create a WebApiConfig.cs file in the App_Start/ folder. In this file we will define the webAPI specific routes (much as the RouteConfig.cs file is used to configure the ASP.NET routes):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;

namespace WebApplication1.App_Start
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}
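As an aside, the way a route template such as api/{controller}/{id} maps onto request paths can be sketched in a few lines (Python, purely illustrative; ASP.NET's real routing engine also handles optional segments, defaults and constraints, which this toy matcher ignores):

```python
import re

def match_route(template, path):
    # Turn each {placeholder} in the template into a named capture
    # group and try to match the whole path; returns the captured
    # segments as a dict, or None when the path does not match.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    match = re.fullmatch(pattern, path)
    return match.groupdict() if match else None

print(match_route("api/{controller}/{id}", "api/Students/3"))
# → {'controller': 'Students', 'id': '3'}
```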
In the Application_Start method of the file Global.asax.cs file, we will add a call to GlobalConfiguration.Configure; be careful to place it before the call to RouteConfig.RegisterRoutes(RouteTable.Routes):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;
using System.Web.Mvc;
using System.Web.Routing;
using WebApplication1.App_Start;

namespace WebApplication1
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            GlobalConfiguration.Configure(WebApiConfig.Register);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
        }
    }
}
2.3 Creating our WebAPI controller
Finally we are are ready to add a WebAPI controller to our project. As we did in the first part of this tutorial we will select Add >> Controller … to add a WebAPI controller to our Controller folder, but this time in the Scaffolding Dialog we must select the WebApi 2 Controller – Empty.
This will create an empty class, inheriting from ApiController, where we can define our methods. Let's add a method – GetStudents – that returns a list of students.
Here is what our Student class would look like:
public class Student
{
    public String Name { get; set; }
    public String Email { get; set; }
}
In a real life situation we would be getting our students from a database,etc., but for this example we can use a simple function that creates and returns a list of Student objects from an array of student names:
public List<Student> GetStudents()
{
    List<Student> studentList = new List<Student>();
    string[] students = { "John", "Henry", "Jane", "Martha" };
    foreach (string student in students)
    {
        Student currentStudent = new Student { Name = student, Email = student + "@academy.com" };
        studentList.Add(currentStudent);
    }
    return studentList;
}
Finally we will specify a route prefix to indicate the uri to use with all the methods in this controller:
[RoutePrefix("api/Students")]
public class WepApiController : ApiController
We can also add a route to the methods in the controller:
[Route("GetStudents")]
public List<Student> GetStudents()
This is how our WebAPI controller looks like, after all the introduced changes:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using WebApplication1.Models;

namespace WebApplication1.Controllers.webapi
{
    [RoutePrefix("api/Students")]
    public class WepApiController : ApiController
    {
        [HttpGet]
        [Route("GetStudents")]
        public List<Student> GetStudents()
        {
            List<Student> studentList = new List<Student>();
            string[] students = { "John", "Henry", "Jane", "Martha" };
            foreach (string student in students)
            {
                Student currentStudent = new Student { Name = student, Email = student + "@academy.com" };
                studentList.Add(currentStudent);
            }
            return studentList;
        }
    }
}
With this we have added Web API support to our MVC project, and we can call the method defined in our Web API controller from the end user's browser.
3.Consuming the RESTful web service
The only thing we need to do here to retrieve our student list from the Index.cshtml home page we created in the first part of this tutorial is to make a jQuery call to the URL of the service (with the routes defined above, /api/Students/GetStudents). For example:
$.get("/api/Students/GetStudents", function (data, status) { alert("Status: " + status); });
Notice how, by using the RoutePrefix decorator in the Web API controller and setting the URI to api/Students, we can now use a friendly, customized URL in the Ajax call instead of an /api/controllername/… type of URL.
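Outside the browser, the same endpoint can also be exercised from a script. Here is a hedged Python sketch (the helper names are ours; it assumes the site from section 2 is running and that the action returns a JSON array of {Name, Email} objects, which is Web API's default serialization for the Student list):

```python
import json
from urllib.request import urlopen

API_PATH = "/api/Students/GetStudents"  # RoutePrefix + Route from above

def parse_students(payload):
    # Decode the JSON array of {Name, Email} objects the action returns.
    return [(s["Name"], s["Email"]) for s in json.loads(payload)]

def get_students(base_url):
    # Call the Web API action and decode its response.
    with urlopen(base_url.rstrip("/") + API_PATH) as resp:
        return parse_students(resp.read().decode("utf-8"))
```

Calling get_students("http://localhost:port") against the running site should return the four sample students.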
4.Things that can go wrong
Actually, many things. As you may have noticed, adding a Web API controller to an existing ASP.NET MVC project is an extremely simple task that can be achieved in a few minutes. Still, all too frequently one finds that, after following all the steps, the application stubbornly refuses to work properly: our Ajax calls to the Web API service we just defined are answered with a "URL not Found" error. And since this error message is not exactly helpful when it comes to determining at which point of the process we went astray, we may end up wasting hours trying to figure out what we did wrong.
Here is a quick checklist of things to check up, if we encounter the above problem:
- Check the names of the classes, and the files: our API controller must be in a file whose name ends in Controller.cs, and the class name must also end with the word Controller
- Did you add the call to GlobalConfiguration.Configure(WebApiConfig.Register) in Application_Start? Order matters here, make sure that it is called before the RouteConfig.RegisterRoutes line.
- Use the decorators RoutePrefix and Route to specify the URL of the service and action.
- Clean and rebuild the project to make sure Visual Studio is using your last changes.
https://developerslogblog.wordpress.com/2016/12/30/adding-web-api-support-to-an-existing-asp-net-mvc-project/
This article explains how to use a certificate to connect to, for example, a web service. Do you know the floppies or USB Pens that some banks and IT Companies give to their employees or customers that permit them to connect to services via the internet using a security file and channel? Well, we will learn how to use a similar file in this article, which is divided into six parts:
For this article I used three different machines:
You can use only one Windows XP Professional machine; I use the second machine to ensure that the certification file and the software that I develop are the only two things that I must take away to connect to the web service. Good luck!
In this section we create the two projects that we will use to try our certification program. I won't explain the two projects in detail; I presume that you are expert enough to create two standard projects with Visual Studio 2005.
All right, start creating a new website project. Choose the ASP.NET Web Service template. Attention! It is important that you use a real web server and not the emulator. Therefore, if you don't have IIS Web Server or something similar installed, install it and then choose the HTTP option from the location combo of the VS project. For the name of the application, I chose. Visual Studio takes care of most of what you need; you only have to configure the website appropriately. If you build the project you'll have just one method in the service, the classic "hello world" example. I suggest we use this for our test, but if you don't like it you can create another function that you prefer. The second project that you must now create is a Windows Application. Create it in the same solution as the Web Service; I called mine testSSLWebServ. In the new project, add a Web Reference from the solution explorer and choose your web service. I called the namespace for the service remoteWebServSSL. All right, the last step is to add a button on the form and call the unique method of the Web Service that we have. Here's some code:
remoteWebServSSL.Service srv = new remoteWebServSSL.Service();
MessageBox.Show("The Web Service say: " + srv.HelloWorld());
Important: Some problems can arise from the firewall settings of the machine. Be sure to turn off the firewall; you can configure it properly later.
Now we can start.
Next, to activate the secure channel on our web site, we must go to the properties of the root Web Server and set the Server Certificate. There's more than one way to do this; you can choose the one you prefer. I chose to connect to the CA Web Server to ask for the Certificate. Therefore I must create a Certification Request now and submit it to the CA afterwards. Click on the Server Certificate button. The certificate wizard will then appear; click Next. In this dialog choose Create a New Certificate and click Next. The only option that you can choose now is "Prepare the request now, but send it later;" again click Next. In this form you must choose the name of the certificate that you will install on your web server. Leave the other parameter as it is and go on. Now write something for the organization and organization unit. Go on. This next step is important: the Common Name required is the name of your machine and is, therefore, the name of the web server that responds to your requests. For example, my machine name is WorkStatio23 and the http address that I write when I navigate on my web server is. The Common Name that I must choose is WorkStatio23. Also insert the State/province and City/locality data and then go on. As a last step, save your Certification Request; choose a location and click Next. Read the Request File Summary, click Next and Finish. The Certification Request has now been created.
Open the generated Text file on your computer from the directory where you chose to save it. Select all the text in the file and copy it. Now it's time to use Windows 2000 Server with the CA. Ensure that your CA is configured to accept the request by the Web Server and that it releases the certificate immediately, without user participation. Connect to the CA Web Server using the related address. For example, my Windows 2000 Server machine is named w2ksrw. Therefore the address to request the certification is. Next, choose the second option "Request a certificate" and click Next. Choose the "Advanced request" option and click Next again. Choose the second available option: "Submit a certificate request using a base64 encoded PKCS #10 file or a renewal request using a base64 encoded PKCS #7 file." Click Next. Since we already have the Text file code copied to memory, just paste it in the relevant textbox control (the first) and click Next. Now you are ready to download the certificate. Download the two certificates on the result page: "Download CA certificate" and "Download CA certification path."
Select the certificate file with the .p7b extension. For me, this is certnew.p7b. Double click on your file once you find it. Note that the certificate is not valid. Click the "Install certificate" button and then click Next until Finish; confirm the last message and the certificate is now installed. Now finish the activation of the SSL channel on the Web Server. Launch your IIS configuration tool, go on the root Web Server (Default Web Site), right click and select Properties. Return to the Directory Security and re-click the Server Certificate button. Click Next. Note that the options have changed. Choose the "Process the pending request and install the certificate" option and click Next. Browse for the other certificate that you obtain from the CA. Click Next two times and then Finish. Well, you are ready to use your https connection. Try your web server in https and enjoy. For this example, we must enforce the use of the SSL channel on our Web Services. Therefore, on the IIS, go to the properties of WebServSSL, choose the Directory Security tab and click the Edit button. Check the "Require secure channel (SSL)" checkbox and the "Require client certificates" option in the Client Certificates group. Close all windows and try your web services now using your browser.
This step will be fast and simple. Using your browser, go to the Web Server of the CA using the same address (). Choose the "Request a certificate" option and click Next. Choose the "Advanced request" option and then Next. Leave the "Submit a certificate request to this CA using a form" option selected and click Next. Fill in the Name and the E-Mail fields under Identifying Information with whatever you want and leave the other field as is. For the Intended Purpose field, choose Client Authentication Certificate. For the last setting, set the "Mark keys as exportable" option. Bypass the other field and click Submit. Answer "yes" to the question and finally install the certificate.
Note: In this step, I have encountered problems using the Certification Authority website. The "Downloading ActiveX Control" message did not disappear. If you have the same problem, you can resolve it by installing a fix for the Windows 2000 Advanced Server. Refer to KB323172 of the Microsoft site and find the relative fix q323172_W2K_SP4_X86_EN.exe.
You can try more tests with your example, if you'd like, before continuing with my instructions here. After you're satisfied, launch the Internet Explorer browser. Select Tools and go to Internet Options. Select the Content tab and click on the Certificates button. You can immediately see the Personal tab selected and, in the list control, you can see your personal certificate (Andy74 for me). Select the specified certificate and click on the Export button. In the wizard panel choose Next, choose the "Yes, export the private key" option and click Next again. Here you can export only in the PFX format; leave the default option and click Next. Insert a password. For simplicity, I chose a very simple password, "password." Then click Next. Choose where to export the certificate and click Next again. Click Finish and OK in the message box. Now, return to your project in Visual Studio and add this line after the creation of the web reference instance:
srv.ClientCertificates.Add(
new System.Security.Cryptography.X509Certificates.X509Certificate2(
@"c:\Andy74cert.pfx", "password"));
Obviously, change the path of the certificate to where you have saved it. Now on the machine where you have the web services, go to the administrative tools section of the control panel and launch the Server Extensions Administrator. Select File -> Add/Remove Snap-in. On the next window click the Add button, select Certificates and finally click the Add button. Select the Computer Account option and click Next. Leave the "local computer" (the computer this console is running on) option selected and click Finish. Close the active window and finally click the OK button. Open the Certificates (Local Computer) node in the tree view, then Trusted Root Certification Authorities and the Certificates node. Right click on the node: All Tasks -> Import to launch another wizard. Click Next. Remember the .p7b certificate that we downloaded from the certification authority in the fourth step. Browse to that file and select it; then click Next. Choose "Place all certificates in the following store" and then the "Trusted Root Certification Authorities" folder; click Next and then Finish. All right, you are ready to try your final application. To really test your application, you can use other machines never used up to now. Copy your application and your .pfx file (certificate) to where you want, but remember to place the certificate file in the same folder from which the application loads it, or use a parameter to load it from wherever you prefer.
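For comparison, the ClientCertificates.Add call shown above has a rough analogue in Python's standard ssl module. This is a sketch under the assumption that the .pfx bundle has first been converted to PEM (Python's ssl cannot load .pfx/.p12 directly; one conversion route is `openssl pkcs12 -in Andy74cert.pfx -out Andy74cert.pem -nodes`); the function name is ours:

```python
import ssl

def make_client_context(cert_pem, key_pem=None, password=None):
    # Build a TLS context that presents a client certificate when
    # connecting, much like adding an X509Certificate2 in the C# code.
    # Note: load_cert_chain() expects PEM files, not a .pfx bundle.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(cert_pem, keyfile=key_pem, password=password)
    return ctx
```

The resulting context can then be passed to, for example, http.client.HTTPSConnection(host, context=ctx) to call the service over the mutually authenticated channel.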
https://www.codeproject.com/Articles/18709/Using-authentication-certificates-to-connect-to-we?fid=416118&df=90&mpp=25&sort=Position&spc=Relaxed&tid=2031704
Building full screen applications
prompt_toolkit can be used to create complex full screen terminal applications. Typically, an application consists of a layout (to describe the graphical part) and a set of key bindings.
The sections below describe the components required for full screen applications (or custom, non full screen applications), and how to assemble them together.
Warning
This is going to change.
The information below is still up to date, but we are planning to refactor some of the internal architecture of prompt_toolkit to make it easier to build full screen applications. This will, however, be backwards-incompatible. The refactoring should probably be complete somewhere around mid-2017.
Running the application
To run our final full screen application, we need three I/O objects and an Application instance. These are passed as arguments to CommandLineInterface.
The three I/O objects are:
- An EventLoop instance. This is basically a while-true loop that waits for user input and, when it receives something (like a key press), sends it to the application.
- An Input instance. This is an abstraction over the input stream (stdin).
- An Output instance. This is an abstraction over the output stream, and is called by the renderer.
The input and output objects are optional. However, the eventloop is always required.
We'll come back to what the Application instance is later.
So, the only thing we actually need in order to run an application is the following:
from prompt_toolkit.interface import CommandLineInterface
from prompt_toolkit.application import Application
from prompt_toolkit.shortcuts import create_eventloop

loop = create_eventloop()
application = Application()
cli = CommandLineInterface(application=application, eventloop=loop)

# cli.run()
print('Exiting')
Note
In the example above, we don’t run the application yet, as otherwise it will hang indefinitely waiting for a signal to exit the event loop. This is why the cli.run() part is commented.
(Actually, it would accept the Enter key by default. But that's only because, by default, a buffer called DEFAULT_BUFFER has the focus; its AcceptAction is configured to return the result when accepting, and there is a default Enter key binding that calls the AcceptAction of the currently focussed buffer. However, the content of the DEFAULT_BUFFER buffer is not yet visible, so it's hard to see what's going on.)
Let’s now bind a keyboard shortcut to exit:
Key bindings
In order to react to user actions, we need to create a registry of keyboard shortcuts to pass to our Application. The easiest way to do so is to create a KeyBindingManager, and then attach handlers to our desired keys. Keys contains a few predefined keyboard shortcuts that can be useful.
To create a registry, we can simply instantiate a KeyBindingManager and take its registry attribute:
from prompt_toolkit.key_binding.manager import KeyBindingManager

manager = KeyBindingManager()
registry = manager.registry
Update the Application constructor, and pass the registry as one of the arguments:
application = Application(key_bindings_registry=registry)
To register a new keyboard shortcut, we can use the add_binding() method as a decorator of the key handler:
from prompt_toolkit.keys import Keys

@registry.add_binding(Keys.ControlQ, eager=True)
def exit_(event):
    """
    Pressing Ctrl-Q will exit the user interface.

    Setting a return value means: quit the event loop that drives the user
    interface and return this value from the `CommandLineInterface.run()` call.
    """
    event.cli.set_return_value(None)
In this particular example we use eager=True to trigger the callback as soon as the shortcut Ctrl-Q is pressed. The callback is named exit_ for clarity, but it could have been named _ (underscore) as well, because we won't refer to this name.
Creating a layout

A layout is a composition of Container and UIControl objects that describes the disposition of the various elements on the user's screen.
Various layouts can refer to Buffers that have to be created and passed to the application separately. This allows an application to have its layout changed without having to reconstruct buffers. You can imagine, for example, switching from a horizontal to a vertical split panel layout and vice versa.
There are two types of classes that have to be combined to construct a layout:
- containers (Container instances), which arrange the layout
- user controls (UIControl instances), which generate the actual content
Note

An important difference: the Window class itself is particular: it is a Container that can contain a UIControl. Thus, it's the adaptor between the two. The Window class also takes care of scrolling the content if the user control created a Screen that is larger than what was available to the Window.
Here is an example of a layout that displays the content of the default buffer
on the left, and displays
"Hello world" on the right. In between it shows a
vertical line:
from prompt_toolkit.enums import DEFAULT_BUFFER
from prompt_toolkit.layout.containers import VSplit, Window
from prompt_toolkit.layout.controls import BufferControl, FillControl, TokenListControl
from prompt_toolkit.layout.dimension import LayoutDimension as D
from pygments.token import Token

layout = VSplit([
    # One window that holds the BufferControl with the default buffer on the
    # left.
    Window(content=BufferControl(buffer_name=DEFAULT_BUFFER)),

    # A vertical line in the middle. We explicitly specify the width, to make
    # sure that the layout engine will not try to divide the whole width by
    # three for all these windows. The `FillControl` will simply fill the whole
    # window by repeating this character.
    Window(width=D.exact(1), content=FillControl('|', token=Token.Line)),

    # Display the text 'Hello world' on the right.
    Window(content=TokenListControl(
        get_tokens=lambda cli: [(Token, 'Hello world')])),
])
The previous section explained how to create an application; you can just pass the newly created layout when you create the Application instance, using the layout= keyword argument.
app = Application(..., layout=layout, ...)
The rendering flow

Understanding the rendering flow is important for understanding how Container and UIControl objects interact. We will demonstrate it by explaining the flow around a BufferControl.
Note

A BufferControl is a UIControl for displaying the content of a Buffer. A buffer is the object that holds any editable region of text. Like all controls, it has to be wrapped into a Window.
Let’s take the following code:
from prompt_toolkit.enums import DEFAULT_BUFFER
from prompt_toolkit.layout.containers import Window
from prompt_toolkit.layout.controls import BufferControl

Window(content=BufferControl(buffer_name=DEFAULT_BUFFER))
What happens when a Renderer object wants a Container to be rendered on a certain Screen?
The visualisation happens in several steps:

1. The Renderer calls the write_to_screen() method of a Container. This is a request to paint the layout in a rectangle of a certain size.
2. The Window object then requests the UIControl to create a UIContent instance (by calling create_content()). The user control receives the dimensions of the window, but can still decide to create more or less content.
3. Inside the create_content() method of the UIControl, there are several steps:
   - First, the buffer's text is passed to the lex_document() method of a Lexer. This returns a function which, for a given line number, returns the token list for that line (a list of (Token, text) tuples).
   - The token list is passed through a list of Processor objects. Each processor can do a transformation for each line. (For instance, they can insert or replace some text.)
   - The UIControl returns a UIContent instance which generates such a token list for each line.

The Window then receives the UIContent and:

- It calculates the horizontal and vertical scrolling, if applicable (if the content would take more space than what is available).
- The content is copied to the correct absolute position on the Screen, as requested by the Renderer. While doing this, the Window can possibly wrap the lines, if line wrapping was configured.
Note that this process is lazy: if a certain line is not displayed in the Window, then it is not requested from the UIContent. And from there, the line is not passed through the processors or even asked from the Lexer.
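This laziness can be sketched in plain Python. The names below are illustrative stand-ins, not prompt_toolkit's actual API: a lexer returns a function from line number to token list, and only the lines a "window" actually displays ever get lexed.

```python
# Stand-in sketch of the lazy rendering pipeline described above.
calls = []  # record which lines were actually lexed

def lex_document(lines):
    """Return a function mapping a line number to its token list."""
    def get_tokens_for_line(lineno):
        calls.append(lineno)  # lexing happens only when a line is requested
        return [("Token.Text", lines[lineno])]
    return get_tokens_for_line

lines = ["line %d" % i for i in range(1000)]
get_line = lex_document(lines)

# A "window" showing only lines 10..14 requests exactly those lines;
# the other 995 lines are never lexed.
visible = [get_line(i) for i in range(10, 15)]
print(len(calls))
```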
Input processors

A Processor is an object that processes the tokens of a line in a BufferControl before they are passed to a UIContent instance.
Some built-in processors:
The Application instance

The Application instance is where all the components for a prompt_toolkit application come together.
Note
Actually, not all the components; just everything that is not dependent on I/O (i.e. all components except for the eventloop and the input/output objects).
This way, it’s possible to create an
Application instance and later decide
to run it on an asyncio eventloop or in a telnet server.
from prompt_toolkit.application import Application

application = Application(
    layout=layout,
    key_bindings_registry=registry,

    # Let's add mouse support as well.
    mouse_support=True,

    # For fullscreen:
    use_alternate_screen=True)
We are talking about full screen applications, so it’s important to pass
use_alternate_screen=True. This switches to the alternate terminal buffer.
Filters (reactivity)

Many places in prompt_toolkit expect a boolean. For instance, for determining the visibility of some part of the layout (it can be either hidden or visible), or a key binding filter (the binding can be active or not), or the wrap_lines option of BufferControl, etc.
These booleans, however, are often dynamic and can change at runtime. For instance, the search toolbar should only be visible when the user is actually searching (when the search buffer has the focus). The wrap_lines option could be changed with a certain key binding. And that key binding could only work when the default buffer has the focus.
In prompt_toolkit, we decided to reduce the amount of state in the whole framework, and apply a simple kind of reactive programming to describe the flow of these booleans as expressions. (It’s one-way only: if a key binding needs to know whether it’s active or not, it can follow this flow by evaluating an expression.)
There are two kinds of expressions:

- SimpleFilter, which wraps an expression that takes no input, and evaluates to a boolean.
- CLIFilter, which takes a CommandLineInterface as input.

Most code in prompt_toolkit that expects a boolean will also accept a CLIFilter.
One way to create a CLIFilter instance is by creating a Condition. For instance, the following condition will evaluate to True when the user is searching:
from prompt_toolkit.filters import Condition

is_searching = Condition(lambda cli: cli.is_searching)
This filter can then be used in a key binding, like in the following snippet:
from prompt_toolkit.key_binding.manager import KeyBindingManager

manager = KeyBindingManager.for_prompt()

@manager.registry.add_binding(Keys.ControlT, filter=is_searching)
def _(event):
    # Do something, but only when searching.
    pass
There are many built-in filters, ready to use:
HasArg
HasCompletions
HasFocus
InFocusStack
HasSearch
HasSelection
HasValidationError
IsAborting
IsDone
IsMultiline
IsReadOnly
IsReturning
RendererHeightIsKnown
Further, these filters can be chained with the & and | operators, or negated with the ~ operator.
Some examples:
from prompt_toolkit.key_binding.manager import KeyBindingManager
from prompt_toolkit.filters import HasSearch, HasSelection

manager = KeyBindingManager()

@manager.registry.add_binding(Keys.ControlT, filter=~is_searching)
def _(event):
    # Do something, but not when searching.
    pass

@manager.registry.add_binding(Keys.ControlT, filter=HasSearch() | HasSelection())
def _(event):
    # Do something, but only when searching or when there is a selection.
    pass
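The way these combinators compose can be illustrated with a minimal stand-in class. This is not prompt_toolkit's actual Filter implementation, just a sketch of the & / | / ~ pattern; FakeCLI and its attributes are invented for the demo.

```python
class Filter:
    """Minimal stand-in for prompt_toolkit's filter combinators."""
    def __init__(self, func):
        self.func = func

    def __call__(self, cli):
        return self.func(cli)

    def __and__(self, other):
        return Filter(lambda cli: self(cli) and other(cli))

    def __or__(self, other):
        return Filter(lambda cli: self(cli) or other(cli))

    def __invert__(self):
        return Filter(lambda cli: not self(cli))

class FakeCLI:
    # A fake CommandLineInterface with just the state the filters read.
    is_searching = True
    has_selection = False

is_searching = Filter(lambda cli: cli.is_searching)
has_selection = Filter(lambda cli: cli.has_selection)

cli = FakeCLI()
print((is_searching | has_selection)(cli))   # True
print((is_searching & has_selection)(cli))   # False
print((~is_searching)(cli))                  # False
```

The real classes work the same way: an expression like HasSearch() | HasSelection() builds a new filter object, which is evaluated lazily whenever the framework needs the boolean.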
https://python-prompt-toolkit.readthedocs.io/en/latest/pages/full_screen_apps.html
gnutls_db_set_remove_function(3)

NAME
gnutls_db_set_remove_function - API function

SYNOPSIS
#include <gnutls/gnutls.h>

void gnutls_db_set_remove_function(gnutls_session_t session, gnutls_db_remove_func rem_func);

ARGUMENTS
gnutls_session_t session is a gnutls_session_t type.
gnutls_db_remove_func rem_func is the function.

DESCRIPTION
Sets the function that will be used to remove data from the resumed sessions database. This function must return 0 on success. The first argument to rem_func will be null unless gnutls_db_set_ptr() has been called.
http://man7.org/linux/man-pages/man3/gnutls_db_set_remove_function.3.html
Open new link in Meteor + Blaze
I have some trouble. I have a first page in Meteor, and my second page is in the same folder as my first page.
My first page HTML:

<body>
  <div class="container">
    <header>
      <h1>Todo List</h1>
    </header>
    <a href="/secondPage">{{> test}}</a>
  </div>
</body>

<template name="chuong">
  <ul>
    {{#each chuongs}}
      <li>{{Chuong_ID}}, {{Truyen_ID}}</li>
    {{/each}}
  </ul>
</template>
My first page JavaScript:

import { Template } from 'meteor/templating';
import { Chuong } from '../api/chuong.js';
import './doctruyen.html';

Template.chuong.helpers({
  chuongs() {
    return Chuong.find({});
  },
});
My second page HTML:

<body>
  <h1>MY SECOND PAGE</h1>
</body>
On the first page, when I click an item, it should show the second page. Thanks for the help!
1 answer
- answered 2017-10-11 20:35 Dom
It's best to use a router to have multiple, linked pages in Meteor. While there are a few you could use, my preference (and a common standard) is iron:router.
There are pretty good examples on the above-linked page and in the Iron Router Guide, but here are some entry-level concepts to get your mind around things:
- You don't need to put <body> tags everywhere. Any <body> tag in an HTML file will be inserted into all rendered pages by default. The same is true of <head> tags.
- Each "Page" needs a template (as you've successfully defined with Template#chuong). I like to put my templates all in their own HTML files, but you can put templates anywhere inside your "client" directory. You can also add common layouts which you'll read about in the Iron Router documentation.
- Each "Page" also needs a "Route", which can be defined in a JavaScript file anywhere in your project, excluding server-only directories (like the "server" and "private" folders, for example).
Once the above is handled, you should be able to link between pages the same way you usually would, using standard anchor tags (href="/routename").
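To make the answer concrete, route definitions with iron:router look roughly like this. The Router object below is a tiny stand-in so the snippet runs outside Meteor; in a real app you'd run `meteor add iron:router` and use the global Router the package provides. The route paths and template names are assumptions based on the question.

```javascript
// Stand-in for iron:router's global Router, so this sketch is self-contained.
const routes = {};
const Router = {
  route(path, options) { routes[path] = options; },
};

// With iron:router, each "page" is a route that renders a template.
// The templates ('chuong', 'secondPage') would live in your HTML files
// as <template name="...">.
Router.route('/', { template: 'chuong' });               // first page
Router.route('/secondPage', { template: 'secondPage' }); // second page

console.log(Object.keys(routes)); // the registered paths
```

Then the anchor `<a href="/secondPage">` on the first page navigates to the second page without any extra JavaScript.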
http://quabr.com/46685504/open-new-link-in-meteor-blaze
I learned a little more about stack traces in .NET today… in a very painful manner… but, lesson learned! Hopefully someone else will be able to learn this lesson without having to spend 4 hours on it like I did. Take a look at this stack trace that I was getting in my Compact Framework app, today:
1: System.NullReferenceException
2: Message="NullReferenceException"
3: StackTrace:
4: at TrackAbout.Mobile.UI.CustomControls.TABaseUserControl.<EndInit>b__8(Object o, EventArgs e)
5: at System.ComponentModel.Component.Dispose(Boolean disposing)
6: at System.Windows.Forms.Control.Dispose(Boolean disposing)
7: at TrackAbout.Mobile.UI.Views.BaseForms.TAForm.Dispose(Boolean disposing)
8: at TrackAbout.Mobile.UI.Views.GeneralActions.SetExpirationDateView.Dispose(Boolean disposing)
9: at System.ComponentModel.Component.Dispose()
10: etc.
11: etc.
12: ...
In this stack trace, on line 4, there is one very important detail: an anonymous method signature and the parent method that defines it. After several hours of debugging, and finally turning on the "catch all" feature for CLR exceptions in Visual Studio, I discovered that line 4 actually translates to this code:
1: public virtual void EndInit()
2: {
3: ParentForm = FormUtils.GetParentFormOf(this) as TAForm;
4: if (ParentForm == null) return;
5:
6: ParentForm.Closing += FormClose;
7: ParentForm.Activated += (o, e) => ParentActivatedHandler(o, e);
8: ParentForm.Deactivate += (o, e) => ParentDeactivateHandler(o, e);
9: ParentForm.Disposed += (o, e) => ParentDisposedHandler(o, e);
10:
11: if (ControlEndInit != null)
12: {
13: ControlEndInit(this, EventArgs.Empty);
14: }
15: }
Let me translate this line of the stack trace into this method… the namespace in the stack trace is obvious… so is the class name. The first part to note is the <EndInit>. Apparently this means that the EndInit method contains the code that is throwing the exception, but is not actually firing the code that is causing the exception. The next part is where we find what is throwing the exception. Apparently b__8(Object o, EventArgs e) tells me that the failing code in question is an anonymous method. The CLR naming of this method seems cryptic, but also seems like it might be something useful…
Examining the entire method call, TrackAboutControl.<EndInit>b__8(Object o, EventArgs e), what I understand this to be saying is: "The EndInit method is defining an anonymous method with a standard event signature at line 8 of the method." Now I'm not entirely sure that "line 8 of the method" is what this anonymous method name means… but it fits in this case… it matches up to the line that was causing the problem.
The problem in this specific case was that this line had a null reference: ParentForm.Disposed += (o, e) => ParentDisposedHandler(o, e);
The ParentDisposedHandler is defined as an event earlier in the class, and since it had no subscribers, it was null. That was easy to fix… just add a null ref check or define the event with a default value of “= delegate{ }”.
So… 4 hours into debugging this issue, it turned out to be 1 line of anonymous method calls. The stack trace was cryptic and confusing to me at first. I hope to retain this lesson, and hope to be able to pass it on to someone else who sees a cryptic stack trace such as this, and save them the same heartache and headache that I went through today.
http://lostechies.com/derickbailey/2010/03/19/net-stack-traces-and-anonymous-methods/
"Attila Lendvai" <attila.lendvai@...> writes:
> +(defun restart-frame (frame)
> + "Try to restart execution at FRAME. If it's not possible (e.g. due to
> +inadequate debug level) then this function returns with (VALUES) ."
> + (when (frame-has-debug-tag-p frame)
> + ;; FIXME: Getting the first entry in FRAME-CALL-AS-LIST is just plain
> + ;; wrong but sometimes accidentally work. Find a way to get hold of the
> + ;; function of the current frame and don't forget about fixing Slime either.
Getting hold of the function would be the wrong thing to do. The
workflow of the people who use RESTART-FRAME is to fix a bug, redefine
the function, and then restart the frame. If you use the function
object corresponding to the frame, you'll be restarting the buggy
version instead of the fixed version.
Sure, this means that only functions visible in the global function
namespace are restartable. But that's pretty far from "only sometimes
accidentally works".
--
Juho Snellman
http://sourceforge.net/p/sbcl/mailman/message/18830579/
28 February 2013 22:53 [Source: ICIS news]
HOUSTON (ICIS)--Fire broke out in a hose connected to a tank pump at a Marathon refinery.
No injuries were reported, and no off-site impacts occurred, said Shane Pochard, communications manager for
Pochard had no comment on whether operations at the refinery had been affected.
Emergency crews continued to work the scene Thursday afternoon to extinguish the blaze.
On 1 February,
http://www.icis.com/Articles/2013/02/28/9645606/tank-pump-hose-caught-fire-at-marathon-refinery-spokesperson.html
in reply to
Getting soap client/server to work
you will get better diagnostics if you do:
use SOAP::Lite +trace => 'all';
/J\
Thanks!
Looks like I'm getting closer :) A 500 error from apache. hmmm. This should be reported in the /var/log/httpd/error_log but isn't.
Jason L. Froebe
Team Sybase member
No one has seen what you have seen, and until that happens, we're all going to think that you're nuts. - Jack O'Neil, Stargate SG-1
Figured it out! The URI was wrong! :)
print SOAP::Lite
->uri('')
->proxy('')
->hi()
->result;
Yeah, the uri actually becomes the namespace for the content of the SOAP Body element. SOAP::Lite uses this to determine the module to use for an rpc/encoded request. So your original attempt would try to load a module like cgi-bin::soap::Demo. The namespace URI doesn't have to be a URI of an actual resource and shouldn't be confused with one. This is one reason why I tend to use a URI of the form urn:Demo (which should work in your case; avoiding the http scheme reduces the potential for confusion IM
http://www.perlmonks.org/index.pl/jacques?node_id=587741
Learn how to build a Java EE application that uses Ajax, JavaServer Faces, and ADF Faces for the Web tier and EJB3 for the business logic.
Published March 2007
Enterprise applications can use Ajax to provide better Web interfaces that increase user productivity. In many cases, it is possible to submit a partially completed form to obtain useful information from the server application. For example, the server could perform some early validation, or it could use the partial user input to suggest values for the empty form fields, speeding up the data entry process. Ajax can also be used to connect to data feeds whose information is displayed without refreshing the whole page.
In this article, I'll present a simple application containing a Web page that uses Ajax to connect to an ad feed. The user input is submitted to a controller servlet that invokes a business method of an EJB component to select a personalized ad. The business method returns an entity that is used in a JSP page to generate the Ajax response, which is then inserted in the Web page, using DHTML. The following diagram depicts the application's architecture:
Figure 1
I'll rely on Oracle JDeveloper wizards for creating the application's components and user interface.
In this section, I'll use JDeveloper's EJB wizards to create a simple entity and an EJB session component whose business method will be invoked from an Ajax client via an Ajax-EJB controller. Launch JDeveloper and create a new project named ajaxejb.
Right-click the newly created project in the Applications navigator and click New. In the New Gallery window, expand the Business Tier node in the left panel and select EJB. Then, select Entity (JPA/EJB 3.0) in the right panel of the same window and click OK:
Figure 2
Skip the Welcome page of the Create JPA / EJB 3.0 wizard and provide the AdEntity name of the Entity Class. The wizard will also change the Entity Name field:
Figure 3
The next step of the wizard lets you select an inheritance option. We'll use No inheritance for this first example:
Figure 4
The third step lets you enter a table name, select a schema, and provide the entity's superclass:
Figure 5
The fourth step allows you to change the default id and version fields. For our example, uncheck the Include @Version field option, change the name of the @Id field to keyword, and select the String type for the keyword field:
Figure 6
Click Next to review the selected options of the entity and then click Finish. JDeveloper will create the AdEntity in the ajaxejb package of the Application Sources folder:
Figure 7
Now you can add new fields and methods to the entity, using JDeveloper's wizards. Right-click Fields in the Structure navigator and click New Field. Provide the field's name, select its type and click OK:
Figure 8
After adding the url field, repeat the same procedure to add another field named content. Here is the source code of the AdEntity:
package ajaxejb;
import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
@Entity
@NamedQuery(name = "AdEntity.findAll",
query = "select o from AdEntity o")
public class AdEntity implements Serializable {
@Id
private String keyword;
public String url;
public String content;
public AdEntity() {
}
public String getKeyword() {
return keyword;
}
public void setKeyword(String keyword) {
this.keyword = keyword;
}
public String getUrl() {
return url;
}
public void setUrl(String url) {
this.url = url;
}
public String getContent() {
return content;
}
public void setContent(String content) {
this.content = content;
}
}
In the following subsection, I'll show how to create a stateless session bean that uses AdEntity.
Right-click the ajaxejb project in the Applications navigator and click New. In the New Gallery window, expand the Business Tier node in the left panel and select EJB. Then, select Session Bean (EJB 1.1/2.x/3.0) in the right panel of the same window and click OK:
Figure 9
Skip the Welcome page of the Create Session Bean wizard, provide the AdSession name of the EJB, select the Stateless session type, select the Container transaction type and you can optionally instruct JDeveloper to Generate Session Facade Methods:
Figure 10
Next you can choose the methods to expose through the facade:
Figure 11
The third step lets you enter the name of the bean class. Enter ajaxejb.AdSessionBean and click Next:
Figure 12
The fourth step allows you to select the interfaces that the EJB will implement. We could use a local interface because we'll test the sample application in JDeveloper with a single OC4J instance. In a production environment, however, we might want to use a dedicated ad server (or maybe a cluster) running the session bean. The controller servlet that calls the bean's methods could be deployed on multiple Web servers and could also be used in different Web applications. By instructing JDeveloper to generate a Remote Interface named ajaxejb.AdSession, we'll still be able to run the sample application with the embedded OC4J server of JDeveloper, and we'll have maximum flexibility in a production environment. Nevertheless, a local interface would offer better performance if we were sure we want to deploy both the session bean and the controller servlet on the same server. It is also possible to implement both remote and local interfaces in the same bean. In this example, we'll implement only a remote interface:
Figure 13
Click Next to review the selected options of the session bean and then click Finish. JDeveloper will create the AdSessionBean in the ajaxejb package of the Application Sources folder:
Figure 14
Let's add now a business method named selectAd(), using the wizard offered by JDeveloper. Right-click Methods in the Structure navigator and click New Method. In the Bean Method Details dialog, enter the selectAd method name, select the ajaxejb.AdEntity return type, enter the String userInput parameter and click OK:
Figure 15
JDeveloper will update both the AdSession interface and the AdSessionBean class. Here is the source code of the AdSession interface:
package ajaxejb;
import java.util.List;
import javax.ejb.Remote;
@Remote
public interface AdSession {
Object mergeEntity(Object entity);
Object persistEntity(Object entity);
List<AdEntity> queryAdEntityFindAll();
void removeAdEntity(AdEntity adEntity);
AdEntity selectAd(String userInput);
}
The AdSessionBean class generated by JDeveloper contains the session facade methods followed by the selectAd() method:
package ajaxejb;
import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
@Stateless(name="AdSession")
public class AdSessionBean implements AdSession {
@PersistenceContext(unitName="ajaxejb")
private EntityManager em;
public AdSessionBean() {
}
public Object mergeEntity(Object entity) {
return em.merge(entity);
}
public Object persistEntity(Object entity) {
em.persist(entity);
return entity;
}
/**
* select o from AdEntity o
*/
public List<AdEntity> queryAdEntityFindAll() {
return em.createNamedQuery("AdEntity.findAll").getResultList();
}
public void removeAdEntity(AdEntity adEntity) {
adEntity = em.find(AdEntity.class, adEntity.getKeyword());
em.remove(adEntity);
}
public AdEntity selectAd(String userInput) {
...
}
}
Now we have to code the body of the selectAd() method.
A real-world application would have a Web interface for creating, updating and deleting AdEntity instances, using the methods of the AdSession bean. The selectAd() method would use some algorithm to match the user's interest to the ads retrieved from a database, using a query method. In this article, however, we'll keep things simple so that we can focus on the main topic. The selectAd() method will pick a random word from the user input and will return a new AdEntity instance:
package ajaxejb;
...
import java.util.Random;
import java.util.StringTokenizer;
...
public class AdSessionBean implements AdSession {
...
public AdEntity selectAd(String userInput) {
String keyword = "nothing";
if (userInput != null && userInput.length() > 0) {
StringTokenizer st = new StringTokenizer(
userInput, ",.?!'& \t\n\r\f");
int n = st.countTokens();
if (n > 0) {
int k = new Random().nextInt(n);
for (int i = 0; i < k; i++)
st.nextToken();
keyword = st.nextToken();
}
}
AdEntity ad = new AdEntity();
ad.setKeyword(keyword);
ad.setUrl(keyword + ".com");
ad.setContent("Buy " + keyword + " from " + ad.getUrl());
return ad;
}
}
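Since selectAd() is plain Java apart from the entity, its keyword-picking logic can be exercised on its own. The following sketch (class and method names here are mine, not part of the article) extracts that logic so you can see what the business method would return for a given input:

```java
import java.util.Random;
import java.util.StringTokenizer;

public class SelectAdSketch {

    // Same tokenizing/random-pick logic as selectAd(), minus the entity.
    static String pickKeyword(String userInput, Random rnd) {
        String keyword = "nothing";
        if (userInput != null && userInput.length() > 0) {
            StringTokenizer st = new StringTokenizer(
                userInput, ",.?!'& \t\n\r\f");
            int n = st.countTokens();
            if (n > 0) {
                int k = rnd.nextInt(n);
                for (int i = 0; i < k; i++)
                    st.nextToken();
                keyword = st.nextToken();
            }
        }
        return keyword;
    }

    public static void main(String[] args) {
        // A fixed seed makes the "random" pick repeatable for the demo.
        String keyword = pickKeyword("cheap fast reliable hosting", new Random(42));
        System.out.println("keyword = " + keyword);
        System.out.println("ad = Buy " + keyword + " from " + keyword + ".com");
    }
}
```

Note how empty or null input falls back to the "nothing" keyword, so the method never returns a half-initialized ad.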
In this section, I'll create the controller servlet with JDeveloper. Then, I'll show how to use dependency injection, invoke the EJB's methods and generate the response for the Ajax client.
Right-click the ajaxejb project in the Applications navigator and click New. In the New Gallery window, expand the Web Tier node in the left panel and select Servlets. Then, select HTTP Servlet in the right panel of the same window and click OK:
Figure 16
Skip the Welcome page of the Create HTTP Servlet wizard and select Servlet 2.4\JSP 2.0 (J2EE 1.4):
Figure 17
Enter the AdServlet name of the servlet class, select the ajaxejb package for the servlet class, select the XML content type and click Next:
Figure 18
The next page of the wizard allows you to configure the servlet mapping:
Figure 19
The final page lets you enter information about the request parameters. In the case of this example, we'll use a single parameter named userInput:
Figure 20
Click Finish to generate the AdServlet class:
Figure 21
Now we have to modify the servlet. First of all, add the following lines that inject the AdSession bean, so that we can access it in the doGet() method of the servlet:
import javax.ejb.EJB;
...
public class AdServlet extends HttpServlet {
@EJB(name="AdSession")
private AdSession adSession;
...
}
In this example, we want the doGet() method to return an Ajax response that will contain information extracted from an AdEntity instance. The value of the userInput parameter is passed to the selectAd() method of the AdSession bean. The business method returns an instance of AdEntity that is stored into the request scope with setAttribute(). After that, doGet() sets the Content-Type and Cache-Control headers, and includes the content generated by a page named AdResponse.jsp. Here is the complete source of the AdServlet class:
package ajaxejb;
import javax.ejb.EJB;
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;
public class AdServlet extends HttpServlet {
@EJB(name="AdSession")
private AdSession adSession;
public void init(ServletConfig config) throws ServletException {
super.init(config);
}
public void doGet(HttpServletRequest request,
HttpServletResponse response)
throws ServletException, IOException {
String userInput = request.getParameter("userInput");
AdEntity adEntity = adSession.selectAd(userInput);
request.setAttribute("adEntity", adEntity);
response.setHeader("Cache-Control", "no-cache");
response.setContentType("text/xml");
request.getRequestDispatcher("/AdResponse.jsp")
.include(request, response);
}
}
Right-click the Web Content folder of the ajaxejb project in the Applications navigator and click New. In the New Gallery window, expand the Web Tier node in the left panel and select JSP. Then, select the JSP item in the right panel of the same window and click OK:
Figure 22
Skip the Welcome screen of the Create JSP wizard and enter the AdResponse.jsp name of the JSP page:
Figure 23
The next step of the Create JSP wizard lets you select an error handling option:
Figure 24
At the third step, you can select the JSP libraries that will be used in the page. For this example, select JSTL Core 1.1 in the left panel and move it into the right panel with the arrow button:
Figure 25
The fourth step allows you to provide other page options. Since the AdResponse.jsp page is used to generate an Ajax response, select None as the HTML version and leave an empty title:
Figure 26
Click the Finish button to generate the JSP page:
Figure 27
Remove the HTML content generated by JDeveloper and add the following lines that produce a link:
<a href='<c:out value="${adEntity.url}"/>'>
<c:out value="${adEntity.content}"/>
</a>
The AdEntity instance used above is set as a request attribute in the AdServlet class.
When you generated the AdServlet class, JDeveloper also created the web.xml application descriptor, which contains the servlet mapping information:
<web-app ...>
...
<servlet>
<servlet-name>AdServlet</servlet-name>
<servlet-class>ajaxejb.AdServlet</servlet-class>
</servlet>
...
<servlet-mapping>
<servlet-name>AdServlet</servlet-name>
<url-pattern>/adservlet</url-pattern>
</servlet-mapping>
...
</web-app>
You must change the Web application version from 2.4 to 2.5 and the schema file name from web-app_2_4.xsd to web-app_2_5.xsd so that you can use dependency injection in the AdServlet class:
<web-app ...
xsi:schemaLocation=".../web-app_2_5.xsd"
version="2.5" ...>
...
</web-app>
If you don't change the Web application version, the EJB will not be injected and the application will throw a NullPointerException.
In this section, I'll create a JSF page and I'll use JDeveloper to add ADF Faces components. After that, I'll present the JavaScript code that invokes the controller servlet, passing the state of the ADF Faces components. The JSF page will be updated with the information retrieved from the Ajax response.
Right-click the Web Content folder of the ajaxejb project in the Applications navigator and click New. In the New Gallery window, expand the Web Tier node in the left panel and select JSF. Then, select the JSF JSP item in the right panel of the same window and click OK:
Figure 28
Skip the Welcome screen of the Create JSF JSP wizard and enter the AdForm.jsp name of the JSF page:
Figure 29
The next step of the Create JSF JSP wizard lets you select a component binding option:
Figure 30
At the third step, you can select the JSP libraries that will be used in the JSF page. JSF Core and JSF HTML are selected by default. In this example, we'll also use the ADF Faces components. Therefore select ADF Faces Components and ADF Faces HTML in the left panel and move them into the right panel with the arrow button:
Figure 31
The fourth step allows you to provide other page options. Enter the Ajax-JSF Page title:
Figure 32
Click the Finish button to generate the JSF page:
Figure 33
Since this is the first JSF page of the application, JDeveloper creates the faces-config.xml file and configures the Faces Servlet in the web.xml file:
<web-app ...>
...
<servlet>
<servlet-name>Faces Servlet</servlet-name>
<servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
</servlet>
...
<servlet-mapping>
<servlet-name>Faces Servlet</servlet-name>
<url-pattern>/faces/*</url-pattern>
</servlet-mapping>
...
</web-app>
Right click the h:form element of the AdForm.jsp page in the Structure navigator and click Convert. In the Convert Form dialog, select the ADF Faces Core library, select the Form element, and click OK:
Figure 34
JDeveloper will replace the <h:form> component with <af:form>. Since this is the first use of ADF Faces, JDeveloper automatically replaces the HTML tags of the page with <afh:html>, <afh:head> and <afh:body>. In addition, JDeveloper creates the adf-faces-config.xml file, and configures the adfFaces filter and the resources servlet in the web.xml file:
<web-app ...>
<filter>
<filter-name>adfFaces</filter-name>
<filter-class>
oracle.adf.view.faces.webapp.AdfFacesFilter
</filter-class>
</filter>
<filter-mapping>
<filter-name>adfFaces</filter-name>
<servlet-name>Faces Servlet</servlet-name>
</filter-mapping>
...
<servlet>
<servlet-name>resources</servlet-name>
<servlet-class>
oracle.adf.view.faces.webapp.ResourceServlet
</servlet-class>
</servlet>
...
<servlet-mapping>
<servlet-name>resources</servlet-name>
<url-pattern>/adf/*</url-pattern>
</servlet-mapping>
...
</web-app>
Go back to the AdForm.jsp page and right click the af:form element in the Structure navigator. Then, select Insert inside af:form and click JSF HTML:
Figure 35
Select Panel Grid in the Insert JSF HTML Item dialog and click OK:
Figure 36
Skip the Welcome screen of the Create PanelGrid wizard and enter 1 in the Number of Columns field:
Figure 37
Click Finish to insert the <h:panelGrid> component within the <af:form> element of the AdForm.jsp page. Right click the h:panelGrid element in the Structure navigator, select Insert inside h:panelGrid and click ADF Faces Core:
Figure 38
Select InputText in the Insert ADF Faces Core Item dialog and click OK:
Figure 39
JDeveloper inserts an <af:inputText> component within the <h:panelGrid> element of the AdForm.jsp page. Use the Property Inspector to change the label to User Input and the rows property to 4:
Figure 40
You can also use the Component Palette to insert components. In the text editor, move the cursor after the <af:inputText> element, select the ADF Faces Core library in the Component Palette and click Input Text to insert a second text field. Then, use the Property Inspector to change the label to More Input:
Figure 41
ADF Faces components support Partial Page Rendering (PPR) with the following attributes: autoSubmit, partialTriggers, and partialSubmit. You'll use these attributes if you want to update a component when the input value of another component (called trigger) is changed. If the autoSubmit attribute of an input component is true, the enclosing form is submitted when the component's value is changed in the browser. The components that must be updated use the partialTriggers attribute to specify the IDs of the trigger components.
When you use the attributes mentioned above, the ADF Faces components handle the communication and you don't have to worry about JavaScript event handlers and Ajax callbacks. There are cases, however, when you must use JavaScript and the XMLHttpRequest API in the browser. For example, if you want to submit the form data while the user is typing something or if you want to access resources such as our controller servlet, you'll need to provide your own JavaScript code.
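To illustrate the declarative PPR style described above, here is a hedged sketch of how the attributes might be wired together; the component ids and EL bindings are invented for the example and are not taken from the sample application:

```xml
<!-- Changing "city" auto-submits the form; only "greeting" is re-rendered,
     because it names "city" in its partialTriggers attribute. -->
<af:inputText id="city" label="City" autoSubmit="true" value="#{bean.city}"/>
<af:outputText id="greeting" partialTriggers="city" value="#{bean.greeting}"/>
```

With this wiring, the ADF Faces runtime performs the partial submit and partial render itself; no hand-written JavaScript is involved.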
The sample application contains a JavaScript file named AdScript.js. This file must be imported in the header of the page, using a <script> element within a <f:verbatim> component:
<f:verbatim>
<script src="AdScript.js" type="text/javascript">
</script>
</f:verbatim>
The AdScript.js file contains the JavaScript code for invoking the controller servlet. The Ajax response generated with AdResponse.jsp will be inserted within a <div> section of AdForm.jsp. The <div> element must be included in a <f:verbatim> component:
<f:verbatim>
<div id="ad">
</div>
</f:verbatim>
The form page needs some initialization after the browser finishes loading the page. The initialization code can be found in the init() function of the AdScript.js file. Select the afh:body element of AdForm.jsp in the Structure navigator and set the onload property:
Figure 42
JDeveloper will modify the <afh:body> element:
<afh:body onload="init()">
Here is the complete source code of the AdForm.jsp page:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ taglib uri="" prefix="af"%>
<%@ taglib uri="" prefix="afh"%>
<f:view>
<afh:html>
<afh:head title="Ajax-JSF Page">
<meta http-equiv="Content-Type"
content="text/html; charset=windows-1252"/>
<f:verbatim>
<script src="AdScript.js" type="text/javascript">
</script>
</f:verbatim>
</afh:head>
<afh:body onload="init()">
<af:form>
<h:panelGrid columns="1">
<f:verbatim>
<div id="ad">
</div>
</f:verbatim>
<af:inputText label="User Input" rows="4"/>
<af:inputText label="More Input"/>
</h:panelGrid>
</af:form>
</afh:body>
</afh:html>
</f:view>
The following subsection explains the code of the AdScript.js file.
Right-click the Web Content folder of the ajaxejb project in the Applications navigator and click New. In the New Gallery window, expand the Web Tier node in the left panel and select HTML. Then, select the JavaScript File item in the right panel of the same window and click OK:
Figure 43
Enter AdScript.js as the name of the JavaScript file and click OK:
Figure 44
JDeveloper will create an empty JavaScript file. Next you have to add the code that will invoke the Ajax-EJB controller. First of all, you need a function for creating the XMLHttpRequest object. The createRequest() function takes three parameters: the HTTP method, the URL of the Ajax controller, and the callback function. The XMLHttpRequest object is initialized with its open() method and then createRequest() sets the onreadystatechange property, which will contain a reference to the callback function that the XMLHttpRequest object will invoke every time its state is changed. Finally, createRequest() returns the object that will be used later to send the Ajax request:
function createRequest(method, url, callback) {
var request;
if (window.XMLHttpRequest)
request = new XMLHttpRequest();
else if (window.ActiveXObject)
request = new ActiveXObject("Microsoft.XMLHTTP");
else
return null;
request.open(method, url, true);
function callbackWrapper() {
callback(request);
}
request.onreadystatechange = callbackWrapper;
return request;
}
The XMLHttpRequest object passes no parameter to the function that it calls when its state changes, but the callback will need the request object for obtaining the Ajax response. Therefore, createRequest() uses an inner function acting as a wrapper for the actual callback, which will obtain a reference to the request object.
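The wrapper technique can be seen in isolation outside the browser. The following sketch (function and variable names invented for the example) simulates an event source that, like XMLHttpRequest, calls its handler with no arguments; the closure still delivers the right request object to the real callback:

```javascript
// Build a zero-argument handler that closes over `request`, the way
// createRequest() builds callbackWrapper for onreadystatechange.
function makeWrapper(request, callback) {
  // the inner function captures `request` from the enclosing scope
  return function callbackWrapper() {
    callback(request);
  };
}

// A stand-in for an XMLHttpRequest that has completed successfully.
var fakeRequest = { readyState: 4, status: 200, responseText: "ad text" };
var received = null;

var handler = makeWrapper(fakeRequest, function (req) {
  received = req.responseText;
});

handler(); // invoked with no arguments, as the browser would do
console.log(received); // "ad text"
```

The same pattern works for any browser event whose handler receives no useful arguments: close over whatever state the callback will need.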
Each XMLHttpRequest object should be used for a single Ajax request so that the application works properly in all Ajax-capable browsers. In other words, you should not reuse the same XMLHttpRequest object for multiple requests. Moreover, these objects should be destroyed after they are used to avoid a memory leak in the browser. From my experience, the safest way to destroy an XMLHttpRequest object is to first set its onreadystatechange property to a JavaScript function that does nothing, then call the abort() method, and finally free the memory with delete:
function deleteRequest(request) {
function doNothing() {
}
request.onreadystatechange = doNothing;
request.abort();
delete request;
}
The adUpdate() function is the callback passed to createRequest(). If the request is completed (ready state is 4) and successful (status is 200), adUpdate() gets the content of the Ajax response from the responseText property of the request object. Then, adUpdate() stores the responseText into the innerHTML property of the element that has the ad id. This is the equivalent of inserting the Ajax response within the <div id="ad"></div> element of the AdForm.jsp page:
var debug = true;
function adUpdate(request) {
if (request.readyState == 4) {
if (request.status == 200) {
var adSection = document.getElementById("ad");
adSection.innerHTML = request.responseText;
} else if (debug) {
if (request.statusText)
alert(request.statusText);
else
alert("HTTP Status: " + request.status);
}
}
}
In the sample application of this article, the user's input must be passed to the Ajax-EJB controller servlet. The getUserInput() function gets the values of all text fields and concatenates these values, returning a single string:
function getUserInput() {
var userInput = "";
var forms = document.forms;
for (var i = 0; i < forms.length; i++) {
var elems = forms[i].elements;
for (var j = 0; j < elems.length; j++) {
var elemType = elems[j].type.toLowerCase();
if (elemType == "text" || elemType == "textarea") {
var elemValue = elems[j].value;
userInput += " " + elemValue;
}
}
}
return userInput;
}
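Because getUserInput() only relies on document.forms being an array-like collection of forms, each with an elements array, its traversal logic can be exercised outside the browser against a stand-in structure. The sketch below renames the function and takes the forms as a parameter purely for testability; the loop itself mirrors the one above:

```javascript
// Same traversal as getUserInput(), but over an explicit `forms`
// argument instead of document.forms, so it can run without a DOM.
function collectTextValues(forms) {
  var userInput = "";
  for (var i = 0; i < forms.length; i++) {
    var elems = forms[i].elements;
    for (var j = 0; j < elems.length; j++) {
      var elemType = elems[j].type.toLowerCase();
      if (elemType == "text" || elemType == "textarea") {
        userInput += " " + elems[j].value;
      }
    }
  }
  return userInput;
}

var fakeForms = [{
  elements: [
    { type: "text", value: "EJB" },
    { type: "hidden", value: "ignored" }, // skipped: not a text field
    { type: "textarea", value: "Ajax" }
  ]
}];

console.log(collectTextValues(fakeForms)); // " EJB Ajax"
```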
The sendAdRequest() function destroys the previous XMLHttpRequest object (if any), adds the userInput parameter to the URL, creates a new request object, and calls its send() method to submit the Ajax request to the controller servlet. The URL starts with "../" because the servlet is invoked from a JSF page whose URL contains the /faces prefix:
var adURL = "../adservlet";
var adRequest = null;
function sendAdRequest() {
if (adRequest)
deleteRequest(adRequest);
var url = adURL + "?userInput=" + escape(getUserInput());
adRequest = createRequest("GET", url, adUpdate);
adRequest.send(null);
}
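One caveat worth noting about the URL construction above: escape() is a legacy function that leaves characters such as "+" and "@" unencoded and mishandles non-ASCII text. In current code, encodeURIComponent() is the safer way to build a query-string value:

```javascript
// escape() would leave the "+" unencoded, which a servlet decodes as a
// space; encodeURIComponent() encodes every reserved character correctly.
var userInput = "EJB + JSF";
var url = "../adservlet?userInput=" + encodeURIComponent(userInput);
console.log(url); // "../adservlet?userInput=EJB%20%2B%20JSF"
```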
The init() function calls sendAdRequest() to initialize the ad section of the page. Then, setInterval() is used to instruct the browser to call sendAdRequest() every second:
function init() {
sendAdRequest();
setInterval("sendAdRequest()", 1000);
}
Right-click the AdForm.jsp page of the Web Content folder and click Run. JDeveloper will start the embedded OC4J server and will open the JSF page in a browser window. If you start typing something in one of the form's text fields, you'll notice how the ad changes every second.
In this article, you've learned how to build Java EE Ajax applications, using JDeveloper's visual editors and wizards for creating EJB3 components, Ajax-EJB controller servlets, and JSF pages based on ADF Faces. You've also learned several Ajax techniques, which will allow you to properly manage the XMLHttpRequest objects.
http://www.oracle.com/technetwork/articles/cioroianu-ajaxejb-092142.html
I am having a problem including this line though: <?xml version="1.0" encoding="UTF-8"?>
Any ideas how to parse this? Or am I just doing it wrong?
"Note: many XHTML pages begin with an optional XML prologue ( <?xml> ) that precedes the DOCTYPE and namespace declarations. Unfortunately, this XML prologue causes problems in many browsers..."
print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
Nick
toadhall, I know of the errors this can cause, but my user base uses newer browsers. I have tested all my websites with that encoding and it works on all mine (IE, Opera, Mozilla).
print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
I use PHP and XHTML 1.0 strict. I do this:
<?php echo '<?xml version="1.0" encoding="iso-8859-1"?>'; ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" ""> <html xmlns="" xml:
Toadhall is wrong about this. You *want* the brackets. If you do not use brackets, but instead use &lt;, it will not recognize your xml tag as a tag, but as text, and you don't want that.
Incidentally, I had trouble with Unicode and validation (because I don't have an editor that doesn't add the .... what is it? BOM-something, and W3C chokes on validation). What have you done?
Tom
<added>Its not PHP but I havent gotten far enough in development to do that</added>
<addeded> WOO HOO Mayed Preferred Member </addeded>
Parse error: parse error in localhostinfo/site.php on line 1
I don't understand. The site you listed looks perfect and you say the problem is not with your XHTML.
Then you say it isn't with PHP either, but you get a parse error, so that can only be PHP.
The PHP code that I posted works fine and generates valid XHTML. If you are getting a parse error, it's coming from somewhere else.
Try creating "skeleton" pages that just call stub functions (routines that just return without doing anything). Get rid of complexity until you locate the problem. If you still can't figure it out, post the code.
If I had to guess, upon thinking about it, I bet you're missing a semicolon after line 1 or something like that.
<?PHP print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>"; ?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" xml: <head>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" xml: <head>
Also tried: <?PHP echo '<?xml version="1.0" encoding="UTF-8"?>'; ?>
The reason for the URL in Profile was in reference to this
1. Your parentheses don't match.
2. I'm not certain, but the standard is <?php. I'm not sure whether or not uppercase is allowed, though it would take two seconds to test (but I'm too lazy).
That is exactly what I have, except for lowercase tags for invoking php, and it validates.
Thanks for yalls help.
Syntax highlighting, code completion, all that sort of stuff is just eye candy. It can make things easier to read, but it doesn't help much in the end. Brace matching is such a help when your eyes just won't work. There are a lot of editors that support it.
- I've once had emacs set up for it (just parens for Scheme programming)
- I mostly use HAPedit, which does it pretty nicely, but I'm sort of biased there because I'm the one who made the feature request and so it's sort of done to the way I like it.
- I think TextPad, EditPad and UltraEdit can all do this, but I don't use them so I don't know.
Cheers,
http://www.webmasterworld.com/forum88/474.htm
25 September 2009 12:46 [Source: ICIS news]
LONDON (ICIS news)--Jurong Island, Singapore’s vast chemical hub, has marked the completion of its land reclamation more than 20 years ahead of schedule, sources said on Friday.
Cedric Foo, the chairman of JTC Corp, which is the owner-operator of
Fifteen years ago,
When reclamation work started in 1995, it was expected to be completed by 2030, he said.
The initial aim of the project was to facilitate
The concept behind
Foo noted that in 2000, 61 petrochemical companies with a total investment of $21bn (€14bn) were represented on
Companies operating on the island include Air Products, AkzoNobel, Asahi Kasei, Celanese, Chevron Philips, Eastman Chemical, ExxonMobil, Huntsman Corp, Mitsui Chemicals, Shell, Sumitomo Chemicals and Teijin, according to JTC.
“Notwithstanding the milestone event today, we will continue to find ways to adjust the Jurong Island profile to bring about stronger integration for greater operating efficiencies by the companies, and in particular to include new entrants,” said Lim Hng Kiang, Singapore’s trade and industry minister.
“In the longer term, our plan is to position
For more
http://www.icis.com/Articles/2009/09/25/9250479/singapore-chemicals-hub-jurong-island-celebrates-milestone.html
It is often necessary to perform various actions at different stages
of a persistent object's lifecycle. JDO includes two mechanisms for
monitoring changes in the lifecycle of your persistent objects: the
InstanceCallbacks interface, and the
InstanceLifecycleListener event framework.
Your persistent classes can implement the
InstanceCallbacks
family of interfaces to receive callbacks when certain JDO
lifecycle events take place. There are four callbacks available:
The
LoadCallback.jdoPostLoad
method is called by the JDO implementation after the
default fetch group
fields of your class have been loaded from the datastore.
Default fetch groups are explained in
Chapter 5.
StoreCallback.jdoPreStore is
called just before the persistent values in your object
are flushed to the datastore.
Note that the persistent identity of the object may not
have been assigned yet when this method is called.
The
ClearCallback.jdoPreClear
method is called before the persistent fields of your
object are cleared.
JDO implementations clear the persistent state of objects
for several reasons, most of which will be covered later in
this document. You can use
jdoPreClear
to clear non-persistent cached data and null
relations to other objects. You should not access the
values of persistent fields in this method.
DeleteCallback.jdoPreDelete is
called before an object transitions to the deleted state.
Access to persistent fields is valid within this method.
You might
use this method to cascade the deletion to related objects
based on complex criteria, or to perform other cleanup.
Unlike the
PersistenceCapable interface,
you must implement the
InstanceCallbacks
interfaces explicitly if you want to receive lifecycle callbacks.
Example 4.3. Using Callback Interfaces
/**
 * Example demonstrating the use of the InstanceCallbacks interfaces to
 * persist a java.net.InetAddress and implement a privately-owned relation.
 */
public class Host implements LoadCallback, StoreCallback, DeleteCallback {
    ...
    public void jdoPreDelete() {
        // delete certain related devices when this object is deleted, based
        // on business logic
        PersistenceManager pm = JDOHelper.getPersistenceManager (this);
        pm.deletePersistentAll (filterDependents (devices));
    }
}
Only persistent classes can implement the
InstanceCallbacks interfaces. This makes sense for
lifecycle actions such as caching internal state and deleting
dependent relations, but is clumsy for cross-cutting concerns like
logging and auditing. The lifecycle listener event framework solves
this problem by allowing non-persistent classes to subscribe
to lifecycle events. The framework consists of a common event
class, a common super-interface for event listeners, and several
individual listener interfaces. A concrete listener class can
implement any combination of listener interfaces.
InstanceLifecycleEvent: The
event class. The source of a lifecycle event is the
persistent object for which the event was triggered.
InstanceLifecycleListener:
Common base interface for all listener types.
InstanceLifecycleListener has no operations, but
gives a measure of type safety when adding listener objects
to a
PersistenceManager or
PersistenceManagerFactory.
See Chapter 8, PersistenceManager and
Chapter 7, PersistenceManagerFactory.
LoadLifecycleListener:
Listens for persistent state loading events. Its
postLoad method is equivalent to
the
InstanceCallbacks.jdoPostLoad
method described above.
StoreLifecycleListener:
Listens for persistent state flushes. Its
preStore method is equivalent to
the
InstanceCallbacks.jdoPreStore
method described above. Its
postStore
handler is invoked after the data for
the source object has been flushed to the database. Unlike
preStore, the source object is
guaranteed to have a persistent identity by the time
postStore is triggered.
ClearLifecycleListener:
Receives notifications when objects clear their persistent
state. Its
preClear method is
equivalent to
InstanceCallbacks.jdoPreClear.
The
postClear event is sent
just after the source object's state is cleared.
DeleteLifecycleListener:
Listens for object deletion events. Its
preDelete method is equivalent
to
InstanceCallbacks.jdoPreDelete.
Its
postDelete handler is triggered
after the source object has transitioned to the deleted
state. Access to persistent fields is not allowed in
postDelete.
CreateLifecycleListener:
The
postCreate event is fired when
an object first transitions from unmanaged to
persistent-new, such as during a call to
PersistenceManager.makePersistent.
DirtyLifecycleListener:
Dirty events fire when an object is first modified
(in JDO parlance, becomes dirty) within a transaction.
The runtime invokes
preDirty before
applying the change to the object, and
postDirty
after applying the change.
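As an illustration of the cross-cutting use case that motivates this framework, the sketch below shows a non-persistent audit listener. The two small interfaces are hypothetical stand-ins for the real event and listener types (whose package depends on your JDO implementation) so that the pattern can be shown self-contained; only the method names mirror the framework described above:

```java
import java.util.ArrayList;
import java.util.List;

public class AuditExample {
    // Hypothetical stand-ins for the framework types described above;
    // the real interfaces carry more context on the event object.
    interface InstanceLifecycleEvent { Object getSource(); }

    interface DeleteLifecycleListener {
        void preDelete(InstanceLifecycleEvent event);
        void postDelete(InstanceLifecycleEvent event);
    }

    // A non-persistent, cross-cutting listener: it audits deletions
    // without the persistent class implementing InstanceCallbacks.
    static class AuditingDeleteListener implements DeleteLifecycleListener {
        final List<String> auditLog = new ArrayList<String>();

        public void preDelete(InstanceLifecycleEvent event) {
            // persistent fields of the source are still accessible here
            auditLog.add("deleting: " + event.getSource());
        }

        public void postDelete(InstanceLifecycleEvent event) {
            // the source has transitioned to the deleted state;
            // its persistent fields must not be accessed any more
            auditLog.add("deleted: " + event.getSource());
        }
    }

    public static void main(String[] args) {
        AuditingDeleteListener listener = new AuditingDeleteListener();
        // Simulate the runtime firing both events for one object.
        InstanceLifecycleEvent event = new InstanceLifecycleEvent() {
            public Object getSource() { return "Host#42"; }
        };
        listener.preDelete(event);
        listener.postDelete(event);
        System.out.println(listener.auditLog);
        // prints: [deleting: Host#42, deleted: Host#42]
    }
}
```

In a real application the listener would be registered on the PersistenceManager or PersistenceManagerFactory, as noted above, and the runtime would fire the events.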
http://docs.oracle.com/cd/E15523_01/apirefs.1111/e13946/jdo_overview_pc_callbacks.html
Branch: refs/heads/nested
Commit: 19f202cc163ce24756aa0493936eead05ed8ec8b
Author: William S Fulton <wsf@...>
Date: 2013-11-30 (Sat, 30 Nov 2013)
Changed paths:
M Examples/test-suite/nested_structs.i
Log Message:
-----------
C nested struct passed by value example
This was causing problems in Octave as wrappers were compiled as C++.
Solution has already been committed and required regenerating the inner struct into
the global C++ namespace (which is where it is intended to be in C).
Commit: df679071681242ec2619c82693f261f1f1c34b80
Author: William S Fulton <wsf@...>
Date: 2013-12-01 (Sun, 01 Dec 2013)
Changed paths:
M Examples/test-suite/common.mk
A Examples/test-suite/nested_private.i
Log Message:
-----------
Testcase of private nested class usage causing segfault
Needs fixing for C#/Java
Compare:
http://sourceforge.net/p/swig/mailman/swig-cvs/thread/529c46f03f90f_4246134dd4c926e9@hookshot-fe6-pe1-prd.aws.github.net.mail/
Spring 3 MVC: Handling Forms in Spring 3.0 MVC
- By Viral Patel on July 5, 2010
-
Our Goal
Our goal is to create basic Contact Manager application. This app will have a form to take contact details from user. For now we will just print the details in logs. We will learn how to capture the form data in Spring 3 MVC.
Getting Started
Let us add the contact form to our Spring 3 MVC Hello World application. Open the index.jsp file and change it to the following:
File: WebContent/index.jsp
The above code will just redirect the user to contacts.html page.
The View- contact.jsp
Create a JSP file that will display Contact form to our users.
File: /WebContent/WEB-INF/jsp/contact.jsp
<%@taglib uri="" prefix="form"%>
Contact Manager
Here in the above JSP, we have displayed a form. Note that the form is submitted to the addContact.html page.
Adding Form and Controller in Spring 3
We will now add the logic in Spring 3 to display the form and fetch the values from it. For that we will create two Java files. First, Contact.java, which is nothing but the form bean used to display/retrieve data from the screen, and second, ContactController.java, which is the Spring controller class.
File: net.viralpatel.spring3.form.Contact
package net.viralpatel.spring3.form;

public class Contact {
    private String firstname;
    private String lastname;
    private String email;
    private String telephone;
    //.. getter and setter for all above fields.
}

File: net.viralpatel.spring3.controller.ContactController

package net.viralpatel.spring3.controller;

import net.viralpatel.spring3.form.Contact;

import org.springframework.stereotype.Controller;
import org.springframework.validation.BindingResult;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.SessionAttributes;
import org.springframework.web.servlet.ModelAndView;

@Controller
@SessionAttributes
public class ContactController {

    @RequestMapping(value = "/addContact", method = RequestMethod.POST)
    public String addContact(@ModelAttribute("contact") Contact contact,
            BindingResult result) {
        System.out.println("First Name:" + contact.getFirstname()
                + " Last Name:" + contact.getLastname());
        return "redirect:contacts.html";
    }

    @RequestMapping("/contacts")
    public ModelAndView showContacts() {
        return new ModelAndView("contact", "command", new Contact());
    }
}
In the above controller class, note that we have created two methods with request mappings /contacts and /addContact. The method showContacts() will be called when the user requests the url contacts.html. This method renders a model with the name "contact". Note that in the ModelAndView object we have passed a blank Contact object with the name "command". The Spring framework expects an object with the name command if you are using the <form:form> tag in your JSP file.
Also note that the method addContact() is annotated with @RequestMapping, using method = RequestMethod.POST, so it is invoked when the form is submitted to addContact.html.
The form is completed now. Just run the application in Eclipse by pressing Alt + Shift + X, R. It will show the contact form. Just enter a few values and press the Add button. Once you press the button, it will print the firstname and lastname in the sysout logs.
Download Source Code
Moving on
In this article we learned how to create a form using Spring 3 MVC and display it in a JSP. We also learned how to retrieve the form values using the ModelAttribute annotation. In the next section we will go through form validation and different data-conversion methods in Spring 3 MVC.
Related: Spring 3 MVC Multiple Row Form Submit example
Get our Articles via Email. Enter your email address.
|
http://viralpatel.net/blogs/spring-3-mvc-handling-forms/
|
CC-MAIN-2015-06
|
en
|
refinedweb
|
Prototyping with qmlscene
Qt includes
qmlscene, a utility that loads and displays QML documents even before the application is complete. This utility also provides the following additional features that are useful while developing QML applications:
- View the QML document in a maximized window.
- View the QML document in full-screen mode.
- Make the window transparent.
- Disable multi-sampling (anti-aliasing).
- Do not detect the version of the .qml file.
- Run all animations in slow motion.
- Resize the window to the size of the root item.
- Add the list of import paths.
- Add a named bundle.
- Use a translation file to set the language.
The
qmlscene utility is meant to be used for testing your QML applications, and not as a launcher in a production environment. To launch a QML application in a production environment, develop a custom C++ application or bundle the QML file in a module. See Deploying QML applications for more information. When given a bare Item as root element,
qmlscene will automatically create a window to show the scene. Notably, QQmlComponent::create() will not do such a thing. Therefore, when moving from a prototype developed with
qmlscene to a C++ application, you need to either make sure the root element is a Window or manually create a window using QtQuick's C++ API. On the flip side, the ability to automatically create a window gives you the option to load parts of your prototype separately with
qmlscene.
To load a .qml file, run the tool and select the file to be opened, or provide the file path on the command prompt:
qmlscene myqmlfile.qml
To see the configuration options, run
qmlscene with the
-help argument.
Adding Module Import Paths
Additional module import paths can be provided using the
-I flag. For example, the QML plugin example creates a C++ plugin identified with the namespace,
TimeExample. To load the plugin, you must run
qmlscene with the
-I flag from the example's base directory:
qmlscene -I imports plugins.qml
This adds the current directory to the import path so that
qmlscene will find the plugin in the
imports directory.
Note: By default, the current directory is included in the import search path, but modules in a namespace such as
TimeExample are not found unless the path is explicitly added.
Often, QML applications are prototyped with test data that is later replaced by real data sources from C++ plugins. The
qmlscene utility assists in this aspect by loading test data into the application context. It looks for a directory named
dummydata in the same directory as the target QML file, and loads the .qml files in that directory as QML objects, binding them to the root context as properties named after the files.
For example, the following QML document refers to a
lottoNumbers property which does not exist within the document:
import QtQuick

ListView {
    width: 200; height: 300
    model: lottoNumbers
    delegate: Text { text: number }
}
If, within the document's directory, there is a
dummydata directory which contains a
lottoNumbers.qml file like this:
import QtQuick

ListModel {
    ListElement { number: 23 }
    ListElement { number: 44 }
    ListElement { number: 78 }
}
Then this model would be automatically loaded into the ListView in the previous document.
Child properties are included when loaded from
dummydata. The following document refers to a
clock.time property:
The text value could be filled by a
dummydata/clock.qml file with a
time property in the root context:
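The code snippets for this example did not survive extraction. As a hedged reconstruction (the property name and binding follow the surrounding text; the object type and the value 54321 are illustrative assumptions), a dummydata/clock.qml providing a time property in the root context might look like:

```qml
import QtQuick

// Hypothetical dummydata/clock.qml: loaded by qmlscene and bound
// to the root context as a property named "clock", so that the
// main document can refer to clock.time.
QtObject {
    property int time: 54321  // illustrative value
}
```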
To replace this with real data, bind the real data object to the root context in C++ using QQmlContext::setContextProperty(). This is detailed in Integrating QML and C++.
|
https://doc.qt.io/qt-6/qtquick-qmlscene.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
- buster 4.20.1-2
- buster-backports 5.10.1-1~bpo10+1
- testing 5.10.1-2
- unstable 5.10.1-2
- experimental 5.13-1
NAME¶btrfs-balance - balance block groups on a btrfs filesystem
SYNOPSIS¶btrfs balance <subcommand> <args>
DESCRIPTION¶The primary purpose of the balance feature is to spread block groups across all available devices. Extent sharing is preserved and reflinks are not broken. Files are neither defragmented nor recompressed; file extents are preserved, but their physical location on devices will change.
The balance operation is cancellable by the user. The on-disk state of the filesystem is always consistent so an unexpected interruption (eg. system crash, reboot) does not corrupt the filesystem. The progress of the balance operation is temporarily stored as an internal state and will be resumed upon mount, unless the mount option skip_balance is specified.
Warning
running balance without filters will take a lot of time as it basically moves data/metadata from the whole filesystem and needs to update all block pointers.
The filters can be used to perform the following actions:
The filters can be applied to a combination of block group types (data, metadata, system). Note that changing only the system type needs the force option. Otherwise system gets automatically converted whenever metadata profile is converted.
When metadata redundancy is reduced (eg. from RAID1 to single) the force option is also required, and this is noted in the system log.
Note
the balance operation needs enough work space, ie. space that is completely unused in the filesystem, otherwise this may lead to ENOSPC reports. See the section ENOSPC for more details.
COMPATIBILITY¶
Note
The balance subcommand also exists under the btrfs filesystem namespace. This still works for backward compatibility but is deprecated and should not be used any more.
Note
A short syntax btrfs balance <path> works due to backward compatibility but is deprecated and should not be used any more. Use btrfs balance start command instead.
PERFORMANCE IMPLICATIONS¶Balancing operations are very IO intensive and can also be quite CPU intensive, impacting other ongoing filesystem operations. Typically large amounts of data are copied from one location to another, with corresponding metadata updates.
Depending upon the block group layout, it can also be seek heavy. Performance on rotational devices is noticeably worse compared to SSDs or fast arrays.
SUBCOMMAND¶cancel <path>
Since kernel 5.7 the response time of the cancellation is significantly improved, on older kernels it might take a long time until currently processed chunk is completely finished.
pause <path>
resume <path>
start [options] <path>
Note
the balance command without filters will basically move everything in the filesystem to a new physical location on devices (ie. it does not affect the logical properties of file extents like offsets within files and extent sharing). The run time is potentially very long, depending on the filesystem size. To prevent starting a full balance by accident, the user is warned and has a few seconds to cancel the operation before it starts. The warning and delay can be skipped with --full-balance option.
Note
when the target profile for conversion filter is raid5 or raid6, there’s a safety timeout of 10 seconds to warn users about the status of the feature
-d[<filters>]
-m[<filters>]
-s[<filters>]
-f
--background|--bg
--enqueue
-v
status [-v] <path>
Options
-v
FILTERS¶From kernel 3.3 onwards, btrfs balance can limit its action to a subset of the whole structure: type[=params][,type=...]
The available types are:
profiles=<profiles>¶The way balance operates, it usually needs to temporarily create a new block group and move the old data there, before the old block group can be removed. For that it needs the work space, otherwise it fails for ENOSPC reasons. This is not the same ENOSPC as when the free space is exhausted. This refers to space on the level of block groups, which are bigger parts of the filesystem that contain many file extents. After that it might be possible to run other filters.
CONVERSIONS ON MULTIPLE DEVICES
Conversion to profiles based on striping (RAID0, RAID5/6) require the work space on each device. An interrupted balance may leave partially filled block groups that consume the work space.
EXAMPLES¶A more comprehensive example when going from one to multiple devices, and back, can be found in section TYPICAL USECASES of btrfs-device(8).
MAKING BLOCK GROUP LAYOUT MORE COMPACT¶The layout of block groups is not normally visible; most tools report only summarized numbers of free or used space, but there are still some hints provided.
Let’s use the following real life example and start with the output:
$ btrfs filesystem df /path
Data, single: total=75.81GiB, used=64.44GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=15.87GiB, used=8.84GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Roughly calculating for data, 75G - 64G = 11G, and the used/total ratio is about 85%. How can we interpret that:
Compacting the layout could be used on both. In the former case it would spread the data of a given chunk to the others and remove it. Here we can estimate that roughly 850 MiB of data would have to be moved (85% of a 1 GiB chunk).
In the latter case, targeting the partially used chunks will have to move less data and thus will be faster. A typical filter command would look like:
# btrfs balance start -dusage=50 /path
Done, had to relocate 2 out of 97 chunks
$ btrfs filesystem df /path
Data, single: total=74.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=15.87GiB, used=8.84GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
As you can see, the total amount of data is decreased by just 1 GiB, which is an expected result. Let’s see what will happen when we increase the estimated usage filter.
# btrfs balance start -dusage=85 /path
Done, had to relocate 13 out of 95 chunks
$ btrfs filesystem df /path
Data, single: total=68.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=15.87GiB, used=8.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Now the used/total ratio is about 94% and we moved about 74G - 68G = 6G of data to the remaining blockgroups, ie. the 6GiB are now free of filesystem structures, and can be reused for new data or metadata block groups.
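The before/after ratios quoted in this walkthrough can be reproduced with a few lines of arithmetic (a quick Python sketch; the GiB figures are copied from the df outputs above):

```python
# Figures (GiB) copied from the `btrfs filesystem df` outputs above
before = {"total": 75.81, "used": 64.44}   # before balancing
after = {"total": 68.03, "used": 64.43}    # after -dusage=85

ratio_before = before["used"] / before["total"]   # roughly 0.85
ratio_after = after["used"] / after["total"]      # roughly 0.94-0.95
reclaimed = before["total"] - after["total"]      # GiB freed from data chunks

print(f"before: {ratio_before:.2f}, after: {ratio_after:.2f}, "
      f"reclaimed: {reclaimed:.2f} GiB")
```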
We can do a similar exercise with the metadata block groups, but this should not typically be necessary, unless the used/total ratio is really off. Here the ratio is roughly 50% but the difference as an absolute number is "a few gigabytes", which can be considered normal for a workload with snapshots or reflinks updated frequently.
# btrfs balance start -musage=50 /path
Done, had to relocate 4 out of 89 chunks
$ btrfs filesystem df /path
Data, single: total=68.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=14.87GiB, used=8.85GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
Just 1 GiB decrease, which possibly means there are block groups with good utilization. Making the metadata layout more compact would in turn require updating more metadata structures, ie. lots of IO. As running out of metadata space is a more severe problem, it’s not necessary to keep the utilization ratio too high. For the purpose of this example, let’s see the effects of further compaction:
# btrfs balance start -musage=70 /path
Done, had to relocate 13 out of 88 chunks
$ btrfs filesystem df .
Data, single: total=68.03GiB, used=64.43GiB
System, RAID1: total=32.00MiB, used=20.00KiB
Metadata, RAID1: total=11.97GiB, used=8.83GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
GETTING RID OF COMPLETELY UNUSED BLOCK GROUPS¶Normally the balance operation needs a work space, to temporarily move the data before the old block groups gets removed. If there’s no work space, it ends with no space left.
There’s a special case when the block groups are completely unused, possibly left after removing lots of files or deleting snapshots. Removing empty block groups is automatic since 3.18. The same can be achieved manually with a notable exception that this operation does not require the work space. Thus it can be used to reclaim unused block groups to make it available.
# btrfs balance start -dusage=0 /path
This should lead to decrease in the total numbers in the btrfs filesystem df output.
EXIT STATUS¶Unless indicated otherwise below, all btrfs balance subcommands return a zero exit status if they succeed, and non zero in case of failure.
The pause, cancel, and resume subcommands exit with a status of 2 if they fail because a balance operation was not running.
The status subcommand exits with a status of 0 if a balance operation is not running, 1 if the command-line usage is incorrect or a balance operation is still running, and 2 on other errors.
|
https://manpages.debian.org/experimental/btrfs-progs/btrfs-balance.8.en.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
First solution in Clear category for Find Quotes by jusha
import re
def find_quotes(a):
    return re.findall(r'"(.*?)"', a)

assert find_quotes('count empty quotes ""') == ['']
print("Coding complete? Click 'Check' to earn cool rewards!")
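The key detail in this solution is the non-greedy .*? inside the capture group: with a greedy .* the pattern would match from the first quote to the last. A quick standalone comparison:

```python
import re

s = 'say "hello" and "world"'

# Non-greedy: each quoted span is matched separately
print(re.findall(r'"(.*?)"', s))   # ['hello', 'world']

# Greedy: one match stretching from the first quote to the last
print(re.findall(r'"(.*)"', s))    # ['hello" and "world']
```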
March 28, 2020
|
https://py.checkio.org/mission/find-quotes/publications/jusha/python-3/first/share/5fc7b3f54449f3b5ba7a2bca31b81344/
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
This keyword in Java
Java is an object-oriented, class-based language developed by Sun Microsystems. It has a diverse range of keywords that are used to perform predefined or internal actions in the code. We cannot use these keywords as names for variables, classes, objects, or any other identifiers. The programmer can simply use a keyword to perform the corresponding action. There are approximately 51 keywords in the Java language. Some of the most useful ones are abstract, break, case, char, class, continue, float, for, if, null, private, protected, try, this, void, while, etc. Out of these 51, 2 keywords are reserved for future use: 'const' and 'goto'.
One of these keywords is the 'this' keyword. When we work with an object of a class, this refers to the particular object whose method or constructor is being invoked. The keyword can be used inside instance methods of the class or in constructors. this() invokes a constructor of the current class, and this itself can be passed as an argument in a method call or a constructor call.
This keyword has the following properties:
- This keyword is used to get the current object in the program.
- The keyword is used to invoke the current object's methods.
- This keyword can be used to return the current object from a method.
- this() can be invoked in the current class constructor.
- This keyword can be passed as a parameter in a constructor call.
- The keyword can be passed as an argument in a method call.
- This keyword eliminates naming conflicts between a constructor's or method's parameters and the object's fields.
public class ClassX {   // Initialise the class
    int a;

    public ClassX(int a) {   // Constructor with a parameter
        this.a = a;
    }

    public static void main(String[] args) {
        ClassX myObj = new ClassX(5);   // Calling the constructor
        System.out.println("Value of a = " + myObj.a);
    }
}
Output :
Value of a = 5
Hence, the most useful property of the this keyword is that it removes the ambiguity between constructor parameters and class attributes that share the same name. Try rewriting the code above without the this keyword: you will get the output 0, because the assignment only rebinds the parameter instead of the field.
Example for better understanding of this keyword
class ClassA {   // Initialising class
    int x;
    int y;

    ClassA() {   // Default constructor
        x = 5;
        y = 10;
    }

    void display(ClassA obj) {
        System.out.println("x = " + obj.x + " y = " + obj.y);
    }

    void get() {   // Method that passes the current class instance
        display(this);
    }

    public static void main(String[] args) {
        ClassA object = new ClassA();
        object.get();
    }
}
Output :
x = 5 y = 10
Class: ClassA.
Instance variables: x and y.
Method get(): To get the data for x and y.
Difference between this and super keyword in Java
As we discussed in the previous section, the 'this' keyword is used to access members of the current class in Java, whereas the 'super' keyword accesses members of the parent class. We can never use either keyword as an identifier, as both are reserved words in Java.
The this keyword can also be used to access static members in Java (though accessing them through an instance reference is discouraged), whereas the super keyword is mainly used to access parent-class methods and parent-class constructors. this() and super() can never be used together in the same constructor: each must be its first statement, so combining them gives a compile-time error.
|
https://www.developerhelps.com/this-keyword-in-java/
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Training machine learning models is often a heavy and, above all, extremely time-consuming task. A trained model must therefore be serializable somewhere so that programs using it do not have to re-perform this long operation. This is called persistence, and frameworks such as Scikit-Learn, XGBoost and others provide for this type of operation.
With Scikit-Learn
If you are using Scikit-Learn, nothing is easier: you just use the dump and load functions and voila. Follow the guide…
First of all we will train a simple model (a good old linear regression):
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model

data = pd.read_csv("./data/univariate_linear_regression_dataset.csv")
plt.scatter(data.col2, data.col1)

X = data.col2.values.reshape(-1, 1)
y = data.col1.values.reshape(-1, 1)

regr = linear_model.LinearRegression()
regr.fit(X, y)
Then we will test our model thus trained with the predict() method:
regr.predict([[30]])
We get a forecast of 22.37707681
Now let’s dump our model. We will save it to a file (here monpremiermodele.modele):
from joblib import dump, load

dump(regr, 'monpremiermodele.modele')
The trained model is thus saved in a binary file. We can now imagine turning off our computer and turning it back on. We will then reactivate our model via the load() method, pointing at the file previously saved on disk:
regr2 = load('monpremiermodele.modele')
regr2.predict([[30]])
If we re-test the prediction with the same value as just after training we get – not by magic – exactly the same result.
As usual you will find the full code on Github .
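Under the hood, joblib's dump/load is a pickle-based serialization optimized for objects carrying large NumPy arrays. The same round trip can be sketched with nothing but the standard library's pickle module; the TinyModel class below is a toy stand-in for the trained regression (its coefficients are chosen only so that predict(30) lands near the forecast above), so the example stays dependency-free:

```python
import pickle

class TinyModel:
    """Toy stand-in for a trained estimator: y = a*x + b."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def predict(self, x):
        return self.a * x + self.b

model = TinyModel(a=0.7, b=1.37)
print(round(model.predict(30), 2))   # 22.37

# Persist the "trained" model to disk...
with open("tinymodel.modele", "wb") as fh:
    pickle.dump(model, fh)

# ...then, possibly in another process, reload and reuse it
with open("tinymodel.modele", "rb") as fh:
    model2 = pickle.load(fh)

# Same forecast, no retraining
assert model2.predict(30) == model.predict(30)
```

Note that pickle needs the class definition to be importable when reloading; joblib has the same constraint for custom estimators.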
With XGBoost
We’ve already seen this in the article on XGBoost, but here’s a little recap. The XGBoost library (in standalone mode) includes of course the possibility of saving and reloading a model:
boost._Booster.save_model('titanic.modele')
boost = xgb.Booster({'nthread': 4})
boost.load_model('titanic.modele')
With CatBoost
We did not mention this aspect in the article that presented the CatBoost algorithm. We are going to remedy this shortcoming, although, as you might expect, we will proceed in a slightly different way (well, on some details…).
To save a Catboost model:
cb.CatBoost.save_model(clf, "catboost.modele", format="cbm", export_parameters=None, pool=None)
You will notice that we have many more parameters and therefore possibilities when saving the model (format, export of parameters, training data, etc.). Do not hesitate to consult the documentation for a description of these parameters.
And to reload an existing model (from the file):
from catboost import CatBoostClassifier

clf2 = CatBoostClassifier()
clf2.load_model(fname="catboost.modele", format="cbm")
The nuance here is that it is the model object (clf2) that calls the load_model() method, not the CatBoost class.
And now you can prepare your models and reuse them directly (i.e. without retraining) from your programs or APIs.
|
http://aishelf.org/model-persistence/
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
FUNK OFF :chart_with_upwards_trend:
An over-powered engine to evaluate mathematical graph functions. :chart_with_upwards_trend:
About :books:
The other day I wanted to compute a number of values for a function formula. I wondered how I could implement a function that takes cubic, quadratic and linear functions fed in as a string and produces a list of Y values from a given list of Xs. Funk Off is that library. It supports linear, quadratic, and cubic functions!
Installation :inbox_tray:
Adding to your project
To add Funk Off to your project's dependencies, add this line to your project's
pubspec.yaml:
From GitHub
dependencies: ... funkoff: git: git://github.com/iamtheblackunicorn/FunkOff.git
From Pub.dev
dependencies: ... funkoff: ^1.4.0
Usage :hammer:
Importing
Import the engine-API like this:
import 'package:funkoff/funkoff.dart';
Note :scroll:
- Funk Off :chart_with_upwards_trend: by Alexander Abraham :black_heart: a.k.a. "The Black Unicorn" :unicorn:
- Licensed under the MIT license.
|
https://pub.dev/documentation/funkoff/latest/
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
05 Feb 2021 03:39 AM
Hi
where i can find a step by step procedure to monitor Cloud Run in Google public Cloud with Dynatrace ?
This post... is not very clear
regards
05 Feb 2021 03:55 AM
I guess here's how you can start:...
05 Feb 2021 04:03 AM
Hi
i need specific step-by-step information. when i connect to the google cloud console for cloud run i cannot create a dynatrace namespace as written in the documentation....
05 Feb 2021 04:33 AM
Have you configured your kubectl for Google cloud?
05 Feb 2021 04:44 AM
I don't like to guess Radoslaw, i need a procedure to monitor Cloud Run on Google Platform.
I don't understand if i have to use the
regards
05 Feb 2021 04:47 AM
And I don't know what environment you have to help you. Maybe reach out to Dynatrace ONE so they guide you?
07 May 2021 07:41 AM
|
https://community.dynatrace.com/t5/Dynatrace-Open-Q-A/How-can-i-monitor-a-Google-Cloud-Run-environment-in-Dynatrace/m-p/121516/highlight/true
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
If that's true then why did you show something different?
If that's true then why did you show something different?
I doubt that very much. You are clearly a braindead moron. :cool:
x is supposed to be an integer, and presumably positive even though it doesn't specify.
What are you babbling about? Don't tell people what they can and can't do. You're just a guest here.
(BTW, the "Reply With Quote" button worked this time, but it sometimes doesn't.)
It seems well-defined to me, although (x - (x - 1)) is a strange way to write 1. And the answer for x=5 is not 13.333334 but 13.3333333... (so no possibility of rounding up to a 4, really).
This...
This sounds totally idiotic.
You really don't seem to know what you're talking about (and I know assembly quite well).
Forget I said anything.
Maybe you meant something more like this.
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
int main() {
It can't be written as nested loops without totally changing what it does.
It is by nature a single-dimensional loop.
Why do you think you want a double loop?
WTF?
I didn't realize you had a learning disability.
Nope. A "global" is a variable that is accessible throughout the program, in all translation units. Although a global never gets "destroyed" until the program ends (when else would it possibly be...
What a moron.
Your quest is pointless.
You are an idiot.
Those are the facts.
This is pointless.
The problem is that you make wrapLibraryFunc static in the header file.
This causes it to not be seen outside of it's translation unit.
Get rid of the static.
Also, your char* parameters should...
You are probably stepping on memory you shouldn't somewhere.
The different memory layouts of debug and release mode with or without inlining is what is causing the different behavior.
Have you...
All of your return paths return 1.
Presumably the last one should return 0. :)
You need to compile with full warnings and fix all of them. Here are a couple of mistakes:
if (error = mkfifo(fifo , 0666) == -1) {
...
if (error = pthread_create(calc_arr[i],...
You aren't allocating any memory for the visited array.
Nothing is happening.
Your machine (our common Intel/AMD machines) are little endian.
So the value 0xd8754a52 is stored in memory with the "little end" (least significant byte) "first" (lowest) in...
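The byte layout being described can be verified from any language; here is a quick sketch using Python's struct module (the value 0xd8754a52 is the one from the post):

```python
import struct

value = 0xd8754a52

little = struct.pack("<I", value)   # little-endian: least significant byte first
big = struct.pack(">I", value)      # big-endian: most significant byte first

print(little.hex())   # 524a75d8 -> 0x52 sits at the lowest address
print(big.hex())      # d8754a52
```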
"Conversion loses qualifiers" suggests that the "const" is causing the difference.
Well, "triangle" has an 'n' in it.
And you aren't storing "points" but side lengths.
Also, the exercise says to pass a triangle to the function, not it's separate side lengths.
Maybe something like this. I couldn't test it since you didn't give me a full program. I'm assuming "index_stock" is what might better be called stock_size, the current number of active elements in...
That's the difference between "text" and "binary" modes in both C and C++.
In text mode, the '\r' chars are removed.
In binary mode they aren't.
The OP's problem counting newlines had to do with...
No, '\n' is not "twice as large".
The problem is that you are reading a file with Windows line endings ('\r' '\n') in text mode, so your fread will never see the '\r' characters.
However, the fseek...
GetTextMetrics has nothing to do with C++. It's a Windows API function that would appear to do what you want. Is your program a Windows GUI program?
|
https://cboard.cprogramming.com/search.php?s=4034049aa1b526c9381d3a7af26c751e&searchid=7058517
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Hello All,
i have a task in which i have to design a time scheduler in java in which i can automate batch file execution for the whole day.
For example i should have a GUI in which i can see a calendar and assign the time slot 07:00 to 09:00 for batch1.bat execution, and 09:30 to 10:30 for batch2.bat execution. Therefore when i open it again i can see which time slot is free to assign another execution.
Please help me on this.
hi friend,
Try the following code; it may be helpful for you. The code given below will help you understand how to execute a batch file on a schedule.
package net.roseindia.schedular;

import java.io.IOException;
import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.TimeUnit;

class Task extends TimerTask {
    public void run() {
        try {
            Runtime.getRuntime().exec("cmd.exe /c start C:\\BatchFile\\run.bat");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

public class JavaSchedular {
    public static void main(String args[]) throws InterruptedException {
        Timer timer = new Timer();
        Date date = new Date();
        Task task = new Task();
        timer.schedule(task, date, TimeUnit.SECONDS.toMillis(2));
    }
}
Thanks.
Thanks a lot for the reply. Is it possible for the user to have a dialog box in which he will enter the batch file paths and the date and time for execution?
After this he should press a button EXECUTE and after that the batch file should run in the entered Time.
Please help me on this as i do not have much experience with GUI and am therefore struggling with this task... :(
I am waiting for your solution.
|
https://www.roseindia.net/answers/viewqa/Java-Beginners/30559-Time-schedular-for-multiple-batch-file-execution-in-java.html
|
CC-MAIN-2021-31
|
en
|
refinedweb
|
Firstly, here is the code I used to find the bug.
Here, we defined a custom event handle called MyHandle. The run method doesn't actually do anything because I discovered the bug before I wrote any handling code. In the main program, we set the event manager to threaded mode. Next, we subscribe our custom event handle to the EventSetPush event. This means that every time Set.push() is invoked, so is MyHandle.run() (in a new thread, since we are running in threaded mode here). We then create two set instances and push some data onto each set. Finally, we print the underlying Python lists associated with each set instance.
#Example; boduch Set bug.

from boduch.data import Set
from boduch.handle import Handle
from boduch.event import subscribe, threaded, EventSetPush

class MyHandle(Handle):
    def __init__(self, *args, **kw):
        Handle.__init__(self, *args, **kw)

    def run(self):
        pass

if __name__ == "__main__":
    threaded(True)
    subscribe(EventSetPush, MyHandle)
    set_obj1 = Set()
    set_obj2 = Set()
    set_obj1.push("data1")
    set_obj2.push("data2")
    print "SET1", set_obj1.data
    print "SET2", set_obj2.data
Here is my initial output.

SET1 ['data1', 'data2']
SET2 ['data1', 'data2']

Slightly different from what was expected. Each set instance should have had one element. Instead, the lists look identical. Naturally, I assumed that they were the same list. This led me to start examining the thread manager, thinking that since I was testing in threaded mode, there must be some sort of cross-thread data contamination. Thankfully, the problem got much simpler since I was able to eliminate this as a potential cause. Next in line, the event manager. I tried everything to try and prove that the Set instances were in fact the same instance. Not so. The instances had different memory addresses.
I then realized that Set inherits from Type but the constructor for type is not invoked. Odd. I tried to think of a reason why I would want to inherit something with no static functionality and not initialize it. I think I may have done this because the underlying list instances of Set objects are stored in an attribute called data. Type instances also define a data attribute. I must have thought, during the original implementation, that defining a data attribute for the Set class would have some adverse effect on the Type functionality. Not so. So now, the Type constructor is invoked but with no parameters. This means that the initial value of the Set.data attribute is actually an empty dictionary since this is what the Type constructor will initialize it as. The Set constructor will then initialize the data attribute to a list accordingly.
This however, wasn't the problem either. I was still getting the strange output that pointed so convincingly at the fact that the Set.data attribute was pointing to the same list instance. So, I took a look at the way in which the data attribute is initialized for Set instances. The Set constructor will accept a data keyword parameter. The default value of this parameter is an empty list. This parameter then becomes the Set.data attribute. Just for fun, I decided to take away this parameter and have the data attribute be initialized as an empty list inside the constructor.
Sure enough, that did it. I got the correct output for my two set instances. The data attribute must have been pointing to the same keyword parameter variable. I have a feeling that this may be caused somewhere in the event manager. Or maybe not. I haven't tested this scenario outside the library yet.
I'm going to get this release out as soon as possible. The data keyword parameter for the Set constructor will be removed for now. As a side note, this will also affect the iteration functionality for Hash instances in the next release since the Hash.__iter__() method will return a SetIterator instance containing the hash keys. Each key will simply need to be pushed onto the set instead.
If I understood correctly:
In [66]: def some_func(data=[]):
   ....:     data.append(1)
   ....:     return data
In [68]: some_func()
Out[68]: [1]
In [69]: some_func()
Out[69]: [1, 1]
In [70]: some_func()
Out[70]: [1, 1, 1]
Mutable types should not be used as default arguments!
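Exactly. A common fix for this pitfall, sketched here with the same hypothetical some_func name, is to default the parameter to None and create the list inside the function:

```python
def some_func(data=None):
    # Use None as the sentinel and build a fresh list per call,
    # instead of sharing one default list object across all calls.
    if data is None:
        data = []
    data.append(1)
    return data

print(some_func())  # [1]
print(some_func())  # [1] -- a new list each time, not [1, 1]
```

Because the default expression is evaluated once at function definition time, `data=[]` creates a single shared list; the `None` idiom defers list creation to call time.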
http://www.boduch.ca/2009/03/interesting-bug-found-in-boduch-python.html
From: Johan Nilsson (r.johan.nilsson_at_[hidden])
Date: 2006-11-10 08:56:29
Christopher Kohlhoff wrote:
> Hi Johan,
>
> Johan Nilsson <r.johan.nilsson_at_[hidden]> wrote:
> [...]
>> 2) I've implemented similar code previously, but only designed
>> for portability with linux and Win32. What I very often use,
>> is something corresponding to retrieving errno or calling
>> GetLastError (on Win32).
>>
>> With the current design, it is harder to implement library
>> code throwing system_errors (retrieving error codes) as you'll
>> need to get the corresponding error code in a
>> platform-specific way. Or did I miss something?
>
> I think the intention is that platform specific library
> implementations will initialise the error_code in a
> platform-specific way. At least, this is what asio does.
Boost.Asio as such isn't platform-specific. Wouldn't you (as a library
implementor) prefer to do as much as possible in a portable way?
>
>> 4) I'm wondering about best practices for how to use the
>> error_code class portably while maintaining as much of the
>> (native) information as possible. Ideally, it should be
>> possible to check for specific error codes portably, while
>> having the native descriptions(messages) available for as much
>> details as possible. Using native_ecat and compare using
>> to_errno?
>
> Asio and its TR2 proposal provide a set of global constants of
> type error_code that can be used for portably checking for well
> known error codes. See sections 5.2 and 5.3.2.6 in N2054 for
> more detail:
>
>
I was looking for something like that, yes. Shouldn't that part be included
(but more exhaustively) in the Diagnostics proposal?
Also, I would personally prefer names that are less likely to clash with
others if the containing namespace is brought into scope by using namespace
<ns containing error codes>; error_access_denied,
error_address_family_not_supported etc.
Regards,
Johan
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2006/11/113060.php
Looks like the venerable MD5 cryptographic hash has developed a crack: A real MD5 collision. A team has published two different input streams which hash to the same MD5 value. Of course, because of the pigeonhole principle, everyone knew this had to happen. But no one had ever found a pair before.
Now that they have, researchers will be working on the question of whether it is feasible to compute, for any given input stream, a different stream with the same hash. If that happens, then MD5 is useless cryptographically, and a lot of infrastructure will have to be thrown out, but not before a bunch of bad stuff (like theft and fraud) happens.
Mark Pilgrim provides this Python program to demonstrate:
# see
a = "\xd1\x31\xdd\x02\xc5\xe6\xee\xc4\x69\x3d\x9a\x06\x98\xaf\xf9\x5c" \
"\x2f\xca\xb5\x8771\x41\x5a" \
"\x08\x51\x25\xe8\xf7\xcd\xc9\x9f\xd9\x1d\xbd\xf2b4a8\x0d\x1e" \
"\xc6\x98\x21\xbc\xb6\xa8\x83\x93\x96\xf9\x65\x2b\x6f\xf7\x2a\x70"
b = "\xd1\x31\xdd\x02\xc5\xe6\xee\xc4\x69\x3d\x9a\x06\x98\xaf\xf9\x5c" \
"\x2f\xca\xb5\x07f1\x41\x5a" \
"\x08\x51\x25\xe8\xf7\xcd\xc9\x9f\xd9\x1d\xbd\x723428\x0d\x1e" \
"\xc6\x98\x21\xbc\xb6\xa8\x83\x93\x96\xf9\x65\xab\x6f\xf7\x2a\x70"
print a == b
from md5 import md5
print md5(a).hexdigest() == md5(b).hexdigest()
Running it prints:
False
True
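As a side note, the same kind of digest check can be reproduced today with the hashlib module; here is a small sketch against the RFC 1321 test vector for "abc" (independent of the collision inputs above):

```python
import hashlib

# RFC 1321 test vector: MD5("abc") has a well-known digest.
digest = hashlib.md5(b"abc").hexdigest()
print(digest)  # 900150983cd24fb0d6963f7d28e17f72
```

The old `md5` module used in the program above was deprecated in favor of `hashlib` starting with Python 2.5.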
https://nedbatchelder.com/blog/200408/md5_collisions.html
SYNOPSIS
use Mango;
# Declare a Mango helper
sub mango { state $m = Mango->new('mongodb://localhost:27017') }
# or in a Mojolicious::Lite app
helper mango => sub { state $m = Mango->new('mongodb://localhost:27017') };
# Insert document
my $oid = mango->db('test')->collection('foo')->insert({bar => 'baz'});
# Find document
my $doc = mango->db('test')->collection('foo')->find_one({bar => 'baz'});
say $doc->{bar};
# Update document
mango->db('test')->collection('foo')
->update({bar => 'baz'}, {bar => 'yada'});
# Remove document
mango->db('test')->collection('foo')->remove({bar => 'yada'});
# Insert document with special BSON types
use Mango::BSON ':bson';
my $oid = mango->db('test')->collection('foo')
->insert({data => bson_bin("\x00\x01"), now => bson_time});
# Non-blocking concurrent find
my $delay = Mojo::IOLoop->delay(sub {
my ($delay, @docs) = @_;
...
});
for my $name (qw(sri marty)) {
my $end = $delay->begin(0);
mango->db('test')->collection('users')->find({name => $name})->all(sub {
my ($cursor, $err, $docs) = @_;
$end->(@$docs);
});
}
$delay->wait;
# Event loops such as AnyEvent are supported through EV
use EV;
use AnyEvent;
my $cv = AE::cv;
mango->db('test')->command(buildInfo => sub {
my ($db, $err, $doc) = @_;
$cv->send($doc->{version});
});
say $cv->recv;
DESCRIPTION
For MongoDB 2.6 support, use Mango 1.16.
To learn more about MongoDB you should take a look at the official documentation <>, the documentation included in this distribution is no replacement for it.
Look at Mango::Collection for CRUD operations.
Many arguments passed to methods as well as values of attributes get serialized to BSON with Mango::BSON, which provides many helper functions you can use to generate data types that are not available natively in Perl. All connections will be reset automatically if a new process has been forked, this allows multiple processes to share the same Mango object safely.
For better scalability (epoll, kqueue) and to provide IPv6, SOCKS5 as well as TLS support, the optional modules EV (4.0+), IO::Socket::IP (0.20+), IO::Socket::Socks (0.64+) and IO::Socket::SSL (1.84+) will be used automatically if they are installed. Individual features can also be disabled with the "MOJO_NO_IPV6", "MOJO_NO_SOCKS" and "MOJO_NO_TLS" environment variables.
EVENTS
Mango inherits all events from Mojo::EventEmitter and can emit the following new ones.
connection
$mango->on(connection => sub { my ($mango, $id) = @_; ... });
Emitted when a new connection has been established.
ATTRIBUTES
Mango implements the following attributes.
default_db
my $name = $mango->default_db; $mango = $mango->default_db('test');
Default database, defaults to "admin".
hosts
my $hosts = $mango->hosts; $mango = $mango->hosts([['localhost', 3000], ['localhost', 4000]]);
Servers to connect to, defaults to "localhost" and port 27017.
inactivity_timeout
my $timeout = $mango->inactivity_timeout; $mango = $mango->inactivity_timeout(15);
Maximum amount of time in seconds a connection can be inactive before getting closed, defaults to 0. Setting the value to 0 will allow connections to be inactive indefinitely.
ioloop
my $loop = $mango->ioloop; $mango = $mango->ioloop(Mojo::IOLoop->new);
Event loop object to use for blocking I/O operations, defaults to a Mojo::IOLoop object.
j
my $j = $mango->j; $mango = $mango->j(1);
Wait for all operations to have reached the journal, defaults to 0.
max_bson_size
my $max = $mango->max_bson_size; $mango = $mango->max_bson_size(16777216);
Maximum size for BSON documents in bytes, defaults to 16777216 (16MB).
max_connections
my $max = $mango->max_connections; $mango = $mango->max_connections(5);
Maximum number of connections to use for non-blocking operations, defaults to 5.
max_write_batch_size
my $max = $mango->max_write_batch_size; $mango = $mango->max_write_batch_size(1000);
Maximum number of write operations to batch together, defaults to 1000.
protocol
my $protocol = $mango->protocol; $mango = $mango->protocol(Mango::Protocol->new);
Protocol handler, defaults to a Mango::Protocol object.
w
my $w = $mango->w; $mango = $mango->w(2);
Wait for all operations to have reached at least this many servers, 1 indicates just primary, 2 indicates primary and at least one secondary, defaults to 1.
wtimeout
my $timeout = $mango->wtimeout; $mango = $mango->wtimeout(1);
Timeout for write propagation in milliseconds, defaults to 1000.
METHODS
Mango inherits all methods from Mojo::Base and implements the following new ones.
backlog
my $num = $mango->backlog;
Number of queued operations that have not yet been assigned to a connection.
db
my $db = $mango->db; my $db = $mango->db('test');
Build Mango::Database object for database, uses ``default_db'' if no name is provided. Note that the reference ``mango'' in Mango::Database is weakened, so the Mango object needs to be referenced elsewhere as well.
from_string
$mango = $mango->from_string('mongodb://sri:[email protected]:3000/test?w=2');
Parse configuration from connection string.
get_more
my $reply = $mango->get_more($namespace, $return, $cursor);
Perform low level "GET_MORE" operation. You can also append a callback to perform operation non-blocking.
$mango->get_more(($namespace, $return, $cursor) => sub {
  my ($mango, $err, $reply) = @_;
  ...
});
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
kill_cursors
$mango->kill_cursors(@ids);
Perform low level "KILL_CURSORS" operation. You can also append a callback to perform operation non-blocking.
$mango->kill_cursors(@ids => sub {
  my ($mango, $err) = @_;
  ...
});
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
new
my $mango = Mango->new; my $mango = Mango->new('mongodb://sri:[email protected]:3000/test?w=2');
Construct a new Mango object and parse connection string with ``from_string'' if necessary.
Note that it is strongly recommended to build your Mango object inside a helper function, as shown in the synopsis. This is because the Mango object's reference inside Mango::Database objects is weakened to avoid memory leaks. This means your Mango instance will quickly become undefined after you use the "db" method. So, use a helper to prevent that.
If a username and password are provided, Mango will try to authenticate using SCRAM-SHA1. Warning: this will require Authen::SCRAM which is not installed by default.
query
my $reply = $mango->query($namespace, $flags, $skip, $return, $query, $fields);
Perform low level "QUERY" operation. You can also append a callback to perform operation non-blocking.
$mango->query(($namespace, $flags, $skip, $return, $query, $fields) => sub {
  my ($mango, $err, $reply) = @_;
  ...
});
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
DEBUGGING
You can set the "MANGO_DEBUG" environment variable to get some advanced diagnostics information printed to "STDERR".
MANGO_DEBUG=1
SPONSORS
Some of the work on this distribution has been sponsored by Drip Depot <>, thank you!
AUTHOR
Sebastian Riedel, "[email protected]".
Current maintainer: Olivier Duclos "[email protected]".
CREDITS
In alphabetical order:
- alexbyk
- Andrey Khozov
- Colin Cyr
This program is free software, you can redistribute it and/or modify it under the terms of the Artistic License version 2.0.
http://manpages.org/mango/3
Here's the code:
//Calculator V2
#include <iostream>
#include <string>
#include <vector>
#include <cstdlib>
#include <cstdio>
#include <cstring>
using namespace std;

int main()
{
    vector<string> myList;
    string operation;
    char add[2] = "+";
    char substract[2] = "-";
    char multiply[2] = "*";
    char divide[2] = "/";
    cout << "Please write the operation you would like to do";
    getline(cin, operation);
    size_t pos = operation.find('+-*/^');
    myList.push_back(operation.substr(0, pos));
    myList.push_back(operation.substr(pos));
    myList.push_back(operation.substr(pos + 1));
    double num1 = atof(myList[0].c_str());
    double num2 = atof(myList[2].c_str());
    if (myList[1] == add) {
        cout << "The result is " << num1 + num2 << "\n";
    } else if (myList[1] == substract) {
        cout << "The result is " << num1 - num2 << "\n";
    } else if (myList[1] == multiply) {
        cout << "The result is " << num1 * num2 << "\n";
    } else if (myList[1] == divide) {
        cout << "The result is " << num1 / num2 << "\n";
    }
    cin.get();
    return 0;
}
It compiles fine, even though it gives me a warning: "character constants too long". So when I write the input it gives a runtime error and quits automatically.
Any help would be appreciated,
Thank You
https://www.daniweb.com/programming/software-development/threads/206790/need-help-with-program
Friend functions in namespaces
By noatom, in General and Gameplay Programming
https://www.gamedev.net/forums/topic/642781-friend-functions-in-namespaces/
This conversation has reminded me of something I started writing several
times in the past 8 months or so, an apache module to handle blocking /
notifying of infected hosts. While i appreciate what EarlyBird does, I
think its implementation could be improved (ie, grep'ing through a flat
file to see if a host has been blocked / notified before). So I started
on this module in perl. Initially, i had a configuration file that was
read in upon startup containing regexes matching .exe/.ida/../../../ /
etc. I went through a couple of versions using different methods to log
previous attacks so that the same admin wasn't notified multiple times
(flatfile originally, then berkeley db, then mysql), and then I stopped.
The average admin isn't going to want to run mysql (or any other db
daemon) on their box simply to not have to parse through webserver logs
anymore. So i think i'm going to go back and rewrite based on berkeley
db again. This is a request for input on what features you (admins)
would like / appreciate / wish for. Currently, this module does the
following:
1) logs the attack, and provides a event based handler for responding
(ie, firewall rules, realtime email/monitoring notification,
counterattack, etc)
2) once a night (via cron), the db is parsed, and email to admins is
prepared. No admin/abuse contact receives more than one email per night
(all hosts from that netblock are condensed into one report), and no one
is notified about a host more than once per week. These are all
configurable (not easily yet). There's also a email template file that
you can edit. the code that looks up admins via arin/apnic/etc is
currently real dirty; this actually has been the most difficult task
involved in the project.
And that's that. suggestions? ideas? one thing i was bouncing around was
a cgi-generated page that allows you to choose who gets notified and who
doesn't (like spamcop). I'm nervous about sending email unattended, even
though i've tested it a bit. So i'll probably have this ready for public
review sometime this weekend. I doubt i can get it in the Apache::
namespace though, but i'll let you all know when it's up in my cpan
directory. It may take longer than this because 1) i'm moving this week,
2) i have no dsl at my new place, and 3) i'm in the middle of a launch
at my day job, but we'll see.
-jon
--
jon@divisionbyzero.com ||
gpg key:
think i have a virus?:
"You are in a twisty little maze of Sendmail rules, all confusing."
http://mail-archives.apache.org/mod_mbox/httpd-users/200201.mbox/%3C1012337292.25224.29.camel@devotchka.sonicopia.com%3E
Provide script API for Stellarium global functions. More...
#include <StelMainScriptAPI.hpp>
Public slots in this class may be used in Stellarium scripts, and are accessed as member functions of the "core" scripting object. Module-specific functions, such as setting and clearing of display flags (e.g. LandscapeMgr::setFlagAtmosphere), can be accessed directly via the scripting object with the class name, e.g. by using the scripting command: LandscapeMgr.setFlagAtmosphere(true);
Preset states:
You should do this before the end of your script.
You should do this before the end of your script.
Wrapper for StelSkyDrawer::getBortleScaleIndex
"2008-03-24T13:21:01"
"0h1m68.2s"
The image is projected like a deep-sky object, with a notion for surface magnitude of the brightest parts. Transparent sections in the image are possibly rendered white, so make your image just RGB with black background. The black background covers the milky way, but is brightened by the Zodiacal light.
"12d 14m 8s" or "5h 26m 8s" - formats accepted by StelUtils::getDecAngle()).
Note that the edges will not be aligned with edges at center plus/minus size!
Parameters are the same as the version of this function which takes double values for the lon and lat, except here text expressions of angles may be used.
The move will run in AltAz coordinates. This will look different from moveToRaDec() when timelapse is fast. angles may be specified in a format recognised by StelUtils::getDecAngle()
The move will run in equatorial coordinates. This will look different from moveToAltAzi() when timelapse is fast. angles may be specified in a format recognised by StelUtils::getDecAngle()
Note that you may need to use a key sequence like 'Ctrl-D,R' or the GUI to resume script execution.
Subsequent playSound calls will resume playing from the position in the file when it was paused.
Subsequent playVideo() calls will resume playing from the position in the file when it was paused.
The video appears out of fromX/fromY, grows within popupDuration to size finalSizeX/finalSizeY, and shrinks back towards fromX/fromY at the end during popdownDuration.
This is required to allow reading with other program on Windows while output.txt is still open.
Wrapper for StelSkyDrawer::setBortleScaleIndex Valid values are in the range [1,9]
"2008-03-24T13:21:01"
Note this only applies to GUI plugins which provide the public slot "setGuiVisible(bool)".
Usually this minimum will be switched to after there are no user events for some seconds, to save power. However, it can be useful to set this to a high value to improve playing smoothness in scripts.
in wide cylindrical panorama screens to push the horizon down and see more of the sky.
Use this e.g. in startup.ssc. Implemented for 0.15 for a setup with 5 projectors with edge blending. The 9600x1200 get squeezed somewhat which looks a bit odd. Use this stretch to compensate. Experimental! To avoid overuse, there is currently no config.ini setting available.
This resets the position in the sound to the start so that subsequent playSound calls will start from the beginning.
This resets the position in the video to the start so that subsequent playVideo() calls will start from the beginning.
This function will take into account the rate (and direction) in which simulation time is passing. e.g. if a future date is specified and the time is moving backwards, the function will return immediately. If the time rate is 0, the function will not wait. This is to prevent infinite wait time.
http://stellarium.org/doc/0.17/classStelMainScriptAPI.html
1.1 Introduction to Java Immutable
Immutable means that the content of an object cannot be changed once it is created. In Java, String is immutable and its content cannot be changed.
By the statement "content cannot be changed" we mean that whenever you try to update the content of a string, a new String object is created and returned.
We know there are several methods available, like concat, which are used to update a String, but such methods return a new String.
1.2 Example
Let's use the concat method; we can see that the initial string content remains the same.
public class Test {
    public static void main(String[] args) {
        String str = "This is the initial String ";
        System.out.println("INITIAL STRING=" + str);
        String str1 = str.concat("APPEND");
        System.out.println("INITIAL STRING POST CONCAT=" + str);
        System.out.println("RETURNED STRING=" + str1);
    }
}
Output
INITIAL STRING=This is the initial String 
INITIAL STRING POST CONCAT=This is the initial String 
RETURNED STRING=This is the initial String APPEND
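For comparison, Python strings behave the same way: a concatenation returns a new object and leaves the original untouched. A small sketch mirroring the Java example:

```python
s = "This is the initial String "
s1 = s + "APPEND"  # concatenation returns a brand-new string object

print(s)   # unchanged: This is the initial String 
print(s1)  # This is the initial String APPEND
```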
http://www.wideskills.com/java-tutorial/java-immutable
I have a problem. I want to write an application using long running transactions.
My use case has several steps:
1. I want to create and persist an object
2. This object is subsequently changed
3. At the end I want to save (flush) changes to the database.
To achieve this behavior I tried to use a persistence context in EXTENDED mode. I have written a stateful session bean which implements all the steps. This did not work, so I looked at how the TrailBlazer application does it.
In the module Runtime services → Application transactions there is an example of a stateful session bean running with a persistence context in extended mode.
@Stateful
public class ApptransCalculator implements Calculator, Serializable {
@PersistenceContext(
type=PersistenceContextType.EXTENDED
)
protected EntityManager em;
// ... ...
}
In the example the flush and commit to the database should happen only when the checkout method is called. In example the checkout method is called in the update2.jsp (check.jsp). I have repacked the EJB3Trail.ear with some changes:
1. I enabled show_sql property for Hibernate.
2. I have changed the datasource to oracle.
I monitored the changes in the database after each step. I found out that it does not behave as the example is trying to show. The click on the update button on update.jsp (calculator.jsp) writes to the database. Now the question is: am I misunderstanding something, or is this just not working as it should? As I understand it, the flush should happen when the checkout method (which is annotated with @Remove) is called.
Thanks
Peter Repinc
What is your id generator? What is your underlying DB?
Are all the methods except checkout run outside a transaction?
https://developer.jboss.org/thread/107104
selectOneMenu ExampleArnaud Morel Aug 11, 2009 3:30 PM
Hello,
I'm beginning with Seam and I've looked at examples but none cover my needs. In an application, you usually have some dropdownlists for references like country, town, ...
In DvD application example, I found a category dropdownlist, but it is tied to components.xml. Well, I'm trying to do it in a bean. Here is my code:
JSF :
<h:selectOneMenu
  <s:selectItems value="#{pays.list}" var="pays" label="#{pays
Bean:
@Factory("list")
public List<ReferentielpaysBO> getList() {
    this.list = referenceService.retrievePays();
    return this.list;
}
public void setList(List<ReferentielpaysBO> list) {
    this.list = list;
}
I haven't shown referenceService because it is working fine (setting a break point after service call allowed me to control and validate data).
After resuming execution, I have this error:
value of context variable is not an instance of the component bound to the context variable: pays. If you are using hot deploy, you may have attempted to hot deploy a session or application-scoped component definition while using an old instance in the session.
I guess it is due to bad use of annotations. I have troubles understanding some of annotations and documentation is sometimes unclear for a beginner (both in Java and Seam).
Any idea?
1. Re: selectOneMenu ExampleNikos Paraskevopoulos Aug 11, 2009 4:13 PM (in response to Arnaud Morel)
Hello,
The usage of <s:selectItems> seems suspicious:
The component and the iteration variable both have the same name
pays. Could you change the name of one (e.g. the iteration variable) and see what happens, just in case...
Additionally, you outject the list field of the pays component but access it through pays: #{pays.list}. This is redundant (I suggest you change the select items to: <s:selectItems value="#{list}" .../>).
Last, from the code it seems that the signature of #{individuAction.idpays} should be:
public void setIdpays(ReferentielpaysBO x) {...} public ReferentielpaysBO getIdpays() {...}
I.e. idpays is a property of type ReferentielpaysBO, same as the list.
2. Re: selectOneMenu ExampleArnaud Morel Aug 11, 2009 4:52 PM (in response to Arnaud Morel)
Thanks for the help. I have changed my code following your instructions but it doesn't work. I have another error, though.
JSF:
<h:selectOneMenu
  <s:selectItems value="#{listePays}" var="pay" label="#{payePays
Bean:
@Factory("listePays")
public List<ReferentielpaysBO> getListePays() {
    this.listePays = referenceService.retrievePays();
    return this.listePays;
}
public void setListePays(List<ReferentielpaysBO> listePays) {
    this.listePays = listePays;
}
I have renamed list to listePays. I guess @Out contexts have to be unique. So listePays is more explicit than list.
In my JSF, I just refer to listePays as you suggested.
Relating to stored value, I'll see later. It should print my menu at least (I've tried with some selectItem and it worked.)
My final goal is to store the selected country in the people bean (individuAction). individuAction is a bean used to view and edit IndividuadresseBO. But I haven't finished this part yet. I'm starting with simple things like selectOneMenu...
For information, here is the BO:
@Entity
@Table(name = "INDIVIDUADRESSE")
@NamedQueries({
    @NamedQuery(name = ALL, query = "from IndividuadresseBO")
})
@Name("individuadresseBO")
public class IndividuadresseBO implements java.io.Serializable {
    public class QN {
        public static final String ALL = "IndividuadresseBO.all";
    }
    private static final long serialVersionUID = 1L;
    @Id
    @Column(name = "IDINDIVIDU", nullable = false)
    private Long idindividu;
    ...
    @ManyToOne
    @JoinColumn(name = "IDPAYS")
    @ForeignKey(name = "FK_INDADRESSE_REFPAYS")
    @Fetch(FetchMode.JOIN)
    private ReferentielpaysBO pays;
3. Re: selectOneMenu ExampleArnaud Morel Aug 11, 2009 4:55 PM (in response to Arnaud Morel)
I forgot to print my new error:
Could not instantiate Seam component: org.jboss.seam.ui.entityLoader
In console:
Can not set static javassist.util.proxy.MethodFilter field org.jboss.seam.ui.JpaEntityLoader_$$_javassist_seam_10._method_filter to org.jboss.seam.Component$1
Can not set static javassist.util.proxy.MethodFilter field org.jboss.seam.ui.JpaEntityLoader_$$_javassist_seam_10._method_filter to org.jboss.seam.Component$1
4. Re: selectOneMenu ExampleArnaud Morel Aug 12, 2009 7:37 AM (in response to Arnaud Morel)
Nobody can help me on this?
If I could have find somewhere a working example of a selectOneMenu...
I tried this example :
But it doesn't work!
5. Re: selectOneMenu ExampleNikos Paraskevopoulos Aug 12, 2009 9:26 AM (in response to Arnaud Morel)
Hello again,
Have you completed the settings required for <s:convertEntity> in components.xml, as described in ch.32 of the documentation? Could you show the relevant code?
Could you show the code of the individuAction component, especially the part related to idpays?
6. Re: selectOneMenu ExampleArnaud Morel Aug 12, 2009 9:56 AM (in response to Arnaud Morel)
It worked!
You were right about s:convertEntity. I had read nowhere that I needed to set up components.xml.
In fact, Seam is meant to have very little xml configuration... But I admit that I haven't read the documentation yet.
So, I've just removed convertEntity and it worked immediately :)
A big thank you for your precious help!
7. Re: selectOneMenu ExampleArnaud Morel Aug 17, 2009 5:35 PM (in response to Arnaud Morel)I'm still having problems with selectOneMenu.
As said in my last post, my list is now working : I can load my page without any error, and the list is working as intended.
But if I select something and validate, I get this error:
Erreur de conversion quand la valeur com.pndata.pnasso.business.bo.ReferentielpaysBO@2b9789b9 est commise pour le modele null Converter.
(in English, something like: Conversion error when value ... is committed for model null Converter.)
I suppose I need to add <s:convertEntity /> to my selectOneMenu but well, I had this error when added:
Could not instantiate Seam component: org.jboss.seam.ui.entityLoader
I have followed this:
Read many posts relating to this problem, but I can't manage to solve this.
My entity manager is declared in my service:
@PersistenceContext
private EntityManager em;
I guess I will use an id instead of object for the value, so no converter are needed...
8. Re: selectOneMenu ExampleJaime Martin Aug 18, 2009 11:26 AM (in response to Arnaud Morel)
Have you tried a new converter? Not s:convertEntity, but a new one specific to your type; then use it this way inside the h:selectOneMenu code block:
<f:converter
And don't forget to configure it in faces-config.xml
9. Re: selectOneMenu ExampleLeo van den berg Aug 18, 2009 1:15 PM (in response to Arnaud Morel)
Hi Arnaud,
Be aware that Seam intercepts stuff with the @In annotation, meaning that a reference to the EntityManager annotated with @PersistenceContext is NOT handled by Seam but by the EJB container.
Try changing the annotation to @In and see if that helps. The convertEntity Seam tag is very useful if you want direct entity handling in your pages.
https://developer.jboss.org/thread/189186
The story of the Python programming language began at the end of 1982, when Guido van Rossum joined the team developing the ABC language. After the termination of the ABC project, he joined the Amoeba OS team, where he worked on a simple project in his free time. Python is the result of the work he performed in that spare time. The goal of this programming language was to be flexible and general-purpose.
Python is simple to use, easy to read, and supports multiple programming paradigms, such as object-oriented, functional, and parallel programming. It is also supported by a vast community that has created a variety of open-source libraries centered on Python. A brief list of these third-party libraries includes:
1. NumPy: Used for scientific programming such as matrix calculations
2. NLTK: A toolkit for language processing for Python
3. PySerial: Gives the ability to use serial communication
4. PyGame: Helps to build games
5. PyBrain: Helps to build artificial intelligence
Differences between Python and C
Python is a general-purpose programming language that can be used to build anything from a web UI to desktop applications. It is also a dynamic language that manages memory automatically. On the other hand, since Python is interpreted at a higher level than C, it cannot compete with C in execution speed.
However, developing a program using Python may save a lot of time and resources because it is much simpler than C.
In Python, like PHP and Perl, it’s not necessary to define the types of variables. There is no type definition in Python while C needs to know variable types.
For example, in C, defining an integer variable looks like this:
int a=4;
The above line says that "a" is a variable in memory with the size of an integer. However, in Python, we can just declare a variable without mentioning its type:
a=1
The type of a variable is dynamic in Python and can change during run-time. The above code only states that "a" references a part of memory, and is interpreted like this:
1. A part of memory the size of an integer is created, because Python knows that "1" is an integer.
2. Python saves the name "a" in another part of memory.
3. A link is created stating that "a" references "1".
As mentioned above, Python can manage memory by itself, unlike C. In Python, every object has a reference count that records how many names reference it. Each new reference increments the count, and removing a reference decrements it. When an object's reference count drops to zero, Python eliminates the object from memory by itself.
This technique is called garbage collection (here, reference counting). By importing the sys module and using the getrefcount() function, we can find out how many references an object has. An example session is shown below:
>>> import sys
>>> a = 1
>>> sys.getrefcount(1)
760
>>> a = 2
>>> sys.getrefcount(1)
759
>>> sys.getrefcount(2)
96
>>> b = a
>>> sys.getrefcount(2)
97
Working with Lists in Python
Working with arrays is very simple in Python. The list type is the most flexible built-in object in Python. A list object is defined using [ ] and can hold members regardless of their type. For example, the code below defines a list holding a bit of everything:
L = [15, 3.14, 'string', [1, 2]]
With this list architecture, matrix implementation is very easy. Here's an example:
L = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
As stated above, by assigning a variable to another variable, Python simply adds a reference to the memory. This behavior is comparable to using pointers in C. For example, we could copy a list like this:
L1 = [1, 2, 3]
L2 = L1
If we change an element's value in the L2 list, that element changes in the L1 list, too. This is because L2 is nothing but another reference to the same list object as L1.
We can avoid this by making a copy of the L1 list and then assigning it to L2 like this:
L2 = L1[:]
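The aliasing and copying behaviors described above can be checked directly; a short sketch:

```python
L1 = [1, 2, 3]
L2 = L1             # alias: both names reference the same list object
L2[0] = 99
assert L1[0] == 99  # the change is visible through L1 as well

L1 = [1, 2, 3]
L2 = L1[:]          # slice copy: a brand-new list object
L2[0] = 99
assert L1[0] == 1   # the original is untouched
```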
As you can see, working with lists in Python is very easy and, as in Matlab, we can access all list elements by using [:]. Another useful type in Python is the dict object. A dict is a map of keys to values and can be defined as below:
d = {1:'One', 2:'Two', 3:'Three'}
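Looking up, adding, and removing dict entries works as the map analogy suggests; a minimal sketch using the dict defined above:

```python
d = {1: 'One', 2: 'Two', 3: 'Three'}
assert d[2] == 'Two'       # look up a value by its key
d[4] = 'Four'              # add a new key/value pair
del d[1]                   # remove an entry by key
assert sorted(d.keys()) == [2, 3, 4]
```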
To round out the introduction, here are some important syntax templates. This is how you define a conditional command:
if percent == 100:
    print('100 %')
elif percent >= 75:
    print('75-100 %')
elif percent >= 50:
    print('50-75 %')
elif percent >= 25:
    print('25-50 %')
else:
    print('less than 25 %')
This is how you can define a "for" loop:
for target in object:
    statements
This is how you can define a "while" loop:
while condition:
    statements
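A concrete instance of both templates, summing the same list with each loop form (the names here are illustrative):

```python
numbers = [1, 2, 3, 4]

total_for = 0
for n in numbers:        # "for target in object:"
    total_for += n

total_while = 0
i = 0
while i < len(numbers):  # "while condition:"
    total_while += numbers[i]
    i += 1

assert total_for == total_while == 10
```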
Another important difference is that in Python we don't need to use {} to mark a block. Python interprets blocks according to their indentation. As can be seen in Figure 1, lines with the same indentation are assumed to be in the same block; that is, all statements at the same distance from the left margin belong to the same block.
Figure 1. Python block indentation.
Working with Python in Raspberry Pi
The number of designers utilizing Raspberry Pi in advanced projects is increasing rapidly. Many use Raspbian OS for their Raspberry Pi because of its efficiency.
In Raspbian, we can benefit from Python's strengths by using the pre-installed RPi.GPIO library. In the following parts, we will examine using Python in Raspbian. First, we need to import the RPi.GPIO library like this:
import RPi.GPIO as GPIO
Addressing Raspberry Pi pins in the GPIO library can be done in two ways.
The first way is the BOARD option, which means we call pins according to the numbers printed on the Raspberry Pi's header. This numbering does not change between Raspberry Pi models.
The second option is BCM, where we call pins according to their Broadcom GPIO channel numbers, which can differ between models. For example, the Raspberry Pi 2 Model B pinout is shown in Figure 2:
Figure 2. Raspberry Pi 2 Model B pinout. Image courtesy of Raspberry Pi.
As you can see on the connector, pin 3 is assigned to GPIO2. To use this pin with the BOARD option, we call it pin 3; with the BCM option, we call it GPIO2. The following code sets the pin numbering mode:
# set up GPIO using BCM numbering
GPIO.setmode(GPIO.BCM)
# set up GPIO using BOARD numbering
GPIO.setmode(GPIO.BOARD)
After setting the pin modes, we can set their direction and also, if needed, set a pull-up or pull-down resistor for them. The following codes will do that:
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_UP)
One important thing is to clean up the GPIO state before leaving the program. If we don't, the pins will stay at their latest values. Calling GPIO.cleanup() does that for us.
Coding for Raspberry Pi with Python is very simple and helpful because you can write an application quickly and without a lot of specialized knowledge. For example, using RPi.GPIO's event detection, we can define an interrupt on a pin and set a callback (bouncetime debounces the input, in milliseconds):
GPIO.add_event_detect(7, GPIO.RISING, callback=do_something, bouncetime=100)
Development Resources
CodeSkulptor provides a very good environment for running Python scripts on the web, and it is a free tool!
When you launch the program, you are treated to a very useful example case that shows you how to run a simple GUI on the web using Python and the “simplegui” library.
Figure 3. The example. Image courtesy of Codeskulptor.
As you can see in Figure 3, an example which runs a simple GUI is presented. By running it, we can see the power of Python and its flexibility for running everywhere, even in web applications. The result of the example picture is shown in Figure 4.
Figure 4. Running the Codeskulptor example
You can access the demo section here to view other contributions.
Conclusion
In this brief introduction to Python, we examined some differences between Python and C and drew some examples of syntax to create a better understanding of Python. We then briefly learned how to build an application using the GPIO library for Raspberry Pi and, finally, we got familiar with Codeskulptor and its environment.
This is just a quick overview of some aspects of Python. If you'd like to see more articles on programming languages, let us know in the comments below!
Thanks for the overview. Python is a language I hope to learn in the future; it should be beneficial for going into computer networking.
Thanks. We will cover more of this language as a project in the future.
https://www.allaboutcircuits.com/technical-articles/better-know-your-programming-languages-introduction-to-python/
Opened 7 years ago
Closed 7 years ago
Last modified 7 years ago
#16105 closed Bug (invalid)
libgdal1-1.7.0 broke django.contrib.gis.gdal.libgdal
Description
importing django.contrib.gis.gdal.libgdal results in a segfault when I use libgdal1-1.7.0
Using 1.6 it doesn't.
Quickfix I made that worked was to install both but modify gdal.libgdal to prioritize 1.6.
This may be a problem with my libgdal maybe. Using debian-testing.
Change History (4)
comment:1 Changed 7 years ago by
comment:2 Changed 7 years ago by
This program:
from ctypes import CDLL
from ctypes.util import find_library
lib_path = find_library('gdal1.7.0')
lgdal = CDLL(lib_path)
_version_info = lgdalGDALVersionInfo?
Gives the following output:
Traceback (most recent call last):
File "test.py", line 4, in <module>
lgdal = CDLL(lib_path)
File "/usr/lib/python2.6/ctypes/__init__.py", line 353, in __init__
self._handle = _dlopen(self._name, mode)
OSError: /usr/lib/libgdal1.7.0.so.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
I'm using 1.7.3-4
I should find the right place to post this...
comment:3 Changed 7 years ago by
Sorry:
from ctypes import CDLL
from ctypes.util import find_library
lib_path = find_library('gdal1.7.0')
lgdal = CDLL(lib_path)
_version_info = lgdal['GDALVersionInfo']
comment:4 Changed 7 years ago by
Based on this traceback, I'd say it's a Debian bug, and I suggest reporting it there.
The error message is similar to.
In my opinion, your segfault could be caused by:
- ctypes: django.contrib.gis.gdal.libgdal uses this module in a very straightforward way to load libgdal1,
- libgdal1: you're using a testing version, after all.
Since ctypes is part of the standard library, I think you should take the offending code from django.contrib.gis.gdal.libgdal, extract the minimum failing test case, and post that to Python's bug tracker.
It might be as simple as:
(Disclaimer: I don't have access to a box running debian-testing, so I don't know if this is sufficient to exhibit the bug.)
Anyway, Django does not contain any C code; it's written entirely in Python. So while it can raise exceptions, it can't (in theory) cause segfaults. For this reason, I'm going to mark the bug as invalid.
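The ctypes pattern used throughout this ticket can be reproduced with any shared library; here is a sketch that loads the C math library instead of GDAL, so it runs without GDAL installed:

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# find_library/CDLL, exactly as in the ticket, but against libm
lib_path = find_library('m')
libm = CDLL(lib_path)

# declare the signature first, so ctypes converts arguments correctly
libm.sqrt.restype = c_double
libm.sqrt.argtypes = [c_double]
assert libm.sqrt(9.0) == 3.0
```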
https://code.djangoproject.com/ticket/16105
Piste Maps
Contents
Approval
Even though a vote never took place, this feature has been accepted by the community. In Europe, over 9,000 objects are tagged with these features.
Summary
This proposal is for a whole set of tags to describe piste maps. Formerly there were few possibilities to map a ski resort; some information is given here: WikiProject Piste Maps, Key:aerialway. JOSM already supports parts of the features proposed here, such as piste:difficulty and aerialway=station.
In winter 2005/2006 some tracks were recorded and piste maps were tagged. This proposal builds on that experience but takes into account the development of new additional techniques (such as areas) since then.
Tag prefix
Piste map-specific tags use the piste: namespace to avoid conflicts with similar tags used in other contexts (e.g., capacity, speed, classification). The term piste comes from the French word for track or trail and means a marked path or run down a mountain.
Areas
- landuse=winter_sports
Ski lifts
aerialway=* has already been accepted onto Map Features, but this proposal involves making the following changes to that key:
- Split gondola off as a new value, distinct from cable_car.
- Split aerialway=drag_lift into the several typically used types of surface lifts. All of these can be considered aerialways due to the aerial cable, even though they stay on the ground. aerialway=magic_carpet isn't an aerialway at all, but it fits better here than creating a new tag for it. This means that old aerialway=drag_lift tags are deprecated and should be changed.
- additionally add a pylon node value.
- add several attributes which aerialways may have.
The resulting set of values would be as follows:
Railways going up a hill are discussed and proposed separately: railway=incline.
Pistes
A piste may be a route through a variety of different terrains. It might consist of a meadow or a mountain road or part of a glacier. Therefore piste mapping has two levels.
- Underlying elements: meadow, mountain road, part of a glacier, etc. These might be defined as ways or areas.
- The underlying elements are grouped together by a relation route=piste.
In this way one piste can consist of multiple elements, and one element (road, etc.) can belong to several pistes (relations).
The tags applicable to the underlying elements are discussed below. For more information on the piste relation, see route=piste.
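As a sketch of how the two levels fit together, a downhill piste grouping two underlying ways could look like this in OSM XML (all IDs, the member ways, and the name are made-up placeholders, not real data):

```xml
<!-- hypothetical piste relation; negative IDs mark placeholder objects -->
<relation id="-1">
  <member type="way" ref="-101" role=""/> <!-- meadow section -->
  <member type="way" ref="-102" role=""/> <!-- mountain road section -->
  <tag k="type" v="route"/>
  <tag k="route" v="piste"/>
  <tag k="piste:type" v="downhill"/>
  <tag k="piste:difficulty" v="easy"/>
  <tag k="name" v="Example Run"/>
</relation>
```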
Surveying some piste types, like Nordic or skitour, leaves considerable freedom of judgement to the mapper as to what counts as a Nordic/skitour piste. It is suggested to tag routes recommended by some tourism authority and pistes used by many people during a season. But do not add pistes that are only used by very few people (e.g. only yourself), or that are dangerous (avalanches, etc.). Keep in mind that adding a piste is a recommendation to other people to use it.
Frequently groomed Nordic tracks are also (ab)used as hiking tracks, sledding tracks, and routes for skitouring. Some of these uses are unofficial. Currently there is no qualified way to tag dual-use pistes.
Difficulty
Note: to show the signposted color of pistes (notably for cross-country skiing and snowshoeing), users may want to use the Tag:route=piste proposal and keep the difficulty tag for information about the actual difficulty of the way.
Grooming
Intended for use with piste:type=nordic, piste:type=downhill (classic, mogul, backcountry), or piste:type=sled (classic, backcountry) to describe the style of piste preparation.
Many providers of ski preparation data draw a line between tracks groomed by snowmobile and those groomed by larger grooming machines.
Other features
If a ski trail is ungroomed because there is no one to groom it (e.g., a permanently closed commercial ski area), abandoned=yes may be used. The difference between backcountry, not groomed, and abandoned may seem fuzzy: the important difference is that on an abandoned piste low vegetation can be expected, and the lack of clearing could be due to issues with land owners.
Priority
Grooming condition and priority: ski trails can be marked with piste:grooming:priority=1-5 (or piste:priority=*?) to indicate the order in which they will be groomed.
As OSM is not directly suitable for live trail-condition monitoring, this allows rendering maps with a static way of depicting the ski trails most likely to be in the best shape under less fortunate snow conditions. One example of such a map in Norway uses different line widths for this.
Start counting from first priority downwards, and assume the priority is relative within each area.
Other
Routes
- Tag:route=ski
- Tag:route=piste (has a wider range of values for piste:type=* consistent with the Piste Maps Proposal)
Rental
- Ski school (building=yes, amenity=ski_school)
- Ticket office (shop=ticket / or modify office=travel_agent proposal for general ticket sale)
- Mountain restaurant (amenity=restaurant, building=yes)
- Mountain rescue and ski patrol (amenity=mountain_rescue)
- Mountain hut/refuge/lean to (amenity=shelter)
- Bobsleigh track (sport=bobsleigh)
- Piste maps/skiing maps (tourism=information, information=map, ski=yes)
- Ski playgrounds leisure=ski_playground
http://wiki.openstreetmap.org/wiki/Pist
Jann Horn <j...@thejh.net> writes:
> On Mon, Oct 17, 2016 at 11:39:49AM -0500,.
>
> This looks good! Basically applies the same rules that already apply to
> EUID/... changes to namespace changes, and anyone entering a user
> namespace can now safely drop UIDs and GIDs to namespace root.
Yes. It just required the right perspective, and it turned out to be
straightforward to solve. Especially since it is buggy today for
unreadable executables.

> This integrates better in the existing security concept than my old
> patch "ptrace: being capable wrt a process requires mapped uids/gids",
> and it has less issues in cases where e.g. the extra privileges of an
> entering process are the filesystem root or so.
>
> FWIW, if you want, you can add "Reviewed-by: Jann Horn
> <j...@thejh.net>".

Will do. Thank you.

Eric
https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1251086.html
Plugins are the common way for extending applications. They are usually implemented as DLLs. The host application locates the plugins (either by looking in a predefined folder, or by some sort of registry setting or configuration file) then loads them one by one with LoadLibrary. The plugins are then integrated into the host application which extends it with new functionality.
This article will show how to create a host EXE with multiple plugin DLLs. We'll see how to seamlessly expose any of the host's classes, functions and data as an API to the plugins. There will be some technical challenges that we are going to solve along the way.
We'll use a simple example. The host application host.exe is an image viewer. It implements a plugin framework for adding support for different image file formats (24-bit BMP and 24-bit TGA in this example). The plugins will be DLLs and will have extension .IMP (IMage Parser) to separate them from regular DLLs. Note however, that this article is about plugins, not about parsing images. The provided parsers are very basic and for demonstration purpose only.
There are many articles describing how to implement a simple plugin framework. See [1], [2] for examples. They usually focus on two approaches: exporting plain C functions that the host locates with GetProcAddress, and exposing C++ interfaces.
Interfaces are base classes where all member functions are public and pure virtual, and there are no data members. For example:
// IImageParser is the interface that all image parsers
// must implement
class IImageParser
{
public:
// parses the image file and reads it into a HBITMAP
virtual HBITMAP ParseFile( const char *fname )=0;
// returns true if the file type is supported
virtual bool SupportsType( const char *type ) const=0;
};
The actual image parsers inherit from the interface class and implement the pure virtual functions. The BMP plugin can look like this:
// CBMPParser implements the IImageParser interface
class CBMPParser: public IImageParser
{
public:
virtual HBITMAP ParseFile( const char *fname );
virtual bool SupportsType( const char *type ) const;
private:
HBITMAP CreateBitmap( int width, int height, void **data );
};
static CBMPParser g_BMPParser;
// The host calls this function to get access to the
// image parser
extern "C" __declspec(dllexport) IImageParser *GetParser( void )
{
return &g_BMPParser;
}
The host will use LoadLibrary to load BmpParser.imp, then use GetProcAddress("GetParser") to find the address of the GetParser function, then call it to get the IImageParser pointer.
The host keeps a list of all registered parsers. It adds the pointers returned by GetParser to that list.
When the host needs to parse a BMP file it will call SupportsType(".BMP") for each parser. If SupportsType returns true, the host will call ParseFile with the full file name and will draw the HBITMAP.
For complete sources see the Interface folder in the download file.
The base class doesn't really have to be a pure interface. Technically, the only constraint is that all members must be accessible through the object's pointer.
For example, all image parsers need the CreateBitmap function. It makes sense for it to be declared in the base class and implemented on the host side. Otherwise each parser DLL will have a copy of that function.
Another limitation of this approach is that you cannot expose any global data or global functions from the host to the plugins.
So how can we improve this?
Take a look at the USER32 module. It has two parts: user32.dll and user32.lib. The real code and data are in the DLL, and the LIB just provides placeholder functions that call into the DLL. The best part is that all of this happens automatically: you link with user32.lib and gain access to all functionality in user32.dll.
MFC goes a step further – it exposes whole classes that you can use directly or inherit. They do not have the limitations of the pure interface classes we discussed above.
We can do the same thing. Any base functionality you want to provide to the plugins can be put in a single DLL. Use the /IMPLIB linker option to create the corresponding LIB file. The plugins can then link with that library, and all exported functionality will be available to them. You can split the code between the DLL and the EXE any way you wish. In the extreme case shown in the sources the EXE only contains a one line WinMain function whose only job is to start the DLL.
Any global data, functions, classes, or member functions you wish to export must be marked as __declspec(dllexport) when compiling the DLL and as __declspec(dllimport) when compiling the plugins. A common trick is to use a macro:
#ifdef COMPILE_HOST
// when the host is compiling
#define HOSTAPI __declspec(dllexport)
#else
// when the plugins are compiling
#define HOSTAPI __declspec(dllimport)
#endif
Add COMPILE_HOST to the defines of the DLL project, but not to the plugin projects.
On the host DLL side:
// CImageParser is the base class that all image parsers
// must inherit
class CImageParser
{
public:
// adds the parser to the parsers list
HOSTAPI CImageParser( void );
// parses the image file and reads it into a HBITMAP
virtual HBITMAP ParseFile( const char *fname )=0;
// returns true if the file type is supported
virtual bool SupportsType( const char *type ) const=0;
protected:
HOSTAPI HBITMAP CreateBitmap( int width, int height,
void **data );
};
Now the base class is not constrained of being just an interface. We are able to add more of the base functionality there. CreateBitmap will be shared between all parsers.
This time, instead of the host calling a function to get the parser and add it to the list, that part is taken over by the constructor of CImageParser. When a parser object is created, its constructor automatically updates the list. The host no longer needs to use GetProcAddress to see which parser is in each DLL.
On the plugin side:
// CBMPParser inherits from CImageParser
class CBMPParser: public CImageParser
{
public:
virtual HBITMAP ParseFile( const char *fname );
virtual bool SupportsType( const char *type ) const;
};
static CBMPParser g_BMPParser;
When g_BMPParser is created, its constructor CBMPParser() will be called. That constructor (implemented on the plugin side) will call the constructor of the base class CImageParser() (implemented on the host side). That's possible because the base constructor is marked as HOSTAPI.
For complete sources see the DLL+EXE folder in the download file.
Wait, it gets even better:
Usually an import library is created only when making DLLs. It is a little-known trick that an import library can be created even for EXEs. In Visual C++ 6 the /IMPLIB option is not available directly for EXEs as it is for DLLs; you have to add it manually to the edit box at the bottom of the Link properties. In Visual Studio 2003 it is available in the Linker\Advanced section, you just have to set its value to $(IntDir)/Host.lib.
So there you go. You have a host EXE, a number of plugin DLLs, and you can share any function, class or global data in the host with all plugins. There is no need to use GetProcAddress at all, ever, since the plugins can register themselves with the host's data structures.
For complete sources see the EXE folder in the download file.
As the host application grows bigger you will want to split it into separate static libraries. And then you are going to hit a problem.
Let's say the constructor of CImageParser is in one of the libraries and not in the main project. No code in the main project refers to that function (obviously, only the plugins call it, from their own constructors). The linker, being smart, will decide there is no use for such a function and will remove it from the EXE.
So how do you trick the linker into adding the constructor to the EXE? This is a perfect task for a DEF file. A DEF file is a text file listing all symbols that the DLL or EXE will export. The linker will be forced to include them into the output, even if no code refers to them. A DEF file can look like this:
EXPORTS
; the C++ decorated name for the CImageParser constructor
??0CImageParser@@QAE@XZ
; the C++ decorated name for CImageParser::CreateBitmap
?CreateBitmap@CImageParser@@IAEPAUHBITMAP__@@HHPAPAX@Z
To give a DEF file to the linker in VC6 you have to manually add the option /DEF:<filename> to the command line. In VS2003 you can do that in the Linker\Input section.
How do you create the DEF file? You can do it manually by listing all symbols you want to export, or you can do it automatically:
defmaker is a simple tool that scans LIB files, finds all symbols that are exported by the libraries, and adds them to a DEF file.
// defmaker - creates a DEF file from a list of libraries.
// The output DEF file will contain all _declspec(dllexport)
// symbols from the libraries.
// /def:<def file> must be added to the linker options
// for the DLL/EXE.
//
// Parameters:
// defmaker <output.def> <library1.lib> <library2.lib> ...
//
// Part of the Plugin System tutorial
//
/////////////////////////////////////////////////////////////
#pragma warning( disable: 4786 ) // Identifier was truncated
// to 255 characters in the debug info
#include <stdio.h>
#include <windows.h>
#include <string>
#include <set>
#include <Dbghelp.h>
struct StrNCmp
{
bool operator()(const std::string &s1,
const std::string &s2) const
{
return stricmp(s1.c_str(),s2.c_str())<0;
}
};
std::set<std::string,StrNCmp> g_Names;
static const char *EXPORT_TAG[]=
{
"/EXPORT:", // VC6 SP5, VC7.1, VC8.0
"-export:", // VC6 SP6
};
static bool CmpTag( const char *data )
{
for (int i=0;i<sizeof(EXPORT_TAG)/
sizeof(EXPORT_TAG[0]);i++)
if (strnicmp(EXPORT_TAG[i],data,
strlen(EXPORT_TAG[i]))==0)
return true;
return false;
}
static bool ParseLIB( const char *fname )
{
int len=strlen(EXPORT_TAG[0]);
bool err=true;
// create a memory mapping of the LIB file
HANDLE hFile=CreateFile(fname,GENERIC_READ,
FILE_SHARE_READ,NULL,OPEN_EXISTING,
FILE_ATTRIBUTE_NORMAL|FILE_FLAG_RANDOM_ACCESS,0);
if (hFile!=INVALID_HANDLE_VALUE) {
HANDLE hFileMap=CreateFileMapping(hFile,NULL,
PAGE_READONLY,0,0,0);
// note: CreateFileMapping returns NULL on failure,
// not INVALID_HANDLE_VALUE
if (hFileMap) {
const char *data=
(const char *)MapViewOfFile(hFileMap,
FILE_MAP_READ,0,0,0);
if (data) {
err=false;
// extract the symbols (loop reconstructed from the
// description below: find each export tag and copy the
// name that follows, up to a NUL, a space or a '/')
DWORD size=GetFileSize(hFile,NULL);
for (DWORD ofs=0;ofs+len<size;ofs++) {
if (CmpTag(data+ofs)) {
const char *start=data+ofs+len;
const char *end=start;
while (end<data+size && *end &&
*end!=' ' && *end!='/')
end++;
g_Names.insert(std::string(start,end));
}
}
UnmapViewOfFile(data);
}
CloseHandle(hFileMap);
}
CloseHandle(hFile);
}
return !err;
}
int main( int argc, char *argv[] )
{
if (argc<3) {
printf("defmaker: Not enough command line parameters.\n");
printf("Usage: defmaker <def file> <libfiles>\n");
return 1;
}
for (int i=2;i<argc;i++) {
printf("defmaker: Parsing library %s.\n",argv[i]);
if (!ParseLIB(argv[i])) {
printf("defmaker: Failed to parse library %s.\n",
argv[i]);
return 1;
}
}
FILE *def=fopen(argv[1],"wt");
if (!def) {
printf("defmaker: Failed to open %s for writing.\n",
argv[1]);
return 1;
}
fprintf(def,"EXPORTS\n");
for (std::set<std::string,StrNCmp>::iterator it=
g_Names.begin();it!=g_Names.end();++it) {
std::string name=*it;
int len=name.size();
if (len>5 && name[len-5]==',')
name[len-5]=' '; // converts ",DATA" to " DATA"
fprintf(def,"\t%s\n",name.c_str());
}
fclose(def);
printf("defmaker: File %s was created successfully.\n",
argv[1]);
return 0;
}
You use it like this:
defmaker <output.def> <library1.lib> <library2.lib> ...
For our example the command line is:
defmaker "$(IntDir)\host.def" "ImageParser\$(IntDir)\ImageParser.lib"
In VC6 you add this to the Pre-link step tab of the linker options. In VS2003 you do this in Build Events\Pre-Link Event in the project's options. It is going to be executed just before the linking step. Defmaker will produce the host.def file, which will then be used by the linker.
Defmaker locates the symbols by searching for the "/EXPORT:" tag in the LIB file. (Note: for some unknown reason only in VC6 service pack 6 the tag has changed to "-export:", so defmaker searches for both). The decorated C++ name of the symbol is found immediately after the tag. If the symbol refers to data instead of code it will be followed by the text ",DATA". The DEF file format requires data symbols to be marked with "<space>DATA" instead. Defmaker will convert one to the other. Probably it will be better to parse the LIB file following the official file format specs, but I have found that searching for the tags to be 100% successful.
"/EXPORT:"
"-export:"
Another use of defmaker is not related to plugins or DLLs. Sometimes you need to force the linker to include a global object even though there are no references to it. A common example is a factory system where each factory is a global object that registers itself in a list (like CImageParser does above). But if your factory object is in a static library and not in the main project the linker may decide to remove it. With defmaker you can mark the object with __declspec(dllexport) and it will be added to the EXE file.
Tip: Add the path to defmaker.exe to Visual Studio's Executable files settings. You will be able to use it from any project.
We've seen here how to create a plugin system without relying on GetProcAddress or trying to squeeze all functionality through interfaces. To expose any symbol from the host to the plugins just mark it with HOSTAPI. The rest is automatic. You have direct control which symbols to export and which not to.
You would write code in the plugin just as easily as writing code in a monolithic application or a static library. You can have access to base classes, global functions, and global data no matter if you write a plugin or a simple application. It is still a very good idea to have a clear separation between host functionality and plugin functionality, but it should be based on your own architecture and not dictated by technical limitations.
A word of caution - with great power comes great responsibility. You have the power to share as much or as little of the host's internals with the plugins. A key to a well designed plugin framework as with any design is finding the balance - in this case providing a simple yet powerful API. You need to export enough functionality to aid the plugin developers, yet hide the features that are likely to change in future versions or will needlessly compromise the stability of the host.
The source zip file contains four folders: Interface, DLL+EXE, EXE, and defmaker (the sources for defmaker.exe).
The sources contain project files for Visual C++ 6 and Visual Studio 2003. For Visual Studio 2005 you can open either project and convert it to the latest format.
[1] Plug-In framework using DLLs by Mohit Khanna[2] ATL COM Based Addin / Plugin Framework With Dynamic Toolbars and Menus by thomas_tom99
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
cregex crx = cregex::compile( "(-export|/EXPORT):(.*?)[\\0| |/]" );
cregex_iterator cur( data, data + size, crx );
cregex_iterator end;
for( ; cur != end; ++cur )
{
cmatch const & what = *cur;
g_Names.insert(what[2]);
}
http://www.codeproject.com/Articles/17697/Plugin-System-an-alternative-to-GetProcAddress-and?fid=387796&df=90&mpp=10&sort=Position&spc=None&select=4208985&tid=2720539
Thanks for the package! Works great
Package Details: id3ted 1.0-1
Required by (0)
Sources (1)
Latest Comments
r08 commented on 2014-05-11 19:45
ber_t commented on 2011-06-10 19:10
Thanks infoised, it's fixed in 1.0b3-1.
infoised commented on 2011-06-10 18:30
It does not compile, the compiler does not recognize cout and cerr (missing "using namespace std;" and "#include <iostream>").
ber_t commented on 2010-08-14 13:22
0.7.3-1: downloading source tarball from github
ber_t commented on 2010-03-24 14:45
0.7.1-4: downloading source tarball from sourceforge.net
https://aur.archlinux.org/packages/id3ted/
Rejecting anything so banal as "scott" as a login name, I was assigned "scrottie". External chronometer reading 27 years, I just missed the period when computers were an extremely serious thing, and my first "real" job found me in the hands of people holing up and holding out against The Powers That Be. For the first time, the dream of running a timeshare system not under draconian control was within reach of the common university department.

Earlier on, I was secretive about my name, for I fancied myself some kind of cracker, and whiled away the time doing my utmost to annoy BBS sysops. It was a long dry spell before satire or even humour was accepted "on line". Next came a period where my Internet access was via an anonymous "port" on a DEC terminal server, and my only login name of any sort was what I used on a Multi User Dungeon - Phaedrus. No one uses their real name on a MUD - MUD is fantasy. Of course, I had no idea that that book was popular or I would have made an attempt at creativity. Since MUD stuck with me, so did that name, for one compartment of my life, at least. Next came a phase where I had login accounts, and they were derived from real names - just not *my* real name.

Point being, it was instilled in me over and over again that login names aren't permanent, and using your real name is an unaffordable luxury. With high contention for the @yahoo.com and @hotmail.com namespaces, and pressure to change addresses due to spam and the adolescent search for identity (netters are younger and younger), for many people my plight exists amplified.

I've always had respect for people who use their real names online. It implies that you can finger them, take a bus downtown, walk into an office building, down a hall, and shake their hand over a messy desk and a large monitor attached to an expensive Unix workstation. It implies position and power, and the intelligence and dedication associated with it. I've always wanted an office and a nice workstation...
|
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=241380
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
I never use Clang, and I accidentally discovered that it happily compiles this piece of code:
#include <iostream>
void функция(int переменная)
{
std::cout << переменная << std::endl;
}
int main()
{
int русская_переменная = 0;
функция(русская_переменная);
}
It's not so much an extension as it is Clang's interpretation of the Multibyte characters part of the standard. Clang supports UTF-8 source code files.
As to why, I guess "why not?" is the only real answer; it seems useful and reasonable to me to support a larger character set.
Here are the relevant parts of the standard (C11 draft):
5.2.1 Character sets

3 [...] The representation of each member of the source and execution basic character sets shall fit in a byte. In both the source and execution basic character sets, the value of each character after 0 in the above list of decimal digits shall be one greater than the value of the previous.

4 A letter is an uppercase letter or a lowercase letter as defined above; in this International Standard the term does not include other characters that are letters in other alphabets.

5 The universal character name construct provides a way to name other characters.

5.2.1.2 Multibyte characters

1 The source character set may contain multibyte characters, used to represent members of the extended character set. [...] For both character sets, the following shall hold:

— The basic character set shall be present and each character shall be encoded as a single byte.

— The presence, meaning, and representation of any additional members is locale-specific.

— A byte with all bits zero shall be interpreted as a null character independent of shift state. Such a byte shall not occur as part of any other multibyte character.

2 For source files, the following shall hold:

— An identifier, comment, string literal, character constant, or header name shall begin and end in the initial shift state.

— An identifier, comment, string literal, character constant, or header name shall consist of a sequence of valid multibyte characters.
|
https://codedump.io/share/8oeO4YI35leC/1/identifier-character-set-clang
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
nsIStreamConverter provides an interface to implement when you have code that converts data from one type to another. More...
import "nsIStreamConverter.
STREAM CONVERTER USERS
There are currently two ways to use a stream converter:
SYNCHRONOUS Stream to Stream You can supply the service with a stream of type X and it will convert it to your desired output type and return a converted (blocking) stream to you.
STREAM CONVERTER SUPPLIERS):
.org/streamconv;1?from=FROM_MIME_TYPE&to=TO_MIME_TYPE
Definition at line 86 of file nsIStreamConverter.idl.
ASYNCHRONOUS VERSION.
SYNCHRONOUS VERSION Converts a stream of one type, to a stream of another type.
Use this method when you have a stream you want to convert.
Called when the next chunk of data (corresponding to the request) may be read without blocking the calling thread.
The onDataAvailable impl must read exactly |aCount| bytes of data before returning.
NOTE: The aInputStream parameter must implement readSegments.
An exception thrown from onDataAvailable has the side-effect of causing the request to be canceled.
Called to signify the beginning of an asynchronous request.
An exception thrown from onStartRequest has the side-effect of causing the request to be canceled.
Called to signify the end of an asynchronous request.
This call is always preceded by a call to onStartRequest.
An exception thrown from onStopRequest is generally ignored.
|
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_stream_converter.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Date        Time   Version        Size       File name
---------------------------------------------------------
22-Oct-2003 03:47  5.1.2600.1266    313,856  Cscui.dll
12-Sep-2003 17:34                 1,515,616  System.adm
22-Oct-2003 03:47  5.1.2600.1267    672,768  Userenv.dll

Date        Time   Version        Size       File name     Platform
--------------------------------------------------------------------
22-Oct-2003 04:04  5.1.2600.1266    688,640  Cscui.dll     IA-64
12-Sep-2003 18:35                 1,515,616  System.adm
22-Oct-2003 04:04  5.1.2600.1267  1,708,032  Userenv.dll   IA-64
22-Oct-2003 04:47  5.1.2600.1266    313,856  Wcscui.dll    x86
22-Oct-2003 04:47  5.1.2600.1267    672,768  Wuserenv.dll  x86

After you install this hotfix, a new Group Policy setting is available. With this setting, you can prevent the Windows Installer information from being removed. To enable this on a client computer, follow these steps:
CATEGORY !!AdministrativeServices
  POLICY !!LeaveAppMgmtData
    #if version >= 4
      SUPPORTED !!SUPPORTED_WindowsXPSP2
    #endif
    EXPLAIN !!LeaveAppMgmtData_Help
    KEYNAME "Software\Policies\Microsoft\Windows\System"
    VALUENAME "LeaveAppMgmtData"
  END POLICY
LeaveAppMgmtData="Leave Windows Installer and Group Policy Software Installation Data"
LeaveAppMgmtData_Help="Determines whether the system retains a roaming user’s Windows Installer and Group Policy based software installation data on their profile deletion.\n\nBy default User profile deletes all information related to a roaming user (which includes the user’s settings, data, Windows Installer related data etc.) when their profile is deleted. As a result, the next time a roaming user whose profile was previously deleted on that client logs on, they will need to reinstall all apps published via policy at logon, increasing logon time. You can use this policy to change this behavior.\n\nIf you enable this policy, Windows will retain the Windows Installer and Group Policy software installation data for roaming users when their profiles are deleted.\n\nIf you disable or do not configure this policy, Windows will delete the entire profile for roaming users, including the Windows Installer and Group Policy software installation data when those profiles are deleted.\n\nNote: If this policy is enabled for a machine, local administrator action is required to remove the Windows Installer or Group Policy software installation data stored in the registry and file system of roaming users’ profiles on the machine."
SUPPORTED_WindowsXPSP2="At least Microsoft Windows XP Pro with SP2"
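Based on the KEYNAME and VALUENAME above, the same policy could be set directly in the registry rather than through the ADM template. The hive (HKLM, inferred from "enabled for a machine") and the REG_DWORD value 1 are assumptions, not stated in the article:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\System]
"LeaveAppMgmtData"=dword:00000001
```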
Article ID: 828452 - Last Review: 01/12/2015 22:32:03 - Revision: 4.0
|
https://support.microsoft.com/en-us/kb/828452
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
This is your resource to discuss support topics with your peers, and learn from each other.
05-20-2011 09:03 AM
Hi I am in the process of developing an application which utlisizes buttons to load new screens. I am using a screen transition technique with the following:
protected boolean navigationClick(int status, int time) {
    UiEngineInstance engine = Ui.getUiEngineInstance();
    UiApplication ui = UiApplication.getUiApplication();

    TransitionContext transitionContextIn =
            new TransitionContext(TransitionContext.TRANSITION_FADE);
    transitionContextIn.setIntAttribute(TransitionContext.ATTR_DURATION, 3000);

    TransitionContext transitionContextOut =
            new TransitionContext(TransitionContext.TRANSITION_FADE);
    transitionContextOut.setIntAttribute(TransitionContext.ATTR_DURATION, 3000);

    engine.setTransition(null, newScreen, UiEngineInstance.TRIGGER_PUSH, transitionContextIn);
    engine.setTransition(newScreen, null, UiEngineInstance.TRIGGER_POP, transitionContextOut);

    ui.pushScreen(newScreen);
    return true;
}
This code pushes a newScreen in a new java class as follows:
public class Arsenal extends MainScreen implements FieldChangeListener {
    public Arsenal() {
        // code!!!
    }
}
My question is: when I press the back button to go back to the previous menu, I would like to close the previous screen, so that if I click its button again it reopens at the top. At present, if I press the "back" button on the BlackBerry and then re-click the button, the page loads at wherever I had previously scrolled to. How do I ensure the screen closes when the "back" button is pressed?
Thanks
05-21-2011 12:01 PM
Am I right in saying this actually has nothing to do with the transition? If you take out that code and just do a push, the same thing will occur?
Even though a screen is not on the display stack, it still retains its 'attributes', such as its current scroll position.
So if you do not wish to recreate the screen each time, then you need to adjust the scroll position. The easiest way is just to focus on the top Field in this screen, and if you want to do this generically, then you can programmatically find this by scrolling through your Fields and Managers. However, this may not focus on the top of the screen if there is a non-focusable Field at the top of the screen. In addition, there might be issues if you have scrollable Managers inside the scrollable screen. But if you can get away with just focusing on the first Field in the screen, this is the easiest option.
Now the question is where to do this. The best place, I suspect, is in onDisplay or its replacement, onUiEngineAttached(true).
|
https://supportforums.blackberry.com/t5/Java-Development/Closing-a-screen/td-p/1094925
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
package org.netbeans.modules.j2ee.sun.share.configbean.customizers;

import junit.framework.TestCase;

/**
 * @author vkraemer
 */
public class LoginConfigEntryTest extends TestCase {

    public void testCreate() {
        LoginConfigEntry foo = new LoginConfigEntry();
    }

    public LoginConfigEntryTest(String testName) {
        super(testName);
    }
}
|
http://kickjava.com/src/org/netbeans/modules/j2ee/sun/share/configbean/customizers/LoginConfigEntryTest.java.htm
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Parent Directory
|
Revision Log
Add some boldness to output. Update/fix pkg_needrebuild() for smart-live-rebuild.
Fix bugs #549140 and #552942.
Allow passing arguments to qmake from ebuilds via the myqmakeargs array.
Add new helpers: 'ghc-pm-version' to get ghc version as seen by package manager and 'ghc-is-dynamic' to workaround ghc-api bug in ebuild.
add || die and fix indentation
Add eclass for vcs snapshots of software written in Go.
Add quotes to support reading from files with spaces in the filename.
Do not compress handbooks.
Revert bad mycmakeargs changes and introduce 3 eclass variables to have extra defines in the ebuild
Reset mycmakeargs between ABIs
Add blocker below virtual/mysql-5.6-r4 for new split ebuilds
Fix pkg_config function error with non-existent USE flags
Fix configuring non-native abi which leads to build failure wrt bug 556162
Add support for the split client/server options to the mysql eclasses
Add golang-base.eclass for the basic golang functions and set up the other go eclasses to use it.
Drop the USE_PYTHON warning.
Add missing ||die to "rm -f" calls, i.e. in case we do not have permission to remove the files.
Add functions to retrieve Go library paths and install Go packages.
Fix oldpim unpack by Andreas Sturmlechner <andreas.sturmlechner@gmail.com> wrt bug #555566.
Introduce java-pkg_rm_files as a helper function along with JAVA_RM_FILES array to readily get rid of useless files. Derived from perl_rm_files by Kent Fredric.
Workaround upstream cabal tests hangup bug #537500 by Michael Orlitzky; use ghc's haddock for doc generation.
Change kernel upgrade http link and remove reference to deblob in elog message. See bug #553484
Cleanup SRC_URIs.
Add entire python directory to SANDBOX_PREDICT, bug 554252.
fixed my b0rked changelog entry on my last commit
Remove deprecated functions from depend.php.eclass as announced 30 days ago
Drop old, unused eclasses wrt bug 551910
Added gst-plugins-mad:1.0 dependency for USE=gstreamer to ensure mp3 streaming support
minor update within mozlinguas.eclass
Fixed the mozlinguas.eclass upgrade, recommitting.
Update mozilla support eclasses
Forbid also installing "examples" package, bug #555038.
Added slot dependency for openssl. Raised minimum required EAPI version to 5.
Fix unpacking of noakonadi branch
Update for bitcoind 0.11.0.
Initial support for Qt 5.5
Do not attempt to use submodules for which the checkout path does not exist (has been removed), bug #551100.
Remove meaningless nonfatal from elibtoolize call, bug #551154.
Update documentation.
Fix elog in webapp_serverowned and ebeep in newer EAPIs
removed old mozconfig eclasses, added new
Add back the subslot operator in the dependency on Go. We need this so that we have the Go version the package was built with recorded.
Add missing USE dependency default wrt bug #554056.
Document that some variables must be set before inheriting the eclass.
Replace links to python-r1 dev guide with links to the wiki.
Update URI.
golang-build.eclass: drop the slot dependency; it was pointed out to me that they do not force rebuilds in DEPEND
Sync with overlay. Add SRC_URIs for newer KDE SC, KDE Workspace releases and KDEPIM 4.4 no-akonadi branches.
Quote RUBY_S and sub_S as the directory could contain spaces.
Introduce qt{4,5}_get_plugindir(). Rephrase some eclass doc.
Loop optimization as suggested by Michal Górny <mgorny@gentoo.org> on -dev ml.
Sync kde5*eclass with kde overlay. Handle more whitespace variations by Michael Palimaka <kensington@gentoo.org>. Fixes translation handling by Michael Palimaka <kensington@gentoo.org> and Andreas Sturmlechner <andreas.sturmlechner@gmail.com>, bug #552664. Raises deps on KDE Frameworks and KDE Plasma Manuel Rüger <mrueg@gentoo.org>.
Fix conditional bug for UNIPATCH_DROP
Fix for kdbus. Thanks to Arfrever.
Add the kdbus use flag and eclass variable to the kernel-2.eclass for optional kdbus inclusion.
Reverting kdbus changes in eclass. Caused invalid iuse for other ebuilds.
Add the option to include the kdbus patchset into gentoo-sources. Default is not to include it.
Remove emul-linux-x86 hack, since emul-linux-x86 is no more.
Typo fix, use double brackets.
Add an eclass for building Go software
depend.php.eclass is deprecated and is set to be removed 30 days after bug 552836 is resolved
Remove depend.php and dodoc-php in favor of just dodoc
Drop base.eclass usage
The GOPATH environment variable is now passed directly to the commands that need it. The correct directory of source files is copied to the correct location under ${S}.
Introduce qt{4,5}_get_libdir().
Fix typo.
Allow EANT_GENTOO_CLASSPATH_EXTRA to work when EANT_GENTOO_CLASSPATH is unset.
golang-vcs.eclass: Add the EGO_SRC variable for repositories that contain multiple Go packages. Change references from ${S} to ${WORKDIR}/${P} to match other eclasses. Copy the appropriate sources to ${WORKDIR}/${P}.
Remove phpconfutils calls in preparation for its deprecation
Add require_php_cli to the list of deprecated functions to be removed from depend.php.eclass
Mark 3 eclasses as deprecated for removal on 2015-07-17 wrt bug 551910; Fix missing ChangeLog entry for previous commit
Update from qt overlay: allow configuring debug/release on a per-package basis; add instruction set support (similarly to qt4-build-multilib); use usex().
Remove eclass dependency on python[xml] and replace with some ugly grep that should suffice. Closes #552332.
Add golang-vcs.eclass to retrieve go packages from vcs repositories for software written in the Go programming language
Minor changes to reduce diff with qt5-build.eclass
Drop QT4_VERBOSE_BUILD variable (always true now).
Use use_if_iuse().
Add java-pkg_addres function for adding resource files to an existing jar.
Don't install uninstall information, bug 551638; make use of path_exists()
Fix SRC_URI for 4.14.3
Export MAKEFLAGS and OBJDUMP.
sh is "supported", don't fallback to generic. Also, don't die when tc-arch is unknown, the configure script can handle this internally.
Allow dev-perl/Module-Build in QA check for Module::Build
Workaround gcc-4.8 ICE in qtdeclarative (bug 551560).
Punt obsolete, unused eclass; it had a fixed set of users all of which have migrated to gnome-python-common-r1 (bug #551914, thanks to Michał Górny).
Simplify move of .pc files.
last rited mozconfig-v4.31
Drop old, unused eclasses; Moved to mysql overlay
Remove obsolete/broken eclass, bug 551918.
Don't die when trying to rmdir non-existent directory (bug 551676).
Use usex().
Resolve circular dependency for bug 551686; Make USE=cluster die early for all except dev-db/mysql-cluster; Documentation update for variables, remove 2 unused and add WSREP_REVISION; Clarify mariadb bindist USE
netsurf.eclass: Update for buildsystem-1.3
Rename some internal functions for consistency.
More accurate LICENSE.
Delete redundant echo.
Remove two seds that are no longer needed on current Qt versions.
Add support for newer vala slot
Sync with kde overlay. Raise minimal Plasma version and minimal KDE Frameworks version.
mod_macro is now provided by apache itself (#477702)
Remove deprecated remove_libtool_files() function.
Enable IUSE=profile globally. Use upstream tarball for FreeBSD > 10.0.
Move various kde-base packages to kde-apps.
Use path_exists from eutils.eclass
Fix homepage url for license registration, #538284; do a precheck instead of using nonfatal, #551156.
Fix missing comment character and case syntax error
Add s6.eclass to handle s6 services
Sync verbosely with kde overlay. Drop fetch restriction for unpublished packages including the pkg_nofetch prompt. This did not work out as expected, see bug 549012. Add support for split localization packages via kde-apps/kde4-l10n. Add KDE_BLOCK_SLOT4 variable which makes it possible to adjust coinstallability of kf5 packages.
Blacklist graphite-related flags that cause ICEs on qtwebkit (bug 550780).
KDE: fix SELinux deps, bug 550824
Add virtual/rubygems to dependencies to ensure that it is present in time, which may not happen since this is a PDEPEND of dev-lang/ruby.
Add ruby-single.eclass to support packages that just need a ruby interpreter to be present. Refactor code common with ruby-ng.eclass into ruby-utils.eclass.
Drop PDEPEND on virtual/dev-manager. See bug #550086.
Moved selinux dependency from DEPEND to RDEPEND (bug #550822). Fixed maintainer email in eclass
Fixed case syntax
updated mozconfig-v5.31.eclass for new libvpx version compatibility, added mozconfig-v5.38.eclass, last rited mozconfig-v5.33
added missing bug reference
export FC/F77 for multilib support
Remove long-deprecated and just dieing function stubs
Loosen quoting, #550060
The depend-java-query wrapper is raising readonly variable warnings for USE in Portage 2.2.20. As best I can tell, this wrapper just isn't needed any more because USE is already exported. I guess it wasn't back in 2006?
Remove annoying java-pkg-simple build script check. Most people only use java-pkg-simple as a last resort and a usable Maven eclass is still some way off.
Support fetching upstream patches, by nigoro.
Add kernel Check for USER_NS, bug 545078.
Prevent compression of symlink targets in docdir in EAPIs where this is possible, bug 549584.
Sync with KDE overlay - update SRC_URI.
Add back the deblob script functionality to the eclass.
Drop elog in webapp_serverowned, discussed with blueness in bug #542024
Don't prepend EPREFIX for {header,mkspecs}dir since these are mostly used with insinto and friends.
Add qt{4,5}_get_{header,mkspecs}dir helper functions, bug 525830.
Delete obsolete code that is now causing problems on freebsd (bug 493310).
Bump everyone to java-config 2.2.
graphite support was dropped from gcc-4.7
Move workaround for bug 367045 from qtgui ebuild to eclass.
Update from qt overlay: overhaul toolchain and *FLAGS handling for proper multilib support during the configure phase. Fixes bug #545106.
Sync kde5.eclass with overlay.
Sync kde5-functions.eclass with overlay.
Sync kde5.eclass with overlay.
Avoid problematic colon char in filenames, bug 548750.
Fix documentation of 3 variables in php-ext-source-r2 eclass
Make USE_PHP a REQUIRED variable for the php-ext-source-r2 eclass
Make java-pkg_doso always install with mode 0755, which is more like dolib.so. Closes bug #225729.
Update git urls for mysql-extras
Add kernel-2 env var K_BASE_VER as different versions of git sources use different base versions.
Update SRC_URI.
Handle 4.1 base kernel. See bug #547894.
Adjusted libvpx dependency
Fix filename matching in elisp-site-file-install. It should use shortest match, not longest.
Sync with qt overlay: cleanup prefix-related patching and fix bug #542780.
update git urls and migrate git-2 -> git-r3
Sync with qt overlay - export AR and OBJDUMP too, use new configure option '-no-libproxy' beginning with Qt 5.5, and update gtkstyle comment.
Ban eapi2 and 3 for gnome2.eclass (#539118)
Specify :* as slot by default to silence repoman warnings
Consider SLOTs when checking Java dependencies. Comment out the longer error message for now to avoid spamming both users and ourselves because this issue is currently widespread.
Sync with KDE overlay - don't set CMAKE_MIN_VERSION which is already set by cmake-utils, remove old extra-cmake-utils logic, and improve linguas handling.
extra-cmake-modules moved from dev-libs to kde-frameworks.
Add ejavadoc function. Thanks to wltjr. Fixes bug #544076.
Disable building dynamic libraries by default before ghc-7.10 (was accidentally enabled in a previous revision). Fixes bug #545174 by Toralf Förster.
Drop EAPI<5 from selinux-policy-2.eclass
Remove all references to qt-project.org and switch EGIT_REPO_URI from gitorious to code.qt.io.
Update dependency after package move of eselect modules to app-eselect.
Drop dev-qt/designer[-phonon] dep now that blockers are resolved (bug #477632).
Drop EAPI 4 support.
Remove rox.eclass rox-0install.eclass.
Drop obsolete eclass and add new version, thanks Ted Tanberry for the work
Enable building dynamic haskell executables since dev-lang/ghc-7.10.1_rc3.
Add deprecation warning when USE_PHP is empty
Workaround toolchain bug on x86 with -Os and --as-needed, see bug #503500.
Extend EAPI=4 whitelist to cover crossdev gdb.
Sync with KDE overlay - update SRC_URI and manually specify a minimum version for kde-base/oxygen-icons to handle special releases.
Ban new EAPI < 5 packages for python-r1 & python-single-r1.
Move cpu-optimation removal. See bug #542810.
Allow jar to be named something other than ${PN}.jar.
Sync with KDE overlay - introduce ECM_MINIMAL & KDE_SELINUX_MODULE, rename add_kdeplasma_dep -> add_plasma_dep, raise dependencies, and miscellaneous improvements/fixes.
Move flag modifications to apply once for all ABIs; Use tc-ld-disable-gold for bug 508724; Move tokudb check to pkg_pretend
Detect dangerous environment variables, bug 543042; support Module::Build::Tiny directly, bug 495044
Respect CFLAGS. New syntax for revisions CABAL_CORE_LIB_GHC_PV="PM:${ghc_PVR}".
Fix indentation.
Turn deprecated functions into fatal errors
set ARCH=arm64 as generic target for qt4 eclass
Add conditional bindist restriction, bug 541486.
very first step to support arm64; only part of qt4 ebuilds built without patches, others require extra patches
Raise util-macros dependency to latest stable 1.18
Cleanup how we determine base linux tarball.
updated icu dep in mozconfig-v5.31, added mozconfig-v5.36
add qt{4,5}_get_bindir helper functions
bitcoincore.eclass: update spamfilter message, bug #541192.
Initial commit of bitcoincore.eclass
Add support for 4.0 kernels
Remove USE_EINSTALL (#482082)
Deprecate eapis 2 and 3 for gnome2.eclass (#539118)
Fix support for FreeBSD 10.0. Force /usr/share/mk there, and fix version comparison for install commands. by nigoro.
Add new and remove old SRC_URI. Update live ebuild branching.
Remove duplicated "using" in EAPI=4 warning message. Spotted by Arfrever.
Apply patch from Ryan Hill to font.eclass to support multiple FONT_S directories (bug #338634)
Re-apply python-exec:0 removal, now with typos fixed.
Restore EAPI=4 deprecation. That commit was perfectly fine.
[QA] games.eclass: Leave permissions of top-level directories alone, bug 537580.
Revert random mgorny madness
Deprecate EAPI=4 support.
Remove support for python-exec:0.
Drop support for Qt 5.3 and earlier.
Fix bug 486626: add Fortran to Gentoo override rules the same way as other compilers
Adjust mysql-cluster-7.3* virtual
Silence repoman warnings by providing slots on openssl and readline
Add changelog for kernel-2.eclass update. Handle cpu optimization patch for different gcc versions
Fix kde4-functions more better: comparison breaks because 4.4 > 4.14
Better fix to kde4-functions so that kde-base/ category doesn't get horribly broken
Fix kde4-functions so that kde-base/ category doesn't get horribly broken
Add EAPI check to silence repoman warnings
Sync SRC_URI calculation with kde overlay, fixes bug #539668.
Support for kde-apps category, remove function moved to cmake-utils. Some minor improvements.
mysql-multilib.eclass: Always build NDB with mysql-cluster for libndbclient
Revert unreviewed commit which breaks the tree
Remove halcy0n from the gentoo_urls for toolchain.eclass, per his instructions.
Add my devspace to the gentoo_urls for toolchain.eclass
ELT-patches/aixrtl: Need -fPIC on AIX since gcc-4.8 or so.
Fix dependency on ncurses for bug 539354 and restructure to remove user confusion with the xml flag for mariadb
Restore the old way of dealing with fixed includes for bsd, bug #536878.
Respect the EVCS_UMASK variable to override the default umask when writing to the repository.
Drop support for EAPI=4
Spelling.
Fix for setuptools failures #534058 etc.
Drop support for eapi0 and 1 (#530046)
Sync changes from mysql overlay
Sanitise find arguments when using JAVA_PKG_BSFIX_NAME option. Fix #231956.
Update ChangeLog.
Update SRC_URIs
Mark rox-0install and rox eclass as @DEAD
prune_libtool_files: properly reset variables for following loop iterations.
Deprecate python_export_best() verbosely.
Support restricting implementations for *_all() phases.
Support restricting accepted implementation list for python_setup.
Attempt to fix bug #536428
elibtoolize/AIX: set default to --with-aix-soname=svr4
Use "--force" when running eautoconf through eautoreconf (bug #527506)
Add fallback https EGIT_REPO_URI
Sync kde5-functions.eclass with overlay.
Bump latest unstable automake version to 1.15
Make description more readable
typo
Run pkg_setup() only in non-binary installs, as intended and documented a long time ago :).
documentation syntax fixed
Remove unused eclass.
Warn about unset EPYTHON.
Properly disable USE=hoogle.
Make python.eclass commands/variables fatal once again since all in-tree ebuilds seem to have been fixed.
Add progress overlay-specific commands and variables to the invalid command/variable lists.
Add support for ghc-7.10 registration. User visible changes: ghc-package stopped exporting pkg_* phases and now they are reexported by haskell-cabal. pkg_* phases do not install any additional files anymore.
Fix patch count on first clone (by vikraman).
Modify eutils.eclass to allow 512x512 icons to be installed (this was approved by ssuominen).
Add workaround for new orc break gstreamer ebuilds, bug #533664
Add python_gen_usedep, python_gen_useflags and python_gen_cond_dep to python-single-r1.
Spelling, pointed out by floppym.
Make the invalid function/variable checks non-fatal for now.
Do not check for PYTHON_TEST_VERBOSITY, it is intended for make.conf.
Verbosely deprecate python_parallel_foreach_impl and DISTUTILS_NO_PARALLEL_BUILD.
Add test recipe for rspec:3 slot.
Use rspec-2 wrapper for the rspec recipe.
Update banned var docs.
Add PYTHON_{CPPFLAGS,CFLAGS,CXXFLAGS,LDFLAGS,MODNAME} to the banned variable list.
Add ruby22 RUBY_TARGET support.
Add die-checks for python.eclass & distutils.eclass variables.
Add die-replacements for distutils.eclass functions, to help finding mistakes in conversions.
Add die-replacements for python.eclass functions, to help finding mistakes in conversions.
Add new API warnings.
Sync with KDE overlay.
Reflect reality of the status of waf-utils eclass maintenance as announced months ago on gentoo-dev mailing list
remove base.eclass inherit
Update tests for unsupported python3.2.
Use gentoo-functions for tests, bug #504378.
Add support for vala 0.26
Declare local CPPFLAGS to avoid multiple appends in cmake-multilib.
Fix breakage caused by recent multilib-build.eclass changes (bug 532510).
Remove code paths that are not called anymore
Sync eclass with kde overlay.
Deprecate USE_EINSTALL (#482082)
Make perl-module_src_prep throw a real warning, not just eqawarn
Move content of perl-module_src_prep into src_configure, add deprecation warning to src_prep
Disable parallel run support.
Disable parallel run support.
Restrict tests for all release versions.
Always restore initial directory after sub-phase run. Fixes bug #532168 and possibly more.
Restore using separate HOMEs for Python implementations, because of .pydistutils.cfg. Bug #532236.
Sync kde4-base.eclass with overlay.
Disable parallel run support to make things easier for developers and more predictable for users.
Allow additional content to be injected in the ruby bin wrapper.
Sync kde4-base.eclass with overlay.
mozconfig-v5.34.eclass - make glibc check based on elibc_glibc so that it works on prefix
fixed typo in mozconfig-v5.34 eclass comments
mozilla eclass modifications for package bumps
Replace explicitly listing all GPL variants with GPL-1+
Remove leftover code for Python 3.2.
Adjust sparc warning. See bug #529682
Sync eclasses from mysql overlay
eqawarn about /usr/lib/pypy/share instead of dying.
Support multilib in gnome2_query_immodules_gtk2() as well.
Deprecate eapis 0 and 1 for gnome2.eclass (#530046)
Adjust _python_impl_supported as well.
Remove python3_2.
Support multilib for gnome2_query_immodules_gtk3(), needed by x11-libs/gtk+:3.
Make calling perl-module_pkg_prerm trigger a real warning
python-r1: Fix docs on REQUIRED_USE (bug #530086)
add documentation for games.eclass, rm unnecessary exports
Remove unused eclass.
Add usage warnings to pkg_postinst and pkg_postrm, deprecate pkg_prerm
Deprecate the few eclasses.
Remove the experimental git-r3 testing support. It is not needed anymore, git-r3 has been proven to work and we can happily use it instead.
Add RDEPEND on dev-qt/qtchooser.
Make calling perl-module_pkg_preinst trigger a real warning
Make calling perl-module_pkg_setup trigger a real warning
Deprecate perl-module_pkg_setup and perl-module_pkg_preinst
Add missing quotes, thanks mgorny for heads up
Fix gcc detection when using multislot, #529710
Make calling fixlocalpod trigger a real warning
Stop setting QTDIR. It's only relevant when building qt itself, and in any case qmake doesn't use it.
Add blocker on emul-linux-x86-qtlibs wrt bug 529370.
Make calling perlinfo trigger a real warning
perl-app.eclass: Documented all functions.
Using RDEPEND reverse dep checking in SELinux eclass
Install global docs (part of bug 457028). Generate and install qtchooser configuration file.
Remove Emacs team from maintainers of bzr.eclass.
Sync with KDE overlay. Raise kde-frameworks/kf-env dependency and update SRC_URI for Frameworks 5.4.0
Add kde-workspace 4.11.14 SRC_URI.
Initial commit of qt4-build-multilib.eclass
perl-module.eclass: Documented nearly all functions.
0.20 is our new lower version
Added documentation to undocumented functions.
Move the has_version checks on installed implementations to python_is_installed() function. Accept PyPy when the implementation is installed, even if the virtual is not.
Add docs and deprecate perlinfo and fixlocalpod
All in-tree ebuilds with EAPI=4 using perl-module.eclass are gone. Switch deprecation message to super-annoying mode.
Use python 3.4 rather than dead 3.2 in python-r1 examples
fixed Arfrever's name; added IUSE=selinux to the eclass
Add support for PyPy3.
Remove unused function perl_set_eprefix
Fix broken dependencies due to gcc multislotting, #528194, #528196
Remove handling of EAPI=0,1,2 since that codepath cannot run anymore anyway
eqmake4(): support new qmake install location.
Drop EAPI=0,1,2,3 support in perl-module.eclass, this time for real. Further cleanups will follow.
Make sure BUILD_DIR exists before pushd'ing into it.
Fix repoman warnings (#521980 by Arfrever Frehtes Taifersar Arahesis)
Fix handling of frameworks version dependencies within kde-frameworks.
Enable verbose compilation output for the ruby gnome packages.
improve/fix cross-compilation support, bug #503216 by James Le Cuirot and myself
[QA] Code from revisions 1.636 and 1.640 commented out. This causes several file collisions, see bug 526144 and related bugs.
Add kde-workspace 4.11.13 SRC_URI.
Output which ebuild actually has bad EAPI
Move EAPI=0,1,2,3 warning into global scope to become ultra-annoying. Add QA deprecation warning about EAPI=4.
Improve error messaging when python_export is called without a defined python implementation.
added some missing deps, dropped unnecessary expat dep and redundant --with-system-zlib; deps already brought in by mesa so no need for end users to update vdb
Allow ebuild to override GENTOO_DEPEND_ON_PERL_SUBSLOT in perl-app.eclass if necessary
Introduce comment_add_subdirectory function. Make EAPI check more technically correct.
Import from KDE overlay.
added bumps to mozilla config eclasses and removed old
Fix assignments to RESTRICT.
Sync eclasses from mysql overlay
Restrict mirror for qtwebkit wrt bug #524584
Suppress annoying warning, see
Deprecate EAPI=0,1,2,3 in perl-module.eclass with a big fat ewarn instead of making the ebuild fail
Fix typo (#523856 by Kent Fredric)
Change IUSE for mariadb-galera to valid values
Adjust deps for >=mariadb-10.0.14 and add USE base deps for mariadb-galera
Fix SRC_URI (bug 523408) and update HOMEPAGE.
Remove support for EAPI 1, 2, 3 in perl-module.eclass (no packages left in the tree)
nvcc always needs to know the compiler location
Add kde-workspace 4.11.12 SRC_URI, remove obsolete.
Exclude installed_cmake tests as well.
Allow RPMS specified as array
dropped unused mozconfig-v4 and added new mozconfig-v4.31 eclasses
Restrict tests on 5.3.x (except live).
Preserve all whitespace in shebangs, and add regression test for that. Also, prevent filename expansion when word-splitting it. Bug #522080.
Fix tests for python_is_python3.
committed new eclass to support mozilla ebuilds
Fix libedit MULTILIB_USEDEP wrt bug 521964
Add bashcomp_alias function to create command aliases for completion.
Update pax-utils.eclass according to bug #520198
Initial commit of qt5-build.eclass
Sync mysql-multilib.eclass from mysql overlay
Update selinux eclass with improved rlpkg call and relabeling package set optimization
Make completionsdir default to the new location (for new installs). Eselect support is provided in app-shells/bash-completion-2.1-r1.
Pass install paths to distutils via setup.cfg.
Relabel depending packages so we no longer need DEPEND calls for pure policy dependencies in SELinux
Add extra quoting to prevent accidental globbing.
Move ENABLE_DTRACE check to the multilib_src_configure wrt bug 520028
Add new multilib_native_enable and multilib_native_with functions; fix documentation
Sync with KDE overlay, including a large number of cosmetic changes and simplification and removal of old code.
Added -mfix-r10000/-mno-fix-r10000 to ALLOWED_FLAGS for MIPS.
Raise gcc minimum version to 4.7, bugs #462550, #471770, #508324.
Check for earlier version, not different version (bug #519558 by kavol).
Add extra download URL from overlay.
Use PVR for BASEPOL in SELinux eclass
added prefix support
Fixed numerous misquotings by introducing arrays
Adding support for different GIT repos with SELinux policy ebuilds
Raise CMAKE_MIN_VERSION to 2.8.12 by Ben Kohler <bkohler@gmail.com>, bug #519158.
added prefix support (bug #433736)
Another typo.
Fix typo.
Fix bug #513706 ssp only on gnu CTARGET's
Remove sourceforge SRC_URI for leechcraft packages, only leechcraft.org is used now
Updated mozconfig-v4.eclass to properly support optional IUSE=wifi
Update mysql cmake eclasses to prevent upstream from setting default features and CFLAGS
Update the multilib eclass to match the work done by grobian for mysql-v2
Deprecate the longer udev_get_udevdir() function in favour of the shorter get_udevdir(), notably gentoo-x86 has been fully converted
Handle grsec TPE to ensure apache can compile. $T is group-writable, owned by portage, and TPE blocks that.
Sync mysql eclass from overlay.
Missing changelog message.
Set also QMAKE_LINK_{C_,}SHLIB
committed new mozconfig eclass for mozilla31 and later
Mention git-clone man page for URI syntax, bug #511636.
Use ROOT=/ when checking for git features, bug #518374. Patch provided by Michael Haubenwallner.
Added EHG_CHECKOUT_DIR to override checkout destination
java-vm-2.eclass: Respect EPREFIX in pkg_postinst, bug#517236.
Fix misc issues for Prefix allowing install and config of mysql
Fix missing handbooks when the default handbook language is en_US instead of the usual en.
Add kde-workspace 4.11.11 SRC_URI, remove obsolete.
Don't call eselect with obsolete --no-color option.
Avoid reserved names for functions and variables, bug 516092.
Support linking Python modules on aix, thanks to haubi.
Stop forcing -m0755 on EGIT3_STORE_DIR and parents, bug #516508.
elt/aixrtl: need semicolon after noop command to get subsequent variable set
elt/aixrtl: use similar filename even for 2.4.2.418 diffs
elt/aixrtl: Use $lib for the real filename, to support soname hackery in libXaw.
Add tests for _python_impl_supported.
python_gen_cond_dep: delay PYTHON_USEDEP substitution until one of the implementations is actually enabled. Fixes bug #516520.
Disable python2.6 support and clean up the related code.
Declare REQUIRED_USE inside MULTILIB_COMPAT conditional, reported by steev.
Add some Prefix hosts to _MULTILIB_FLAGS
Explain MULTILIB_COMPAT a bit more verbosely, and add a REQUIRED_USE for it.
Re-enable multilib flags for s390.
Attempt to use a UTF-8 locale if one is available to avoid errors when setup.py calls open() with no encoding.
Fix handling empty MULTILIB_COMPAT.
Check MULTILIB_COMPAT before querying USE flags. Bug #515642, thanks to Greg Turner.
Enable multilib flags for ppc. Since ppc profiles are not multilib at the moment, this should not create any new issues.
Rename FILE_SUFFIX to README_GENTOO_SUFFIX in order to avoid variable clashes
Allow to handle more README.gentoo files (#513190 by Justin Lecher)
Simplify documentation files handling by utilizing einstalldocs from eutils eclass
Set LD{,CXX}SHARED properly for Darwin, reported by Fabian Groffen on bug #513664.
Fix typo in submodule fetching, reported by Hans Vercammen.
Sync eclasses with mysql overlay
python_fix_shebang: properly unset local variables in loop iterations.
Always set up CC, CXX and friends for distutils builds, bug #513664. Thanks to Arfrever for the explanation.
Bump gstreamer deps to satisfy multilib.
Improve handling of corner cases in python_fix_shebang. Support --force and --quiet options, bug #505354. Add tests.
Sync with KDE overlay. Adapt to live ebuild versioning change. Remove reference to long-removed package. Explicitly specify a slot. Update SRC_URI for kde-workspace 4.11.10. Add new function to comment add_subdirectory calls. Remove obsolete add_blocker function.
Fix typo.
elibtoolize: Allow undefined symbols on AIX, needed by and work fine with module libs.
forgot to commit ChangeLog for: bump aixrtl ELT-patches for libtool-2.4.2.418
Add new, multilib-capable eclass for gstreamer plugins.
Work around lack of arch defines in swig, bug #509792.
Increase minimum Emacs version to 23, versions 21 and 22 have been removed.
Sync mysql-v2.eclass from the mysql overlay
If we keep the flag list sorted by version there's no need for this function to be recursive. This shaves a couple seconds off the worst-case runtime.
Properly canonicalize relative submodule URIs, bug #501250.
Add systemd_{do,new}userunit.
Fix ABI flag stripping in multilib_get_enabled_abis(), bug #511682.
Add documentation for man page; add missing die
Convert gnome-python-common.eclass to use python-r1, and clean it up a lot.
Move python_fix_shebang into python-utils-r1, therefore making it a part of public API for all eclasses.
elisp-site-regen: Die on errors.
elisp-site-regen: Look for site-init files only in site-gentoo.d subdirectory.
Change ABI-flag separator from ":" to "." to avoid issues with Makefile rules and PATH separator.
Add remaining potential multilib arches to header wrapping template.
Use MULTILIB_ABI_FLAG for header wrapping. Also, use explicit error when ABI is omitted in wrapper template.
Deprecate multilib_for_best_abi() to decrease confusion.
Export MULTILIB_ABI_FLAG for ebuild/eclass use. Bug #509478.
Introduce multilib_get_enabled_abi_pairs() to obtain list containing both ABI values and USE flag names.
Give an explanatory error when trying to fetch https:// with dev-vcs/git[-curl]. Bug #510768.
cabal_chdeps() now defaults to MY_PN (autogenerated by hackport) if exists, then to PN
store darcs cache in DISTDIR
Eclass cleanup. Now requires >=EAPI-4 ebuilds. Fixed bugs #509922 and #503640
Bug #499774, take 2.
Revert libintl change. It turns out we need to depend on gettext anyways, so this change is pointless.
Strip -mno-rtm and -mno-htm as libitm requires these for x86/x86_64 and ppc/s390 respectively if supported by the assembler (bug #506202).
Depend on virtual/libintl rather than sys-devel/gettext (bug #499774).
Work around bash-4.3 bug by setting PYTHONDONTWRITEBYTECODE to an empty string.
mysql_fx.eclass: Fix a bug that prevented emerge --config to notice a changed datadir.
Accept files with already-rewritten shebangs in _python_rewrite_shebang. Necessary for proper python_doscript().
Allow parallel profiledbootstrap in newer versions (bug #508878 by Eric F. Garioud). Clean up a bit.
Sync mysql-v2 and mysql-cmake from the mysql overlay
Do not install wrapper headers when no ABI provides a particular header.
Remove last-rited eclasses.
Fail when package installs "share" subdirectory to PyPy prefix. This should stop people from adding PyPy support to packages that do not work due to the bug in PyPy.
Remove the coreutils dependency since the old copying code has been replaced by a more portable function. Bug #509984.
Use multilib-minimal for phase functions.
Remove i686-* renamed tools as well with USE=abi_x86_32.
Remove wrapped headers with USE=abi_x86_32.
Previous approach was wrong
Don't remove wrappers in that dir
ruby.eclass and gems.eclass have been deprecated by ruby-ng.eclass and ruby-fakegem.eclass. Removal in 30 days. See bug #479812.
Use amd64 headers for i686 when USE=-abi_x86_32 to maintain compatibility with current state of emul-linux. Fixes bug #509556.
Move headers to a separate directory, bug #509556
optionaly => optionally
Update VALA_MAX_API_VERSION (bug #509222, thanks to Arfrever) and modernize VALA_MIN_API_VERSION too.
Omit some obsolete version checks
Run multilib_src_configure() in parallel. Bug #485046.
add app-arch/plzip support (bug #509264)
Add missing @DESCRIPTION
Add MULTILIB_COMPAT to limit the supported ABIs for pre-built packages.
Update the doc and make it simpler.
Disable header wrapping on unsupported ABIs.
Reorder the operations in multilib_prepare_wrappers for easier reading.
Create ${CHOST}-prefixed tool symlinks for multilib portage, to gain better compatibility with plain multilib.
Disable wrappers for multilib portage only. Enable them in non-multilib profiles for consistency.
Move conditionals for enabling wrappers into multilib_prepare_wrappers() and multilib_install_wrappers().
Deprecate multilib_build_binaries, and switch the code to use multilib_is_native_abi.
vdr-plugin.eclass removed, see #497058, #489116
Sync with kde overlay. Remove custom branch calculation for kde workspace. Add kde-workspace 4.11.9 SRC_URI.
Drop qt5-build eclass
Add qt5-build eclass from qt overlay
Remove the QA warning from multilib_is_native_abi() till we decide which one to keep actually.
Sync mysql-v2 and mysql-cmake eclasses from the mysql overlay.
added prefix support (bug #401661)
Read the YAML metadata with UTF-8 by default and make an exception for older ruby targets, since all new targets will support (and need) the UTF-8 flag. Fixes bug 504642.
Add a QA warning to multilib_is_native_abi.
...and make multilib_build_binaries stand-alone.
Make multilib_is_native_abi equivalent to multilib_build_binaries, until all ebuilds are fixed.
Introduce extra multilib_native_use* wrappers encapsulating multilib_build_binaries & use* functions.
Update 3.15-rcX temporary fix. See bug #507656.
Support substituting ${PYTHON_USEDEP} in python_gen_cond_dep().
Automatically switch to EGIT_CLONE_TYPE=single+tags for Google Code.
Sync with overlay. Remove unused inherit. Switch to git-r3 eclass. Fix file collisions wrt bug #499032 and bug #507860. Add more dep reduction. Cosmetic improvements.
respect CFLAGS in linking command wrt #506956
Update openib eclass
Update openib eclass
multibuild_merge_root: re-introduce userland_BSD tar fallback, bug #507626.
Temporarily fix up >=sys-kernel/git-sources-3.15_rc1.ebuild, bug #507656.
Require at least gcc-4.8 for new LeechCraft packages
Enable reflinking in multibuild_copy_sources.
Use a more portable and clobbering "cp" call for multibuild_merge_root().
Only refer to DESTTREE within the src_install phase.
Re-enable the python_gen_usedep empty argument check.
Comment out the python_gen_usedep empty argument check until all python_gen_cond_dep uses are fixed.
Throw explicit error if python_gen_usedep does not match any implementation rather than outputting an empty (invalid) USE-dep.
Disable pypy2_0 and clean up after it.
Fix improper suggestions to put unsupported implementations in USE_PYTHON, bug #506814.
Use LC_ALL=C for tr call; fixes invalid configure options when using Turkish locale (bug #490894, thanks to Emre Eryilmaz and Samuli Suominen).
Make multilib@g.o the maintainer of multilib eclasses.
Add a note not to add new ABIs without contacting multilib.
Revert incomplete and broken s390 support. Please finally contact multilib before playing with this.
Add slot op to expected PyPy dependency string.
Ban the java-ant_remove-taskdefs() function and remove Python dependency, bug #479838.
Revert the introduction of ABI_PPC due to a lot of breakage, bug #506298 to track it.
Add support for USE triggered policy decisions in SELinux eclass
Sync with kde overlay. Raise QT_MINIMAL to latest stable and simplify Qt deps by Michael Palimaka <kensington@gentoo.org>. Add kde-workspace 4.11.8 SRC_URI.
Support rewriting symlinks in MULTILIB_CHOST_TOOLS, bug #506062.
Move test for MERGE_TYPE from check-reqs_pkg_setup() to check-reqs_run().
Output binary prefixes for units according to IEC 80000-13, as calculations are 1024 based. Fix documentation of check-reqs_get_unit function, and other minor fixes.
Fix typo in prefix block by Christoph Junghans <ottxor@gentoo.org>.
Add a single+tags mode to handle Google Code more efficiently, bug #503708.
linux-info: Bug #504346: Change one message from error to warning, kernel sources are optional for most uses (checking configs), and knowledge if the lack of kernel sources being missing is fatal should be left to the ebuild.
respect CFLAGS wrt #497532
added prefix support (bug #485608)
Mark @DEAD.
Some Gentoo PREFIX love
Update my src_uri.
Add python_doexe() and python_newexe() to handle implementation-specific executables without shebangs.
fix games.eclass to use games-misc/games-envd
Use subslot operator deps on non-slotted PyPy.
Add non-slotted pypy to the eclass.
Revert ignorant pypy2_2 commit.
Modify python-utils-r1 for pypy2.2
Indicate that AUTOTOOLS_AUTORECONF should be set before calling inherit.
Do not inherit base.eclass, bug 497054.
Remove stray character thanks to mimi_vx.
Add missing quotes wrt bug #503336.
Fix KDE 4.11.7 SRC_URI
Force EGIT_CLONE_TYPE=mirror for submodules since they can reference commits in any branch without explicitly naming the branch, bug #503332.
Use git-r3 for live ebuilds.
Call 'automake' via 'autotools_run_tool' (found by 'ebuild.sh' QA warnings).
Add multilib love for gnome2_gdk_pixbuf_update().
added lzip support (bug #501912).
Add support for python3.4.
Be more friendly with SELinux (#499636 by Luis Ressel)
Make problems with man page installation nonfatal
Pass --docdir with proper directory, bug #482646
Add support for parallel building (ghc-7.8+). Disable dynamic library stripping and respect --sysconfdir (Cabal-1.18+).
Drop also values of DGSEAL_ENABLE (#500730)
removed base.eclass, wrt bug 497056
Disable more bogus dependency checks wrt bug #494680.
Limit downgrading flags to amd64 and x86. Strip -mtune for < 3.4. Only worry about -mno* flags, -m* are removed by strip-flags. Add -mno-movbe.
Add downgrade_arch_flags() to automatically replace/strip unsupported -march and instruction set flags. Add testsuite.
respect ECONF_SOURCE wrt #494210
Add -fdiagnostics* and ISA flags for 4.8 and 4.9.
Drop inheriting base eclass, wrt bug #497040
set --datarootdir=/usr/share wrt #493954
major changes depend on wrt bug 497056, vdr-plugin-2.eclass
Improve support for ninja, bug 490280.
changed debug info in vdr-plugin-2_src_install for Makefile handling
Work around bug #357287.
Change virtual/monodoc dependency to dev-lang/mono as the former is being treecleaned (bug 471180)
Add 'ghc-supports-interpreter' helper to detect interpreter support.
Silence sandbox for /usr/local, bug 498232.
Convert to python-any-r1.eclass
Sync from kde overlay, adds subslot
Fix QA warning concerning inherit
Fix kernel-2.eclass to use python.eclass for its python needs (deblob script). See bug #497966
Revert inadvertent change, as noted by arfrever.
Spelling.
Typo.
Support MULTILIB_CHOST_TOOLS for tool renaming/preserving.
Explicitly cp symlinks as-is. The default for this changed in coreutils 8.22. Fixes bug 472710.
Actually enable in-source build support.
vdr-plugin.eclass marked @DEAD
Remove use of sed in linux-mod.eclass. Replace with bash.
Add EAPI 0 compatible USE defaults (bug #372663).
Add support for default ssp on >=gcc-4.8.2 #484714
Removing silly beep from apache-2.eclass
Cleanup due
Fix twisted SRC_URI. Thanks to yac for the patch.
Add first changelog entry so echangelog does not act up
Rotate ChangeLog
Improve documentation on multilib_is_native_abi & multilib_build_binaries to make it clear which one to use. Requested and reviewed by okias.
Add 3.0 support.
For 4.8+ C++ is always enabled. We should eventually drop the cxx USE flag, but who knows what that would break.
Do not use subslots on dev-lang/perl in perl apps (as opposed to modules)
Use subslot dependencies on dev-lang/perl if possible, bug 479298
Update doc link to point to the docs on Wiki.
Fix eclassdoc for einstalldocs.
Missed one.
Use version ranges instead of case statements in gcc_do_filter_flags().
Add tc_version_is_between() helper.
Reintroduce texinfo patch to unbreak older versions (bug #496224).
Spelling fixes.
add debug print function wrt #493214
Initial EAPI support (bug #474358).
Document einstalldocs.
Add support for the ruby21 target.
Add another pathological use flag function, useno (inversion of use)
Rename gtk USE flag to awt. Remove lto USE flag. Minor cleanup.
Use eqmake4() from qmake-utils.eclass
Reorder by phase and group functions together with the phase that uses them. No functional changes.
Refactoring: Inline gcc-compiler-configure() into gcc_do_configure(). Clean up, sort and group options.
Added HTTPS URL to EGIT_REPO_URI to allow live ebuilds to fetch data if firewall prohibits git (for SELinux eclass)
Fix multiprocessing.eclass for non-Linux hosts
Remove warning for uclibc if patching fails, bug #492640
Check SECCOMP_FILTER kernel config option for Chromium sandbox, bug #490550 by ago.
Remove pointless distutils-r1_python_test function.
Override bdist_egg->build_dir via pydistutils.cfg rather than extra command. Fixes bug #489842.
Add qmake-utils eclass from Qt overlay
Always ensure MODULES_OPTIONAL_USE is in IUSE.
MODULES_OPTIONAL_USE makes it possible to optionally use linux-mod without introducing dependencies on virtual/linux-sources or virtual/modutils where unwanted.
Depend on dev-lang/python-exec:0 if _PYTHON_WANT_PYTHON_EXEC2 is 0, bug 489646.
export pkg_setup and default PHP_PEAR_CHANNEL to ${FILESDIR}/channel.xml (all current packages set this value)
integrate php-pear-lib-r1 into php-pear-r1 eclass
Extended symlink fix back to 4.4 (see bug #363839).
Prevent comparison failures due to -frecord-gcc-switches (bug #490738).
Updates for handling texinfo breakage/version stuff (see bug #483192).
Support in-source builds.
Respect BUILD_DIR in in-source builds when set earlier.
Reuse multilib-minimal to reduce code duplication and allow easier function overrides.
libtool.eclass elibtoolize(): Besides ltmain.sh, explicitly locate configure to apply patches rather than guessing based on where ltmain.sh was found.
Run multilib header wrapping only when multiple ABIs are enabled, bug #491752. Other eclasses do that already.
Added 'replace-hcflags()'. Filters HCFLAGS.
Updated to strip graphite flags and enable openmp use flag
Reimplemented K_EXP_GENPATCHES_LIST patch matching by switching the LHS with the RHS and doing much more proper matching.
Cleanup.
Drop wxwidgets_pkg_setup and check_wxuse.
Drop support for 2.6.
Use shallow clones for local repos. Bug #491260.
Sync with qt overlay. Changes should affect only live ebuilds.
Read all shebangs before moving files to avoid breaking symlinks that are going to be scanned.
Add a yard recipe for creating documentation.
Revert previous and instead include the build log and some other info. Don't pollute the build dir.
Rename config.log tarball in the hope that people will stop attaching it to build failures.
.
Temporarily build with -j1 on sparc due to random ICEs encountered by multiple people (bug #457062).
Update for libmudflap removal.
Initialize cert_opt to an empty array instead of an empty string. Reported by Kristian Fiskerstrand.
Don't create site-gentoo.el in postrm phase.
Fix python-utils-r1 tests to accommodate versions in PYTHON_PKG_DEP.
kernel-2.eclass: Trivial change to support patches with pre-defined patch levels.
Add -fno-builtin* to ALLOWED_FLAGS - requested by Justin Vrooman.
Account for leading whitespace in append-cflags tests.
Fix parallel checkout race conditions, bug #489280.
Switch the eclasses to use dev-lang/python-exec.
Create a fake ".git" directory inside the checkout to satisfy git rev-parse uses in build systems. Bug #489100.
Strip sub-slot from local repo IDs.
Remove deprecated functions.
Consider -frecord-gcc-switches a safe flag and do not strip it with strip-flags.
Fix distutils-r1_python_install to strip --install-scripts= rather than passing "install" twice to override it. Fixes compatibility with dev-python/paver.
Fix handling relative submodule paths.
Fix failing to pass default install arguments when user passes an additional command. Reported by radhermit.
Introduce a "common" python_setup function to set up Python for use in outer scope.
Support installing Python scripts with custom --install-scripts argument. Bug #487788.
Add systemd_enable_ntpunit wrt bug #458132.
Updates from qt overlay: drop USE="c++0x" from 4.8.5 and later versions; warn on downgrades instead of dying.
make doc installation part of default multilib_src_install_all() wrt #483304
added prefix support (bug #485534)
Remove .la files for libasan and libtsan. They reference non-existent libstdc++.la when fixlafiles is disabled/unsupported, and -fsanitize doesn't work with -static anyways. (bug #487550)
Fix over-use of ||die.
Add qtbearer to nolibx11_pkgs
Switch to git-r3.eclass
Add missing "die" calls as reported by Nikoli.
Respect EVCS_OFFLINE in git-r3_fetch.
Use readme.gentoo.eclass (bug #457594).
Do not look up Python for binary package install.
Do not alter HOME and TMPDIR when single impl is being used. This may work-around bug #487260.
Fix pypy dependency.
Bump dependencies on Python interpreters to require newest stable versions. Bug #463532.
Skip submodules that have update=none specified in config. Fixes bug #487262.
Fix git-r3 -> git-2 dependency leak, as noted in bug #487026.
Remove deprecated autotools-utils_autoreconf.
small modification to output from function dev_check
Convert comments for eclass manpages. Heavily based on work from ercpe, bug #476946.
Remove lastrited git.eclass.
Add missing git DEPEND wrt bug #487026.
Convert comments for eclass manpages. Heavily based on work from ercpe, bug #476946.
Convert comments for eclass manpages. Almost completely based on work from 'mren <bugs@rennings.net>' in bug #210723 and ercpe from bug #476946.
Prepare for vala-0.22
No stable keywords for mips
Clean up the splitting code wrt suggestions from Ulrich Mueller.
Split ABIs without altering IFS, to work-around bug in Paludis, bug #486592.
Fix duplicate flags in MULTILIB_USEDEP. Thanks for the report and the patch to Ulrich Mueller.
add prefix support
EAPI bump, ccache support
Add support for gstreamer 1.2 release series.
Last rite python-distutils-ng.
Use einstalldocs (#484876)
Truncate .pydistutils.cfg in case we call distutils-r1_python_compile more than once.
added prefix support
Use pydistutils.cfg to set build-dirs instead of passing commands explicitly. This should reduce the amount of implicit behavior.
Make HOME per-implementation.
Always fetch all branches when doing non-shallow fetch.
Fix parsing EGIT_REPO_URI. Bug #486080.
Update doc on EGIT_NONSHALLOW.
Wrap symlinks installed to PYTHON_SCRIPTDIR as well.
Fix EAPI=4 on python-exec:2 since that is what pkgcore will require (the only EAPI=4 consumer right now).
Require EAPI>=2, add prefix support
Support EGIT_REPO_URI being an array. This is needed for tests.
Update git URI stripping for gnome.org.
Introduce python_gen_any_dep to generate any-of dependencies matching python_check_deps() code.
Correct official mirror url in SRC_URI.
added prefix support
Fixed prefix qa
Strip trailing slashes from repo URI when determining local copy directory.
Do not even create shallow repository when EGIT_NONSHALLOW is set. Otherwise, the eclass tries to unshallow it and that breaks broken git servers like Google Code.
Fix accepting arguments in distutils_install_for_testing.
Add a note not to add new Python versions to python.eclass.
Add official mirror to LeechCraft SRC_URI, thanks to 0xd34df00d
Fix coreutils dep to be build-time.
Fix missing variable replacement in _python_ln_rel.
Use einstalldocs.
Rename variables in _python_ln_rel to make it less confusing.
Support python-exec:2.
Add eclass doc for multilib_build_binaries
Introduce PYTHON_SCRIPTDIR for python-exec:2 script location.
Clean up Python script install/wrapping functions.
Add multilib_build_binaries function to multilib-build eclass
Deprecate python_get_PYTHON and python_get_EPYTHON.
Support gtk+-2.24.20 query immodules (#476100)
Support EAPIs < 4 in einstalldocs properly.
Commit the version of einstalldocs() Council agreed upon.
Depend on SLOT 0 of python-exec, for future compatibility.
Add new enough coreutils dep wrt bug #484454.
Introduce smart switching between "git fetch" and "git fetch --depth 1" to save bandwidth.
Inherit git-r3 unconditionally to avoid metadata variancy. The eclass is properly namespaced, does not touch variables in global scope and exports only src_unpack() that git-2 overrides anyway.
Fix PYTHON_SITEDIR for Jython.
Do not export PYTHON_INCLUDEDIR on Jython, since it does not install headers.
added multilib_*_wrappers calls to cmake-multilib.eclass
Do not pass --depth when updating a branch, as it triggers issues in git. Instead, use it for the first fetch only.
Support using git-r3 backend in git-2.
Introduce the new git eclass.
Added support for experimental genpatches.
Disable Python 2.5, 3.1 and PyPy 1.9 in the eclass.
elisp-common.eclass: Add proper @CODE tags in comments.
Add gdk-pixbuf cache handling functions.
Add git support for live packages using enlightenment.eclass
Introduce python_is_python3() to replace the common checks.
Made ant-tasks.eclass support newer versions of the 1.9 branch.
Enable EAPI=4 on python-r1.
Don't add subslots to JAVA_PKG_NAME (bug #482270 by chutzpah).
Fix bug #415805, unset MOZCONFIG
Mark _copy-egg-info as internal.
Copy bundled egg-info files for reuse in python_compile(). This solves issues that caused some of the files not to be installed in broken packages.
Namespace, clean up and describe _disable_ez_setup.
Fix indentation and trailing whitespaces.
Improve automagic dependency reduction, fixing bug #481996.
Prefix support for wxwidgets, bug #481093
Fixes for KDE 4.11 and other small improvements.
Remove dependencies and actions that are now handled in individual ebuilds.
Remove old, unused code.
Remove EAPI 3 support.
Ref bug number.
alpha not only needs -Wl,--norelax for 4.6 to build itself, but also for it to build other versions. So let's just always enable it.
Filter -fgraphite-identity on gcc 4.7 (bug 417105).
Remove workarounds for very old and unsupported gcc-3 versions. Warn if trying to use gcc < 4.4 and USE=c++0x.
Add compatibility to latest cuda compiler; respect LDFLAGS
Don't call EXPORT_FUNCTIONS if CHROMIUM_EXPORT_PHASES is set to 'no'.
Clean up gcc_do_filter_flags a bit more. Drop ppc64 workaround for 3.2/3.3 as neither is keyworded. Also stop replacing -march=i686 with x86-64 (?!) for those versions.
Append -Wl,--no-relax to LDFLAGS on alpha for 4.6 (bug #454426 again).
Introduce a twisted-r1.eclass to join the python-r1 suite and replace twisted.eclass.
Allow wrapping headers that are installed only for some of the ABIs.
Mention that PYTHON_REQ_USE should be set before calling inherit.
Drop the old PYTHON_COMPAT hack for python-exec.
Migrate twisted deps in distutils eclass
Sync from Emacs overlay: Make elisp-emacs-version() more robust.
Mark git.eclass @DEAD.
Add bug reference for distutils-r1 phase overwrite patch.
Fix bug reference.
Shout QA warnings when _all() phases do not call the default impls. Bug #478442.
python-any-r1: bail out on invalid PYTHON_COMPAT.
reword security-deblob-thing acked by keytoaster
Allow using >=dev-lang/perl-5.16 without 'build' in IUSE.
Update the emul-linux blocker to support abi_x86_32 flag on emul-linux.
Add MIPS support to multilib-build.eclass.
ask user to run haskell-updater for old packages (like in bug)
Fortran-2.eclass: enhance support for binary packages, #477070
Intel-sdp.eclass: Allow single package downloads, custom suffix, full specified rpm target location
Rewrite sed expression in qt_nolibx11() to work on both 4.8.4 and 4.8.5. Fixes bug 478018.
Don't block mono-3
Switch eclasses to use virtual/pypy (and therefore support pypy-bin).
Use PYTHON_PKG_DEP for generating deps.
Introduce systemd_is_booted() to allow ebuilds to warn consistently for things that require systemd. Bug #478342.
Export working copy information after the update rather than in pkg_preinst(). This makes it possible for ebuild to reference e.g. ESVN_WC_REVISION properly. Bug #282486.
Dropped media-libs/fontconfig dependency, bug 446012.
Add some debug.
Fix function desc to match reality.
Add back LICENSE (bug #477836).
Handle dev-qt/designer split from dev-qt/gui, see bug #477934.
Fix typo.
Another attempt at fixing bash-completion for Prefix, bug #477692#c9
Remove use of EPREFIX wrt #477692 to avoid collision with Portage helpers
Replace local+export with "local -x".
Cleanup due #231248
Support different tarball extensions
libffi installation was fixed in 4.8.
Wrap lines at 80 char
Support pkg-config and the new upstream completions directory structure wrt #472938 and introduce new get_bashhelpersdir function to obtain the helpersdir="" value.
Deprecate media-sound/alsa-headers by letting sys-kernel/linux-headers install the required sound/ headers wrt #468712#c6
add CMAKE_WARN_UNUSED_CLI to cmake-utils.eclass
New variable EBZR_UNPACK_DIR.
Add a safety check for using python_optimize() in pkg_*.
Fix typo in compileall call.
Add multilib_is_native_abi helper.
Additional change to run tests successfully on newer versions of db
Stub-out ez_setup.py and distribute_setup.py to prevent packages from downloading their own copy of setuptools.
Update SRC_URI. Drop support for EAPI 3.
Update ant-tasks for 1.9.1 version bump, needs 1.5 as minimal JDK / JRE version.
Deprecate EAPI 3.
Correct src_prepare description.
Add eclass to be used by all now splitted gnome-games
Respect arguments when checking for test targets. This becomes helpful if one of the arguments is -C.
Disable trying (and failing) to wrap headers when multilib is disabled, bug #474920.
Fix redundant slashes in header-wrapping include paths, bug #475046. Thanks to Arfrever for the patch.
Add netsurf.eclass
Add support for ruby20.
Add debug print
Enable EAPI 4 per bug #474000.
fixed use of proxy variables
Allow eapi4 (#473610)
Update default VIM_PLUGIN_VIM_VERSION to 7.3.
Set a default SRC_URI
Introduce get_bashcompdir(), wrt bug #469858.
Remove old VIMRUNTIME warning.
Replace backticks with $(...).
Remove old runtime and netrw snapshot unpacking support.
Do not require fontconfig at runtime, it isn't necessary for many purposes, thanks to Nikoli for the original patch
Convert econfargs from an ECLASS-VARIABLE to a function-specifc VARIABLE for autotools-utils_src_configure.
Improve docs for PYTHON and EPYTHON.
Add support for EQMAKE4_EXCLUDE.
Fix REQUIRED_USE with PHP_EXT_OPTIONAL_USE set. Fixes REQUIRED_USE unsatisfied constraints triggered by "USE=-php PHP_TARGETS= emerge media-libs/ming".
Quote DISTDIR.
Remove unnecessary blank IUSE.
Don't set SRC_URI for live ebuilds.
Remove here-doc write failure handlers due to bug #471926.
Use cat rather than echo for heredoc output :).
Add PYTHON_REQUIRED_USE for python-single-r1.
Ensure plugins/extensions are in correct place for >=firefox{-bin}-21.0
Pass --enable-compile-warnings=minimum as we don't want -Werror* flags, bug #471336
Add mono-env.eclass to start a migration to simpler dotnet related eclasses,
Reword sentences a bit
Fix the race condition in locking code by using $BASHPID instead of $$.
Use portable locking code from Fabian Groffen. Bug #466554.
Fix the libtool check, bug #470938.
Replace the .la sanity check by one used in libtool itself. Fixes removing qmake-generated .la files, bug #470206.
Set PYTHON_REQUIRED_USE, and add it to REQUIRED_USE in distutils-r1.
Add a note informing people a file is being installed for future reference,
Remove eclasses marked as dead for quite some time.
Mark eclass as dead to be removed in 30 days.
Support new plugins directory on Firefox >= 21.0 (bug #469932)
Remove unused src_unpack function.
Migrate from supporting cvs to mercurial for live builds.
Check for lspci before use.
prune_libtool_files: do not remove .la files which are not libtool files. Fixes bug #468380.
Explicitly disable lto in 4.5 to stop configure from helpfully re-enabling it when libelf is present.
Rename test USE flag to regression-test.
Add lto USE flag for all versions. Drop LTO support for 4.5.
Enable EAPI=4 on multilib eclasses.
Added 'ghc-supports-smp' and 'ghc-supports-dynamic-by-default' helpers. Added hint for users to run 'haskell-updater' if configure phase failed.
use find to get file permissions instead of chmod --reference which is not portable, bug #468952
Consistently create ${EPYTHON} subdir for Python wrappers. Fixes conflict between Python & vala wrappers, bug #469312.
Drop graphite support for 4.4/4.5.
Restrict supported EAPIs (due to pkg_pretend), some logic cleanup
Fix dangling gvim manpage symlinks (bug #455480, patch by Arfrever).
Fix build with sys-libs/ncurses[tinfo] (bug #457564, patch by Ben Longbons).
Support complete EAPI src_test().
improve handling of DOCS variable wrt #468092
Support disabling .la pruning completely.
Inline src_test and allow passing arguments.
Leechcraft changed license since 0.5.95
Use bash built-ins rather than external tools.
Support declaring python_check_deps() in ebuilds, to check Python impl for proper deps.
Report no matching impl properly.
Improve consistency in Python version checks and wrapper setup.
Corrected UNIPATCH_DOCS functionality, a mistake slipped through testing and review, fixes bug #467916 reported by Alexandros Diamantidis.
Reverting autotools.eclass commit that broke eautoreconf (bug #467772), acked by multiple people in #gentoo-dev.
Bug #467646 - Refer to /etc/portage/make.conf, not /etc/make.conf.
Raise ant-core dep to version 1.8.2. #466558
Fix python_*_all() phases with DISTUTILS_SINGLE_IMPL.
Reverted .tmp_gas_check patch, see bug #336732.
Request package specific emerge --info in death hook.
Fixes for bugs 421721, 445110, 436402, 336732 and 301478. See ChangeLog for details.
Remove support for EAPI 2 and 3 for php-ext-source-r2. Use REQUIRED_USE for target depends
Remove carriage-return from shebang before validating it, bug 465790 by ikelos.
Replace basename usage with a shell parameter replacement.
Use pkg-config to query systemd directories.
Pass ${@} in phase functions. Approved by author on dev-ml.
Spelling fixes.
Unmask the egg_info block for further testing. Feel free to comment it out if you can reproduce the earlier issues.
Move the egg_info code into a more realistic location for future testing.
Update documentation URL.
Remove duplicate PCI ID.
Add support for 1.6.0-nv2 (bug #465340).
Disable debug symbols unless debug useflag is enabled
Make PT default for pax marking (pax-utils.eclass)
Fix for installing ini files in the tree. Bug 464900
Fix for gcc info page installation #464008
Remove wrong sed on QT_INSTALL_{LIBS,PLUGINS}. See bug 304971 comments 16-18.
Hi Guys
I've recently been reviewing some code that deals with data conversion; however, when compiling the code, more than a fair share of warnings have arisen.
Here is the code I am looking at today:
The warning I am getting is:

Code:

/* n_abidat.c : ABI / Non-ABI data conversion */

#include <w_scr.h>
#include "n_abidat.h"

/* Macros for converting adt union fields to strings or int/long values */
#define ANUM( s, f)     GetNum( adt.s.f, sizeof( adt.s.f))
#define ANUML( s, f)    GetNuml( adt.s.f, sizeof( adt.s.f))
#define ASTR( s, f, d)  GetStr( adt.s.f, sizeof( adt.s.f), d)

/*-------------------------------------------------------------------------*/
extern void ReadAbiData( fh)
FH *fh;
{
    int x = 0;
    uchar cArea[ 6], cAlarm[ 6];

    cArea[ 0] = NULL;  //Because cArea is set to NULL line 20 is expecting to be passed a pointer.
    strcpy( cArea, ConvertNpArea( G.dstdesc));       /* Read area */
    GetAreaData( fh, cArea);

    GetCarData( fh, veh.abicode);                    /* Read car group */

    for( x = 1; conviction.who[ x]; x++)             /* Read convictions */
        GetConvData( fh, conviction.desc[ x], x);

    for( x = 1; x <= G.noalarm; x++)                 /* Read security */
    {
        if( alarm.abi[ x])
        {
            cAlarm[ 0] = NULL;
            sprintf( cAlarm, "%.5d", alarm.abi[ x]);
            GetAlarmData( fh, cAlarm, x);
        }
        else
            GetNpAlarmData( fh, alarm.desc[ x], x);
    }

#ifdef KF
    if( G.systype == TEST1 || TEST_SYS)
    {
        for( x = 1; x <= G.nomods; x++)              /* Read Modifications */
        {
            GetModsData( fh, modify.abi[x], x);
        }
    }
#endif

    return;
}
Can anyone help? I have added a comment to line 20 for what I think is the problem; I was just looking for some confirmation.
Thanks
Jim
22) and (8. 327 14.Sterchi D. Lebiere (Eds. Substitution into the above system results in the following simultaneous equations, 8T1 2T4 634.
0 vs. In postmenopausal women-in a study combining more binary option broker demo 50 epidemiological investigations-it was concluded that a small increase in binary trading profits relative risk of biinary cancer oc- curred within the first 10 years after cessation of the use of synthetic estrogensprogestins.
Individuals or teams may be prone to set less realistic goals after successful achievement of early easy goals. Encapsulation public methods can be used to protect private data Most people naturally view the world as made up of objects that are related to each other in a hierarchical way.
Nat1 Acad Sci USA. Today, genericadherendismadeupofacrylicresins, morethan400varietiesofpressuresensitive petroleumbyproductsthatarebrokendown tapesaremanufactured. 8718531861, 2(1), 3445. An important mechanism that supports place iden- tity is the attachment to a specific place. Page 235 216 SCHIZOPHRENIA 48.
Patterns of eating, sleeping, avoidance (and escape) rituals, and social routines that persist can still trigger cues inducing cravings, especially if the abuser replaces the drug with substitute compounds or titration or returns to settings that were conducive to drug use.
Promoting Effective Inclusive Instruction Further Reading GLOSSARY allocated time on task The amount of time that teachers plan for instruction. (2000). In A. The x-axis is called the real axis and the y-axis the imaginary axis.Bohlander, S. Тption, leadership behavior, manual dexterity and sensitivity for specific tools in production work) that are believed to have been underrated in a particular job evaluation system. 62) (3. Herz Borker. Dryja, T. 9 in 2001. How does a particular role help in the achievement of the goals of the system.
Furthermore,inorderfortheLaplacetransformofh(t)tobedefined,it is necessary to have RepRep α2 or Rep Rep α2 which implies Rep bα2.Salvador-Carrulla Binary trading yahoo answers. The algorithms by which this determination is made are specific to each server.
75 μL T4 ligase (300 enzyme units), 3 μL 10X T4 ligase buffer, DNA, and ddH2O to 30 μL. An interesting footnote to this story is that Broca did not do a very careful examination of Tans brain. Binary option broker demo, pp.
Blackwell Science, Oxford. Crow T. 23 Forced convection heat transfer from a sphere. Rather, they should be thought of as relational and evaluative, or in other words, as ways to understand the world and to agree or disagree with it, or to act on it or cause reactions in it. Sci. Attentional dyslexia. While most of the epidemiological risk factors for estrogen receptorpositive and estrogen re- ceptornegative or deficient breast cancers were similar, therefore, a constant (electric) diffusion current (Note this diffusion current is not to be confused with the ballistic current defined binary option broker demo Brooker.
Race, intelligence and education. 2 years for men and 4. These are identified in English by the following terms red, green, blue, yellow, white, black, brown, orange, purple, pink, and gray. All rights reserved. Once binary option broker demo can reliably name anything and everything, it is valid C (dont bother trying to figure out what it does) so the compiler binary options broker minimum trade catch this error.
Behindthefrontpanel,thecontrol circuitboardisattached. Treatment trials of up to 3 months have been recommended. Let the discussions continue A reaction to Lockes comments on Weinberg and Weigand.
There are five major component skills that proficient readers master, typically in kindergarten to grade 3. Five basic paradigms have emerged, I. Gen. All rights reserved.Binary option payoff
|
http://newtimepromo.ru/binary-option-broker-demo-7.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
PSoC creator 3.3 Generated Source | Cypress Semiconductor
Summary: 4 Replies, Latest post by morten_1599096 on 14 Mar 2016 04:24 AM PDT
Verified Answers: 0
Hi Everyone,
I have a question regarding the "project.h" file created upon "Clean and Build" in PSoC Creator 3.3. This file contains a lot of "#include" statements, but after the "Clean" process I have to delete three of these "#include" statements. Is there any way of telling PSoC Creator that these should not be created again? Normally I avoid it by just "Building" the project, and not "Cleaning" it too.
Kind regards
Morten Skelmose
Normally a clean and build of a project does not require any changes to the generated files.
Can you please post your complete project, so that we all can have a look at all of your settings? To do so, use
Creator->File->Create Workspace Bundle (minimal)
and attach the resulting file.
Bob
Dear Bob,
Thank you for your answer. I'm afraid it's not possible to post my code, due to the nature of the project. I don't know if the problem is due to the upgrade from an older PSoC 4 chip to a new PSoC 5?
But in my case I have to delete these three includes
#include "core_cm3.h"
#include "core_cmFunc.h"
#include "core_cmInstr.h"
every time..
Try to remove the "Include cmsis...." as shown in picture
This may remove the symptom, not the cause. The libraries seem to have vanished into electronic nirvana or are simply not accessible.
I would suggest searching for them or - when all else fails - re-installing Creator with option "Complete" (not "Typical").
Bob
Dear Bob,
Thanks for your advice - I tried to remove the "Include cmsis" - but 4 other errors and a lot of warnings occurred instead. I might go for the re-install once I have enough time :-)
Thank you so much for you time and advice! :-)
Kind regards
Morten Skelmose
scipy-data_fitting 0.2.4
Data fitting system with SciPy.
Check out the example fits on Fitalyzer. See the Fitalyzer README for details on how to use Fitalyzer for visualizing your fits.
Documentation
Documentation generated from source with pdoc for the latest version is hosted at packages.python.org/scipy-data_fitting/.
To get started quickly, check out the examples.
Then, refer to the source documentation for details on how to use each class.
Basic usage
from scipy_data_fitting import Data, Model, Fit, Plot

# Load data from a CSV file.
data = Data('linear')
data.path = 'linear.csv'
data.error = (0.5, None)

# Create a linear model.
model = Model('linear')
model.add_symbols('t', 'v', 'x_0')
t, v, x_0 = model.get_symbols('t', 'v', 'x_0')
model.expressions['line'] = v * t + x_0

# Create the fit using the data and model.
fit = Fit('linear', data=data, model=model)
fit.expression = 'line'
fit.independent = {'symbol': 't', 'name': 'Time', 'units': 's'}
fit.dependent = {'name': 'Distance', 'units': 'm'}
fit.parameters = [
    {'symbol': 'v', 'guess': 1, 'units': 'm/s'},
    {'symbol': 'x_0', 'value': 1, 'units': 'm'},
]

# Save the fit result to a json file.
fit.to_json(fit.name + '.json', meta=fit.metadata)

# Save a plot of the fit to an image file.
plot = Plot(fit)
plot.save(fit.name + '.svg')
plot.close()
Controlling the fitting process
The above example will fit the line using the default algorithm, scipy.optimize.curve_fit.
For a linear fit, it may be more desirable to use a more efficient algorithm.
For example, to use numpy.polyfit, one could set a fit_function and allow both parameters to vary,
fit.parameters = [
    {'symbol': 'v', 'guess': 1, 'units': 'm/s'},
    {'symbol': 'x_0', 'guess': 1, 'units': 'm'},
]
fit.options['fit_function'] = lambda f, x, y, p0, **op: (numpy.polyfit(x, y, 1), )
Controlling the fitting process this way allows, for example, incorporating error values and computing and returning goodness of fit information.
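As a concrete illustration of that idea, the sketch below wraps numpy.polyfit so that the fit also returns the sum of squared residuals as a crude goodness-of-fit measure. The helper name polyfit_with_residuals and the check data are purely illustrative, not part of the package; the (f, x, y, p0, **op) signature follows the fit_function example above.

```python
import numpy

def polyfit_with_residuals(f, x, y, p0, **op):
    # With full=True, numpy.polyfit also returns diagnostic values;
    # the second one is the sum of squared residuals of the fit.
    coeffs, residuals, rank, singular_values, rcond = numpy.polyfit(x, y, 1, full=True)
    return coeffs, residuals

# Quick check on exact linear data: v = 2 m/s, x_0 = 1 m.
x = numpy.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
coeffs, residuals = polyfit_with_residuals(None, x, y, None)
```

A function like this could then be assigned to fit.options['fit_function'] in the same way as the lambda above.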
See scipy_data_fitting.Fit.options for further details on how to control the fit and also how to use lmfit.
Installation
This package is registered on the Python Package Index (PyPI) at pypi.python.org/pypi/scipy-data_fitting.
Add this line to your application’s requirements.txt:
scipy-data_fitting
And then execute:
$ pip install -r requirements.txt
Or install it yourself as:
$ pip install scipy-data_fitting
Depending on your system configuration, you may need to run the above commands with sudo. Alternatively, you may want to use a virtualenv, which is beyond the scope of this documentation.
Note that the large scientific packages such as NumPy, SciPy, and matplotlib may also be available via your system’s package manager.
To live on the bleeding edge, instead of the package name scipy-data_fitting, you can use this repository directly with
git+
Note about dependency versions
This package intentionally does not specify dependency versions. Thus, pip will use whatever required packages are currently installed or fetch the latest available version for missing dependencies.
If you want to control what package versions are used, you should specify them explicitly in your project’s own requirements.txt.
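For example, a pinned requirements.txt for a project using this package might look like the following (the version numbers are purely illustrative):

```
scipy-data_fitting==0.2.4
numpy==1.8.1
scipy==0.13.3
matplotlib==1.3.1
```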
Development
Source Repository
The source is hosted at GitHub. Fork it on GitHub, or clone the project with
$ git clone
Install dependencies with
$ pip install -r requirements.txt
and install the package in development mode with
$ python setup.py develop
Depending on your system configuration, you may need to run the above command with sudo or use a virtualenv.
Note that the large scientific packages such as NumPy, SciPy, and matplotlib may also be available via your system’s package manager.
Documentation
Generate documentation with pdoc by running
$ make docs
Tests
Run the tests with
$ make tests
Examples
Run an example with
$ python examples/example_fit.py
or run all the examples with
$ make examples
License
This code is licensed under the MIT license.
Warranty
This software is provided “as is” and without any express or implied warranties, including, without limitation, the implied warranties of merchantability and fitness for a particular purpose.
- Author: Evan Sosenko
- Documentation: scipy-data_fitting package documentation
- License: MIT License, see LICENSE.txt
- Package Index Owner: razorx
- DOAP record: scipy-data_fitting-0.2.4.xml
Random Processes
From HaskellWiki
1 Installation
Download the `randproc` package:
$ cabal install randproc
and import it:
import Data.RandProc
2 Description

2.2 Public interface

3 Usage
For more extensive usage examples, see the `test/Test.hs` file.
3.1 Construction of a probability space representing a fair coin
fairCoin = ProbSpace [point 0.0, point 1.0]
                     [ Measure ( [Empty])            0
                     , Measure ( [point 0])          0.5
                     , Measure ( [point 1])          0.5
                     , Measure ( [point 0, point 1]) 1.0
                     ]
3.2 Testing the probability space defined above
checkProbMeas fairCoin
4 Documentation
The documentation for this library is generated via Haddock.
5 Related work
- Probabilistic Functional Programming
- I suspect a potential symbiosis between this library and `RandProc`
Converts a distinguished name (DN) to canonical format and converts all characters to lowercase. Calling this function has the same effect as calling the slapi_dn_normalize() function followed by the slapi_dn_ignore_case() function.
#include "slapi-plugin.h"

char *slapi_dn_normalize_case( char *dn );
This function takes the following parameters:
DN that you want to normalize and convert to lowercase.
This function returns the normalized DN with all lowercase characters. Note that the variable passed in as the dn argument is also converted in place.
Adding GC Support to an Existing Python Type
October 14, 2002 | Fredrik Lundh
Note: The code in this example doesn’t work properly with the new GC scheme in Python 2.2 and later. For more information, see Neil Schemenauer’s Adding GC Support to a Python Type page. I’ll update this page when I find the time.
This note shows how to add support for garbage collection to Python types written in C or C++.
The approach used in this note allows you to add GC support to an existing type implementation, and still use the same source code with Python versions before 2.0.
The simplified MyObject type used in this note contains two fields that may point to other Python objects, and indirectly back to the object itself. The member1 member is always set to a valid Python object, while the member2 member may be NULL.
The first snippet sets the USE_GC variable if garbage collection is supported by the Python interpreter.
#if PY_VERSION_HEX >= 0x02000000
/* use garbage collection (requires Python 2.0 or later) */
#define USE_GC
#endif
Next, you have to add PyObject_GC_Init and PyObject_GC_Fini calls to the object allocation and deallocation functions. Call Init after you’ve initialized the object, and Fini just before you start releasing the internal pointers.
Also note that if you’re using the NEW/DEL interface, you must use PyObject_AS_GC to get a pointer that you can pass to PyObject_DEL. If you’re using the New/Del interface instead, leave out the AS_GC call.
static PyObject*
my_alloc(PyObject* module, PyObject* args)
{
    MyObject* self;
    ... parse arguments ...
    self = PyObject_NEW(MyObject, &My_Type);
    if (self == NULL)
        return NULL;
    Py_INCREF(Py_None);
    self->member1 = Py_None;
    self->member2 = NULL;
    ... initialize more members ...
#if defined(USE_GC)
    PyObject_GC_Init(self);
#endif
    return (PyObject*) self;
}

static void
my_dealloc(MyObject* self)
{
#if defined(USE_GC)
    PyObject_GC_Fini(self);
#endif
    Py_XDECREF(self->member1);
    Py_XDECREF(self->member2);
    ... release more members ...
#if defined(USE_GC)
    PyObject_DEL(PyObject_AS_GC(self));
#else
    PyObject_DEL(self);
#endif
}
Next, you have to define two GC helper functions. The traverse function is used by the garbage collector to find all reachable objects. It should call the visit callback on all object members that may point to other Python containers. Python will assume that it’s safe to call the traverse function for an object after you’ve called PyObject_GC_Init on it, and until you call PyObject_GC_Fini.
#if defined(USE_GC)
static int
my_traverse(MyObject *self, visitproc visit, void *arg)
{
    int err;
    err = visit(self->member1, arg);
    if (err)
        return err;
    if (self->member2) {
        /* don't pass NULL to the visit function */
        err = visit(self->member2, arg);
        if (err)
            return err;
    }
    return 0;
}
#endif
The second helper is used to release all object references. The dealloc function will be called at a later time, so you must make sure to mark objects as released, and avoid releasing them again when the object space is reclaimed. Here, we set the members to NULL, and use Py_XDECREF in the dealloc function:
#if defined(USE_GC)
static int
my_clear(MyObject* self)
{
    Py_DECREF(self->member1);
    self->member1 = NULL;
    Py_XDECREF(self->member2);
    self->member2 = NULL;
    return 0;
}
#endif
Finally, the type descriptor must be modified a bit. The following code patches the type descriptor in place, in the module’s init method:
initmymodule(void)
{
    /* patch object type */
    My_Type.ob_type = &PyType_Type;
#if defined(USE_GC)
    /* enable garbage collection for this type */
    My_Type.tp_basicsize += PyGC_HEAD_SIZE;
    My_Type.tp_flags |= Py_TPFLAGS_GC;
    My_Type.tp_traverse = (traverseproc) my_traverse;
    My_Type.tp_clear = (inquiry) my_clear;
#endif
    Py_InitModule("mymodule", my_functions);
}
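With tp_traverse and tp_clear in place, reference cycles through instances of the type become collectable by Python's cycle detector. The effect can be illustrated from pure Python with the gc module (using an ordinary Python class here, since container types defined in Python already participate in GC):

```python
import gc

class MyObject:
    def __init__(self):
        self.member1 = None
        self.member2 = None

# Build a reference cycle: a -> b -> a.
a = MyObject()
b = MyObject()
a.member1 = b
b.member1 = a

# Drop the external references; the cycle keeps the reference counts
# nonzero, so only the cycle collector can reclaim these objects.
del a, b
collected = gc.collect()
```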
#include "petscpc.h"
PetscErrorCode PCASMSetTotalSubdomains(PC pc, PetscInt N, IS is[], IS is_local[])

Collective on PC
By default the ASM preconditioner uses 1 block per processor.
These index sets cannot be destroyed until after completion of the linear solves for which the ASM preconditioner is being used.
Use PCASMSetLocalSubdomains() to set local subdomains.
The IS numbering is in the parallel, global numbering of the vector for both is and is_local
Level: advanced
Location: src/ksp/pc/impls/asm/asm.c
Index of all PC routines
Table of Contents for all manual pages
Index of all manual pages
by Robert Muth
Simple Directmedia Layer (SDL) is a popular library that many games and applications use to access sound and video capabilities on end-user machines. Native Client bindings for SDL have recently become available on naclports; thus it is now possible to port SDL-based games to Native Client. This article describes how to complete such a port. The focus of the article is on writing the glue code for fusing your game with PPAPI (the bridge between Native Client modules and the browser, also known as "Pepper"). Other important aspects, such as how to load resources and files, are covered in other articles listed in the Links section.
What SDL components are supported?
The SDL bindings for Native Client currently support the following components:
- 2D graphics (SDL_INIT_VIDEO)
- audio (SDL_INIT_AUDIO)
- input events (mouse, keyboard)
- timer events (SDL_INIT_TIMER)
At present, the SDL bindings for Native Client do not support the following components:
- SDL_INIT_JOYSTICK
- SDL_INIT_CDROM
Step 1: Install the Native Client SDK and the SDL bindings for Native Client.
In order to port an SDL-based game to Native Client, you must:
- Download and install the Native Client SDK.
- Install the SDL bindings for Native Client by checking out and building the SDL library.
Step 2: Modify the main() function in your game's code.
Native Client modules are event-driven and do not use main() as an entry point. Thus, you must rename the main() function to something like game_main(). You must also move the initialization of SDL out of main() and into your new PPAPI glue code (listed below), so remove the call to SDL_Init() from main(). This is a good time to check whether the SDL bindings for Native Client support the SDL components your game uses - make sure that the arguments to SDL_Init() are on the list of supported components shown above.
Step 3: Write glue code to fuse your game with PPAPI.
Native Client uses PPAPI to play audio and render graphics in the browser (see the Pepper C++ reference for additional information). The Native Client port of SDL hides most of the use of PPAPI from developers, but you still need to fuse the game code with PPAPI. The code samples below illustrate how to do so. Note that the code samples use the C++ version of PPAPI. You can put the code samples in a new file, say nacl_glue.cc, which you can compile and link with the game code as described in the next section of this article.
As with all Native Client modules, your code must include a Module class and an Instance class. These classes provide an entry point into your module, and represent multiple instances of your module that could in theory be embedded into a web page. The code fragment below shows subclasses called GameModule and GameInstance:
class GameModule : public pp::Module {
 public:
  GameModule() : pp::Module() {}
  virtual ~GameModule() {}

  virtual pp::Instance* CreateInstance(PP_Instance instance) {
    return new GameInstance(instance);
  }
};

namespace pp {
Module* CreateModule() {
  return new GameModule();
}
}  // namespace pp
The function pp::CreateModule() is actually the only real entry point into your module; PPAPI bootstraps all other entry points from this function. As alluded to above, in theory a Native Client module could be instantiated multiple times within the same web page; all instances would then be handled by a single process. In reality this rarely works with ported applications because of global variables and other considerations. The code fragment below explicitly guards against the creation of multiple instances:
class GameInstance : public pp::Instance {
 private:
  static int num_instances_;     // Ensure we only create one instance.
  pthread_t game_main_thread_;   // This thread will run game_main().
  int num_changed_view_;         // Ensure we initialize an instance only once.
  int width_;
  int height_;                   // Dimensions of the SDL video screen.
  pp::CompletionCallbackFactory<GameInstance> cc_factory_;

  // Launches the actual game, e.g., by calling game_main().
  static void* LaunchGame(void* data);

  // This function allows us to delay game start until all
  // resources are ready.
  void StartGameInNewThread(int32_t dummy);

 public:
  explicit GameInstance(PP_Instance instance)
      : pp::Instance(instance),
        game_main_thread_(NULL),
        num_changed_view_(0),
        width_(0),
        height_(0),
        cc_factory_(this) {
    // Game requires mouse and keyboard events; add more if necessary.
    RequestInputEvents(PP_INPUTEVENT_CLASS_MOUSE |
                       PP_INPUTEVENT_CLASS_KEYBOARD);
    ++num_instances_;
    assert(num_instances_ == 1);
  }

  virtual ~GameInstance() {
    // Wait for the game thread to finish.
    if (game_main_thread_) {
      pthread_join(game_main_thread_, NULL);
    }
  }

  // This function is called with the HTML attributes of the embed tag,
  // which can be used in lieu of command line arguments.
  virtual bool Init(uint32_t argc, const char* argn[], const char* argv[]) {
    [Process arguments and set width_ and height_]
    [Initiate the loading of resources]
    return true;
  }

  // This crucial function forwards PPAPI events to SDL.
  virtual bool HandleInputEvent(const pp::InputEvent& event) {
    SDL_NACL_PushEvent(event);
    return true;
  }

  // This function is called for various reasons, e.g., visibility and page
  // size changes. We ignore these calls except for the first
  // invocation, which we use to start the game.
  virtual void DidChangeView(const pp::Rect& position, const pp::Rect& clip) {
    ++num_changed_view_;
    if (num_changed_view_ > 1)
      return;
    // NOTE: It is crucial that the two calls below are run here
    // and not in a thread.
    SDL_NACL_SetInstance(pp_instance(), width_, height_);
    // This is the SDL_Init call which used to be in game_main().
    SDL_Init(SDL_INIT_TIMER | SDL_INIT_AUDIO | SDL_INIT_VIDEO);
    StartGameInNewThread(0);
  }
};
For simplicity, the function StartGameInNewThread(), shown below, uses polling to wait until all resources are available. In most circumstances it is possible to avoid polling and use a scheme based on PPAPI's asynchronous callbacks.
void StartGameInNewThread(int32_t dummy) {
  if ([All Resources Are Ready]) {
    pthread_create(&game_main_thread_, NULL, &LaunchGame, this);
  } else {
    // Wait some more (here: 100ms).
    pp::Module::Get()->core()->CallOnMainThread(
        100,
        cc_factory_.NewCallback(&GameInstance::StartGameInNewThread),
        0);
  }
}

static void* LaunchGame(void* data) {
  // Use "thiz" to get access to the instance object.
  GameInstance* thiz = reinterpret_cast<GameInstance*>(data);
  // Craft a fake command line.
  const char* argv[] = { "game", ... };
  game_main(sizeof(argv) / sizeof(argv[0]), argv);
  return 0;
}
Step 4: Compile and link your code.
Native Client modules are currently processor-specific, which means that you must provide both a 32-bit and a 64-bit version of your module. Assuming your SDK is located at $(NACL_SDK_ROOT), you can create different versions of your module by using the two compiler settings shown below:
CC = $(NACL_SDK_ROOT)/toolchain/linux_x86/bin/i686-nacl-g++ -m32
or
CC = $(NACL_SDK_ROOT)/toolchain/linux_x86/bin/i686-nacl-g++ -m64
Note that the compiler sets the following pre-processor symbol, which you can use to enable Native Client-specific conditional compilation:
#define __native_client__ 1
Once you've compiled your game code and the PPAPI glue code (e.g., the nacl_glue.cc file described in the previous section), you can create an executable Native Client module by linking the following files:

- nacl_glue.o: the PPAPI glue code discussed above
- -lSDL: part of the Native Client SDL port
- -lSDLmain: part of the Native Client SDL port
- -lppapi: PPAPI C bindings
- -lppapi_cpp: PPAPI C++ bindings
- -lnosys: library with stubs for common functions like kill(), which are not available in Native Client (note that these functions will cause asserts when actually called)
If you're using autoconf-based software, you can avoid typing these file names by directing the software to the correct sdl-config, e.g.:
./configure --with-sdl-exec-prefix=$(NACL_SDK_ROOT)/toolchain/linux_x86/i686-nacl/usr
Because you renamed the main() function, the linker might get confused and report undefined symbols during the final link (this is especially true when the exact link line is not completely under your control, e.g., when using autotools/configure). In such cases you can work around the problem by using the "-u <symbol>" option, e.g., -u game_main.
Note again that you must create two versions of the Native Client executable module, e.g., game32.nexe and game64.nexe.
Step 5: Create an HTML file and a manifest file.
After you have generated the 32- and 64-bit versions of your Native Client module, you must create a manifest file to tell the browser which version of the module to load based on the end-user's processor. A sample manifest file, say game.nmf, looks as follows:
{
  "program": {
    "x86-32": {"url": "game32.nexe"},
    "x86-64": {"url": "game64.nexe"}
  }
}
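Because the manifest is plain JSON, a syntax error in it is a common cause of silent load failures. A quick sanity check can be scripted; the snippet below is an illustration, not part of the SDK:

```python
import json

manifest_text = '''
{
  "program": {
    "x86-32": {"url": "game32.nexe"},
    "x86-64": {"url": "game64.nexe"}
  }
}
'''

# json.loads raises ValueError on malformed input, so simply parsing
# the manifest catches syntax errors before the browser does.
manifest = json.loads(manifest_text)
architectures = sorted(manifest["program"])
```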
The manifest file is in turn referenced by an HTML file, which can be as simple as this:
<!DOCTYPE html>
<html>
  <body>
    <!-- Note: Attributes are passed to GameInstance::Init(). -->
    <embed width="640" height="480" src="game.nmf" type="application/x-nacl" />
  </body>
</html>
Step 6: Run your game in Chrome.
See How to Test-Run Web Applications for instructions on how to run your game.
Links
- Porting MAME to Native Client
- Porting XaoS to Native Client
Support meetings/20080120
From OLPC
Sunday, January 20, 2008
(4 - 6pm EST)
Attendees
many Community Support Volunteers and wikifiers:
- Iain Davidson (Bellingham, WA) (Only via IRC, unable to listen via Costa Rica)
- Anders Mogensen (Denmark)
- Joshua Seal (Belkin, UK)
- Vesna Misanovic (Stuttgart)
- Giovanni Caselotti (Stuttgart)
- Seth Woodworth (Washington state)
- FFM (Virginia)
- Ian Daniher (Ohio)
- Thomas Tuttle (Pennsylvania)
- David Aquilina (North Carolina)
- Mel Chua (Illinois)
- Dan Bennett (Massachusetts)
- Steve Holton (North Carolina)
- Caryl Bigenho (Southern Calif)
- Sandy Culver (Massachusetts)
- Austin Appel (Southern Calif)
- Michael Burns (Oregon State Univ)
- Kate Davis (Middletown, CT)
- Guynn Prince (North Carolina)
- Chihyu (support intern, Massachusetts)
- Arjun Sarwal (1CC, intern, author of Measure)
- SJ Klein (1CC, Director of Community Content)
- Kim Quirk (Boston, Product Manager, QA Lead)
- Adam Holt (1CC, Support Gangster in Chief)
1. Anders
Special Guest Anders Mogensen will update us on some OLPC Nigeria possibilities; even though he himself is not involved, he has a lot of great deployment ideas, and has lived in Nigeria for 10 years.
Calling from Denmark! Spent time last week in Boston at a meeting with folks from all over. Works for a private consulting company in Denmark. Likes to work with projects like OLPC. Not directly with OLPC Nigeria. Is studying what they are doing and has visited the pilot site. Shared the setting and what is being done in Nigeria
135 million people. Only about 1 million computers in use. Huge "digital divide" there. Challenge partly due to cost of the computers. Teachers earn about $100/month. Computer costs about $600-700. Gov is working on a "gap" program to try to get computers to folks by financing them over 24 mos or so.
In education: 20 million elementary, 6 million secondary, 1 million university students. Poorly funded by the government; funded at the state level. Some states better than others. Lots of poor management and corruption. Minister of education: "why have a laptop when we have no chairs, the kids have no clothes to go to school, etc.?"
People are talking about "computer labs" may be 3 computers for 300 students! Richer private schools may have 2-3 per student. 1 to 1 only in pilot school and some of the more expensive universities.
Challenges... Financing...not impossible but requires a committed government Private schools may the first to adopt not bad because they can show what the OLPC can do. How children learn and how teachers teach...mostly students copying what the teacher says or writes on the board. Laptop will be a threat to the teachers traditional responsibility. Children in Nigeria "own" nothing, not even their clothes Kids have a lot of chores at home when not in school. Parents need to know students nead time to learn. Electrical power needs are also important.
Q What fractional part of schools will want "bandwidth" within a few years
A Having a connection to the world is central element for success of the OLPC program. Costs, who pays, who maintains etc are big questions. Will need infrastructure.
10 min video available on the web about the Nigeria test school.
Q Power needs?
A Need to find a way to have power to charge the laptops for as many hours use per day as possible.
Wants to set up another test school this spring. Looking for ideas about how teachers can
Q Appropriateness of content
A Some areas very religious. Christian and Muslim. Moral values very important. Kids will be kids (pornography). Not sure how to filter content
Q A point of contact for folks who want to help in Nigeria?
A Government interested in private initiatives. Many religious organizations have initiatives in Nigeria. Contact Anders to pass on names to some organizations who would like help.
Q Religious aspect...lots of religious texts available. Are schools secular?
A Schools try to be impartial. In secondary school students can choose to take special courses focusing on their religion. Religious schools might be interested in doing something with religious texts on the OLPC
2. Josh Seal
Special Guest Josh Seal, Product Manager at Belkin in the UK, who has spent time working on peripherals at OLPC around late summer 2007, may introduce several new electrical/power ideas that might help the developing world.
2 issues: Connectivity and Power
Power ... how do you get power to the laptops in a safe and efficient manner? DC, and perhaps AC as well. Where there is limited power, how do we get to as many laptops as possible? Power generation? Small (80 W)? Bicycle generator?
Q Have you tried the hand crank? Getting it Monday. Problem is you can't work and generate power at the same time. People want it, but it's probably not a solution. Need other things... maybe like an old sewing machine?
A Maybe make a small generator that can be attached to all sorts of mechanical devices. A leg generator is best because you can still work while generating.
Connectivity... not every school can afford a connection. Maybe several schools in an area could share a large mesh connection.
If you are looking for a connector to experiment with, contact josh@laptop.org; he has some connectors. Also see battery and power on the wiki.
Q Is info about the generator he is working on available?
A He will put info up on the wiki
other notes
3) Missing/Delayed Orders Update: how we're finally making progress helping our corporate partners deal with these sometimes horrendous ongoing problems. Consumer protection was never supposed to be our job, but yes, we are now genuine donor advocates making things happen. Thanks everybody for:
(A) assigning these tickets to culseg, sph0lt0n, holt, kim (and babbing when he gets back from the north country next week!) for basic verification in Brightstar's shipping database, so we can escalate missing/delayed orders to PatriotLLC for payment/validated-address/etc investigation.
(B) providing your own warm handholding that such donors have been missing for 2 months -- you as volunteers are quite literally keeping the entire OLPC project on course!
Kim & Adam... 2 databases: orders (Patriot) and shipping (Brightstar). She is going to work with the orders database to try to check integrity.
Well under a 24-hour response time on tickets. Folks working on it are keeping an Excel-like database compiled from the RT tickets, sharing the data, and working through it.
Some people have not received XO or an email about it. Working on it. Changes daily.
Later we may need more volunteers to assist with tracking down orders; will figure out the best and fastest way to do that. Currently only 5 people are working with the database at Patriot. We need to better understand their database and later possibly train a few people to make corrections therein.
Need to avoid overlapping / duplication of emails.
Volunteers are welcome to send some "warm words of welcome" to the folks but also be careful: don't promise what you cannot deliver. Hard information is what they need, and we will get it to them all in the coming days/weeks.
Assign tickets to {culseg, sph0lt0n, babbing, holt, kim} so they will follow up with each.
Discussed making a new queue for shipping problems, where shipping tickets could be put. Not crucial, but possible in future.
Subject line on tickets can be amended to include "[shipping]" or "[reference#]", but be careful what you put... the donor gets a copy of what you write.
Kate had a section on the wiki to help folks know what info they should give us to get help. She took it down, but will put it back.
Clarification: Patriot LLC is "Donor Services" and is based out of LA. They are the ones that donors reach via the 800 number. Orders database. Brightstar's facility near Chicago (Libertyville, IL) is the shipper that completes the process.
Many people are taking out their frustrations on Brightstar incorrectly. This support team was Tech Support but in the short term will be an absolutely crucial piece in (1) resolving this overall problem (2) communicating sincerely and accurately to those who are justifiably frustrated.
Kim...thanks again to everyone for helping!
4) Rising RMA-Related Issues Update -- as donors increasingly reach their 30 day limits! Short version: assign these tickets to your favorite "donor advocate volunteer" among {culseg, sph0lt0n, babbing, holt, kim}
Adam and Sandy are working on fixing the problem of folks not getting RMAs. Be sure to enter known RMA #s into RT tickets.
Brightstar delayed replacement machines; they were supposed to arrive in 2 weeks but did not. Should be OK in the future.
Required for RMA:
- Serial # under the battery (CSN...)
- full name (ORIGINAL name on order)
- correct shipping address (NOT PO Box!)
- phone #
5) Call center & phone training update from MOG: sorry about calls being blocked during last Monday evening's training!
Matt not there
6) Repair Center and parts update, and the progress that has been made during Thursday's meeting with Brightstar. See:
OLPC and Brightstar support idea of community repair centers. Plan is that Brightstar will set up a website offering parts for purchase. Want to send RMAed machines to universities we have identified as locations for repair centers.
Hope to have both ready sometime in the spring.
7) Documentation Progress Report: update from Kate or anyone else interested! Which audiences are we addressing today? Tomorrow??
Kate...Working on Journal activity page.
Tech-writing classes at( ? )University will be able to help write documentation. May be able to spend entire semester working on it.
Q Can we get a flowchart to sort prospective volunteers into categories where they can best help.
A This could be a good project for the tech-writing classes
8) QA Update from Kim, Chih-yu and Adric: Volunteers page...a volunteer portal would be good...like the Educators Portal Caryl is doing
9) All Volunteers Speak Their Minds -- what was YOUR toughest challenge this week?
Get photos of demos of XOs to put up
Canada French localization is not fully implemented. Not same as France or Uganda
Vesna from Europe...have people who can help with languages.
The wiki has a jumping-off page for help with translations.
10) Weekly Zine Update from Seth & SJ Etc:
Interviews, photos, news, articles. Need content from other areas.
Should have the first issue out by Mon or Tues. Want to include things about shipping issues, 3 good images of smiling kids with XOs from Flickr, and something about censorship.
11) Vesna/Holt/Etc Discussion on "Social Cartography" and how the ~55 of us here can each get to know each other Much Better, to know who to Go To, to learn every day how to solve tickets more efficiently. Far more than just a Directory of phone numbers, or pins-on-the-map! will likely help host a lot of this. We're not designing Facebook 3.0 -- we are however *always* trying to enhance the collegiality and outreach of our group.
Vesna...wants a "social cartography" with several layers...such as support gang, XO owners who have the machine already, various support issues. One person in charge of each section. Wants to know who to ask for help with different topics. Would not have to be private.
Layered development and support issues...development, special projects, resources, etc. to know who to contact about different things.
Third and fourth layers private info.
Someone asked if we could have a private Facebook group. Will we do it???
After-meeting meeting was held, this time on repair centers. Notes are here on teamwiki and have also been mailed to the support-gang list. Contact Mchua for more info if you're confuzzled.
More mchua notes
Lost laptop howto on teamwiki, for those who have access (request an account with Team namespace access if you're in support-gang and don't already have a teamwiki account). Also see support-gang section of Team mainpage for more tutorials - please contribute.
Document everything possible on the wiki - mark with the "stub" template if you have to, but try to get pages to the point where others can contribute, even if you leave a one-line description of what should be on the page (but isn't) and an invitation to help edit. Please read Style guide for effectiveness-boosting hints.
Source: http://wiki.laptop.org/index.php?title=Support_meetings/20080120&oldid=132956
Download PDF and XPS versions of the book here.
Chapter 1: Introducing Marshaling
Chapter 2: Marshaling Simple Types
Chapter 3: Marshaling Compound Types.
We will use the terms simple, primitive, and basic interchangeably to refer to base types like integers, strings, etc. The terms compound and complex will likewise be used interchangeably to refer to classes and structures. (Some consider that strings are not primitives.)
More information about pointers appears later in this chapter. Soon we will cover textual data types in detail (see section “Windows Data Types”).
In order for the examples to run, you must add a using directive for the System.Runtime.InteropServices namespace. Be sure to add it for all examples throughout this book.
For more information about marshaling strings, see section “Marshaling Strings and Buffers”.
Note also that this writing assumes a 32-bit version of Windows, where pointer-sized types such as HANDLEs are 4 bytes; on 64-bit versions they are 8 bytes (DWORDs remain 4 bytes on both). The unmanaged signature of CloseHandle() is:
BOOL CloseHandle(HANDLE hObject);
The managed version of CloseHandle() is as follows. For more information about handle lifetime, see section “Memory Management.”
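The managed declaration itself did not survive extraction; the following is a minimal sketch of what it likely looked like, assuming the standard kernel32.dll import (the class name SafeNativeMethods is an illustrative choice, not from the original):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class SafeNativeMethods
{
    // HANDLE maps to IntPtr (4 bytes on 32-bit Windows, 8 on 64-bit),
    // and the Win32 BOOL maps to the managed bool.
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    internal static extern bool CloseHandle(IntPtr hObject);
}
```

Setting SetLastError = true lets callers retrieve the Win32 error code via Marshal.GetLastWin32Error() when the call fails.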
BOOL WriteConsole(
HANDLE hConsoleOutput,
const VOID *lpBuffer,
DWORD nNumberOfCharsToWrite,
LPDWORD lpNumberOfCharsWritten,
LPVOID lpReserved
);
And this is the managed version along with the test code:
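The managed version and test code were lost in extraction; this sketch shows one plausible reconstruction (the GetStdHandle import and the STD_OUTPUT_HANDLE constant are assumptions added so the sample is self-contained, since the original listing is unavailable):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class Program
{
    private const int STD_OUTPUT_HANDLE = -11;

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern IntPtr GetStdHandle(int nStdHandle);

    // LPDWORD becomes an out uint; the reserved LPVOID is passed as IntPtr.Zero.
    // CharSet.Unicode binds the declaration to WriteConsoleW.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool WriteConsole(
        IntPtr hConsoleOutput,
        string lpBuffer,
        uint nNumberOfCharsToWrite,
        out uint lpNumberOfCharsWritten,
        IntPtr lpReserved);

    private static void Main()
    {
        string text = "Hello from WriteConsole!\r\n";
        uint written;
        WriteConsole(GetStdHandle(STD_OUTPUT_HANDLE), text,
                     (uint)text.Length, out written, IntPtr.Zero);
    }
}
```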
For guidance on where to place such declarations, see the “Move P/Invokes to NativeMethods Class” article.
The following code segment illustrates the wrapper method for our MessageBoxEx() function:
Code abbreviated for clarity.
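The wrapper listing was not preserved in this copy; a hedged sketch follows, assuming the user32.dll MessageBoxEx import and a hypothetical Show() helper name (neither class nor method names come from the original):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
    // Raw P/Invoke kept internal to this class, per the
    // "Move P/Invokes to NativeMethods Class" guideline.
    [DllImport("user32.dll", CharSet = CharSet.Auto)]
    internal static extern int MessageBoxEx(
        IntPtr hWnd, string lpText, string lpCaption,
        uint uType, ushort wLanguageId);
}

public static class MessageBoxWrapper
{
    // Friendly wrapper: callers never touch window handles or raw flags.
    public static int Show(string text, string caption)
    {
        return NativeMethods.MessageBoxEx(IntPtr.Zero, text, caption, 0, 0);
    }
}
```

The design point is that the rest of the application calls MessageBoxWrapper.Show(), so the interop details stay in one auditable place.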
This article, along with any associated source code and files, is licensed under The Common Public License Version 1.0 (CPL)
Source: http://www.codeproject.com/Articles/66244/Marshaling-with-Csharp-Chapter-2-Marshaling-Simple.aspx