On Fri, 2009-10-16 at 15:22 +0200, Martijn van Steenbergen wrote:
> I propose the addition of the following two functions to module Debug.Trace:
>
> > traceM :: Monad m => String -> m ()
> > traceM msg = trace msg (return ())
> >
> > traceShowM :: (Show a, Monad m) => a -> m ()
> > traceShowM = traceM . show

I gladly second this proposal!

Philip
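For context, a small sketch of how the proposed helpers read inside a do-block; the definitions are copied from the proposal above, and countWords is just an illustrative function:

    import Debug.Trace (trace)

    -- Definitions exactly as proposed:
    traceM :: Monad m => String -> m ()
    traceM msg = trace msg (return ())

    traceShowM :: (Show a, Monad m) => a -> m ()
    traceShowM = traceM . show

    -- Hypothetical usage: trace intermediate values without leaving the monad.
    countWords :: String -> IO Int
    countWords input = do
        traceM ("input length: " ++ show (length input))
        let ws = words input
        traceShowM ws
        return (length ws)

    main :: IO ()
    main = countWords "I gladly second this proposal" >>= print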
http://www.haskell.org/pipermail/libraries/2009-October/012624.html
First, if you are unsure about how application preferences are set up on Android, go read this tutorial and then come back. I'll wait. Oh great, you're back. Let's crack on with things.

Functionality

Okay, so the application preferences plugin will provide you with four methods that you can use to interact with the native Android preferences.

get(key, success, fail)

If the key exists in the preferences it will be returned as the single parameter of the success callback that you provide. If the key does not exist the failure callback will be executed with an error object with error.code set to 0, which means no property.

    window.plugins.applicationPreferences.get(key, function(value) {
        console.log("The value is = " + value);
    }, function(error) {
        console.log(JSON.stringify(error));
    });

set(key, value, success, fail)

If the key exists in the preferences then the value will be saved and your success callback will be executed. If the key does not exist the failure callback will be executed with an error object with error.code set to 0, which again means no property.

    window.plugins.applicationPreferences.set(key, value, function(value) {
        console.log("set correctly");
    }, function(error) {
        console.log(JSON.stringify(error));
    });

load(success, fail)

Calling load will have the native side loop through all the preferences, creating a JSON object that will be returned as the single parameter of your success callback.

    window.plugins.applicationPreferences.load(function(value) {
        console.log("The object is = " + JSON.stringify(value));
    }, function(error) {
        console.log(JSON.stringify(error));
    });

show(activity, success, fail)

Calling show, passing in the class name of your PreferenceActivity class, will cause the native Android GUI to be shown so your user can interact with the preferences. If the class name you pass in doesn't exist, your failure callback will be called with an error object with error.code set to 1, which means no preferences activity.

    window.plugins.applicationPreferences.show("com.simonmacdonald.prefs.QuickPrefsActivity");

which brings up the native preferences GUI.

Installation

Installation of the plugin follows the common steps:

- Add the script tag to your html: <script type="text/javascript" charset="utf-8" src="applicationPreferences.js"/>
- Copy the Java code into your project to the src/com/simonmacdonald/prefs folder.
- Create a preferences file named res/xml/preferences.xml following the Android specification.
- Finally you'll need to create a class that extends PreferenceActivity in order to be able to view/modify the preferences using the native GUI. Refer back to the tutorial I mentioned for more details.

Oh, and I'm pretty sure that Darren McEntee has already included this plugin in his Live Football on TV app, which means this plugin is already in the wild. Enjoy!

50 comments:

Nice! Again: thanks. :)

It wasn't Randy that made it

@Tue Sorry for the incorrect attribution. I looked at the iOS directory when I should have been looking at the iPhone directory. I just fixed up the post to give you credit.

Hi, I'm trying to install this plugin, but I can't run it because of 1 error: "The method startActivity(Intent) is undefined for the type CordovaInterface". There are also several warnings: "The value of the field AppPreferences.LOG_TAG is not used", "The method getContext() from the type CordovaInterface is deprecated", "Map is a raw type. References to generic type Map should be parameterized", "Iterator is a raw type.
References to generic type Iterator should be parameterized" I'm using Eclipse 4.2 on Fedora Am I missing anything? Thanks in advance, Rafael @Rafa Upgrade to 2.0.0 and the error should go away. Don't worry about the warnings though. Hi, Simon, I can´t get started this AppPrefs plugin. I changed to cordova 2.0, set all things like the readme, create a preferences.xml and a QuickPrefsActivity like package my.package; import android.content.Intent; import android.os.Bundle; import android.preference.PreferenceActivity; public class QuickPrefsActivity extends PreferenceActivity { @Override public void onCreate(Bundle savedInstanceState) {super.onCreate(savedInstanceState); addPreferencesFromResource(R.xml.preferences); } Eclipse don´t show me any error, but the emulator shows me nothing, too :( Is there a working example? Sorry for bad english Hi, Simon ... it´s running ;) anyway, thanks Hi Simon, First of all, and knowing that I'm very belated in saying it, thanks a lot for your previous response. Because of several issues (including having to change my OS), I couldn't check it until a few minutes ago. The error got fixed as you said. But now I'm having another error, this time related with de javascript code. The error is Result of expression 'window.plugins' [undefined] is not an object at file ...js:196 The referred line is: function dime_pref(key,defecto) { window.plugins.applicationPreferences.get(key, function(value) { console.log("El valor de la preferencia es = " + value); return value; }, function(error) { console.log(JSON.stringify(error)); return defecto }); } Of course, your code (cordova.define(...) is previous to that line. Once again, thank you very much in advance. @Rafa Someone did a pull request on my plugin. You should just be able to call appPreferences.get() now. No need for the window.plugins bit. Simon, thanks for your quick response. I must be doing something wrong. Now I get the error "ReferenceError: Can't find variable: appPreferences" I'm not much experienced on javascript, so I'm not sure how this works, but I notice that appPreferences is defined inside (more precisely, at the end of them) the following lines: cordova.define("cordova/plugin/applicationpreferences", function(require, exports, module) { ... ... var appPreferences = new AppPreferences(); module.exports = appPreferences; }); Am I missing anything? Thanks again @Rafa Sorry, I made a mistake. here is the kinda code you want to add to your deviceready handler so you don't need to modify the rest of your html. if (!window.plugins) { window.plugins = {}; } if (!window.plugins.applicationPreferences) { window.plugins.applicationPreferences = cordova.require("cordova/plugin/applicationpreferences"); } It worked! Great! Thanks a lot. Using 2.0.0, I put the next code in the ondebiceReady event: if (!window.plugins) { window.plugins = {}; } if (!window.plugins.applicationPreferences) { window.plugins.applicationPreferences = cordova.require("cordova/plugin/applicationpreferences"); } applicationPreferences.get("enterTimes", function(value) { console.log("The value is = " + value); }, function(error) { console.log(JSON.stringify(error)); }); And i got the error: module cordova/plugin/applicationpreferences not found at undefined:0 Could you help me please?? Thank you very much!! :D @52AMANTES Make sure you are using the 2.0.0 applicationPreferences.js file and you are including it as one of your script tags. 
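Putting the snippets from this thread together, a minimal sketch of reading a preference once the device is ready could look like this; the key "enterTimes" is borrowed from the comment above, and everything runs after the deviceready event:

    document.addEventListener("deviceready", function () {
        // Shim from the comments above so the plugin is reachable under window.plugins
        if (!window.plugins) {
            window.plugins = {};
        }
        if (!window.plugins.applicationPreferences) {
            window.plugins.applicationPreferences =
                cordova.require("cordova/plugin/applicationpreferences");
        }

        // The value is only available inside the success callback
        window.plugins.applicationPreferences.get("enterTimes", function (value) {
            console.log("The value is = " + value);
        }, function (error) {
            // error.code 0 means the key does not exist
            console.log(JSON.stringify(error));
        });
    }, false);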
Hi, Simon, Here I'm again I don't know if I declared victory too soon, or made something wrong afterwards...; }); return valor } I call it in this way: var avance = dime_pref('avance','100') I see that it seems to work properly because the console log shows the proper value. But that value never gets the variable "avance". What may be wrong in what I'm doing? I can add that I'm getting also a previous error in console.log: TypeError: Result of expression 'cordova.exec' [undefined] is not a function. And it points at the line: cordova.exec(success,fail,"applicationPreferences","get",[key]); As always, thank you very much. @Rafa You are trying to call an asynchronous method in a synchronous way. What is happening is that the get method does not execute before you return valor. You'll need to set the value of avance in the success method of the get. Hi Simon, Thanks for your great information regarding the app prefs. i want to open the application preferences from the ios app. i need more info about how to implement this method, the last phongap plugin is not updated with the methods of show and load, i only can use the set/get methods. can you please tell me additional information? thanks @zaherrrr Sorry I did not write the iOS plugin. When I was doing the Android version show/load seemed like good additions. You should add them to the iOS plugin and do a pull request. You give very useful information iphone android application with that useful function. it is very good stuff but; }); @Jack Smith What is the question? Simon, I've been traveling out of town and so far I have not been able to get back in front of the computer, so I don't remember well if I told you that I needed to get preferences values synchronously, because I would use them the page is being constructed (font size, background-color,...) How should I do it? Thank you very much in advance. @Rafa Sorry you are going to need to restructure your code to work with the async call. If you need these values for page construction you may want to: 1) Show a splash screen 2) wait for device ready 3) make a call to get the preferences 4) refresh the page with the preference values 5) hide the splash screen Im new with phonegap and plugin and I try to use the appPreferences plugin but it keep giving me error: Class Not found. Im new with phonegap and plugin and I try to use the appPreferences plugin but it keep giving me error: Class Not found I even put the in the config.xml folder :S so I don't know what I'm missing to make it work @Daniel Caymo Hey, blogger won't accept xml in the comment so if you could post your manifest.xml and config.xml to a site like pastebin or gist then link back to it I could probably help. this is the link: Thx for replying and sorry If i send a lot of msg in the sametime because I don't know if I send it D: srry I'm really new to asking people in blog for help @Daniel Caymo The plugin line in your config.xml is wrong. The value of the plugin line should be: "com.simonmacdonald.prefs.AppPreferences" i try that alreadybut it still wouldnt.work ;S that why i change it n i thought it wouldnt matter as long it the path where the.file.is.located @Daniel Caymo Oh, it matters. The plugin line, the location of the file in the src folder and the package line in the the Java class must all match. 
ah ok I try changing it tomorrow and see what happen thx you very for fast reply i let u know tomorrow if there any.change I try to make the changes you told me and it still doesn't work and anw this is the whole application it a small one hope you be able to tell me what I'm doing wrong This is my last question when i compile with eclipse through a virtual machine android the plugin works but when I try compiling it with builds the plugin doesn't work @Daniel Caymo Yeah, that is kinda essential information. You realize that PhoneGap Build does not support a ton of plugins and it does not support the AppPreferences plugin. Hi,sorry I have another question again, when I try loading the ShowPreferenceAll automatically on body onload"ShowPreferenceAll();" it doesn't work?? because I want to show my Application preferences when my app load right away @Daniel Caymo That's because you are not waiting for the deviceready event. You won't be able to call and PhoneGap plugin or core API until after that event is fired. So listen for the event then show preferences. Thx Simon it work now :D Thx for all those fast reply :D Hi Simon, I am using the 2.2.0 version of your application Preference plugin on Cordova 2.2 version but unfortunately I wasn't able to make it work. Though I don't get any error, I can't go through the success and error callback function after calling "get" method. Do I miss something on this? Thanks. @Jake Monton Probably. What do you see in "adb logcat"? Hey Simon, how to get rid of the second application icon, which appears when using that plugin ? BR Ray @ramonbln You should not see two application icons when you use this plugin. I suspect that you have two LAUNCHER items in your AndroidManifest.xml. Thanks! After removing the second LAUNCHER-entry, which I copied from your tutorial, the second LauncherIcon is gone :) BR Ray Hi Simon, In my previous posts I had sent you links to codebin pastes that seem to be unreliable. As I mentioned before, thanks for your great preference plugin. It is absolutely helpful in my project. So far I have been able to load the preference activity. But when I try the load function I get the error that it is undefined. I was hoping that you would be able to help me correct this. I am uncertain as to how to reference the preference file if that is the problem. I have included links to pastes on pastebin this time hoping that they are more reliable for your viewing. These are the links: This is the applicationPreference script: This is the HTML file: This is the error log after succesfully seen the preferences: This is the undefined message for the load function: Thanks for your assistance. Best Regards, Edward Hanna hanna.edwardo@gmail.com @Edwardo Hanna It is a class cast exception where an Integer is being cast as a String. Try declaring one of your numbers as a string in preferences and it should be okay. Hi Simon, I got "undefined" result after executing "cordova.exec" at my plugin's javascript file (at "resultTemp" variable). 
here is my js file: (function(cordova){ var DeviceInfo = function() { }; DeviceInfo.prototype.imeiNumber = function(params, success, fail) { var resultTemp; alert("executing ImeiNumber"); resultTemp = cordova.exec(function(args) { success(args); }, function(args) { fail(args); }, 'DeviceInfo', 'imeiNumber', [params]); alert(resultTemp); return resultTemp; }; DeviceInfo.prototype.phoneNumber = function(params, success, fail) { return cordova.exec(function(args) { success(args); }, function(args) { fail(args); }, 'DeviceInfo', 'phoneNumber', [params]); }; DeviceInfo.prototype.imsi = function(success, fail) { return cordova.exec(function(args) { success(args); }, function(args) { fail(args); }, 'DeviceInfo', 'imsi', []); }; window.deviceInfo = new DeviceInfo(); // backwards compatibility window.plugins = window.plugins || {}; window.plugins.deviceInfo = window.deviceInfo; })(window.PhoneGap || window.Cordova || window.cordova); Here is my Java plugin: import org.apache.cordova.CallbackContext; import org.apache.cordova.CordovaPlugin; import org.apache.cordova.PluginResult; import org.json.JSONArray; import org.json.JSONException; public class DeviceInfo extends CordovaPlugin { public DeviceInfo() { } @Override public boolean execute(String action, JSONArray args, CallbackContext callbackContext) throws JSONException { PluginResult.Status status = PluginResult.Status.ERROR; String result = ""; if (action.equals("imeiNumber")) { status = PluginResult.Status.OK; result = this.DeviceImeiNumber(); } callbackContext.sendPluginResult(new PluginResult(status, result)); return true; } public String DeviceImeiNumber(){ //TelephonyManager tManager = (TelephonyManager)cordova.getActivity().getSystemService(Context.TELEPHONY_SERVICE); //return tManager.getDeviceId(); return "Success1"; } } and this in my html page, I use this js code to call my plugin: function onLoad() { alert("I've been loaded"); document.addEventListener("deviceready", onDeviceReady, true); } function onDeviceReady() { window.plugins.deviceInfo.imeiNumber(function(imei) { if(imei !== "") { alert("success, IMEI: " + imei); } else { alert("Failed :("); } }); Am I doing something wrong? Many thanks before :) @Unknown Yeah, if you are using a later version of PhoneGap you need to do a require in order to pull in the exec module. Check out:;a=blob;f=www/Camera.js;h=b2da5da95bfe5755cab4ef1029745467ae9140d5;hb=HEAD for an example. Also, don't expect cordova.exec to return you anything useful. Use the success and failure callbacks. I might be a bit thick, but where do we create the class that extends PreferenceActivity from step #4. I'm using Icenium if that makes a difference. I though the purpose of a plugin is to save you from writing native code. Your .java file already contains the native code, so where do I extend PreferenceActivity? @David Silveria Follow the link earlier in the blog post "go read this tutorial" to learn how to setup the Java side. are there any plans to get this working with the cloud build system ? I did read the article, but the part I do not get is where to place that code, it is never said in plain text in the article. I can only use native code in the plugins folder, which I thought already contains all needed code for the plugin. @Classified I don't have time to work on it right now but I will gladly take pull request to my repo that provide that support. Someday I will have time to revisit all of my plugins. @David Silveria It is up to you where you put the code. 
It all depends on what the package name of your app is. For instance if I create the class SimonsPrefs and it is in the package com.simonmacdonald I should put the code in src/com/simonmacdonald/SimonsPrefs.java
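To make that matching rule concrete, a minimal sketch using the names from this thread; SimonsPrefs is the hypothetical class from the comment above:

    // file: src/com/simonmacdonald/SimonsPrefs.java
    // 1) the folder path (src/com/simonmacdonald/) must match the package declaration,
    // 2) the package declaration must match the fully qualified name used elsewhere,
    // 3) any reference to the class (for example a plugin value in config.xml)
    //    must use that same fully qualified name: com.simonmacdonald.SimonsPrefs
    package com.simonmacdonald;

    public class SimonsPrefs {
        // class body as described in the tutorial linked from the post
    }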
http://simonmacdonald.blogspot.com/2012/06/phongap-android-application-preferences.html
Terry Carroll wrote:

> On Fri, 11 Feb 2005, Bob Gailer wrote:
>
>> Whenever you find yourself writing an if statement ask whether this
>> would be better handled by subclasses. Whenever you find yourself about
>> to write a global statement, consider making the variables properties of
>> a class.
>
> Bob --
>
> Brian already asked for an explanation of your first statement, and I
> found the ensuing discussion very instructive.
>
> Can you explain the second? As an aesthetic point, I hate globals, and
> I'd love a discussion with some examples of using class variables as a way
> of avoiding this.

Global variables are one way to make state persist across function calls. Here's a toy example that might give you the idea.

Suppose you want to write a function that keeps a running total. You could do something like this (not recommended!):

    total = 0

    def addToTotal(inc):
        global total
        total += inc

Every time you call addToTotal, total is incremented. This is already a poor design.

- There is no encapsulation - to increment the total, you call a function; to view the total, you look at the global variable.
- There is no API - to reset the total you would have to set the global variable.

You could try to fix this by adding more functions:

    def printTotal():
        print 'Total is', total

    def resetTotal():
        global total
        total = 0

That's a little better, maybe. But there are other problems:

- You can only have one total. What if you want two different totals?
- Your global namespace has extra names - total, addToTotal, etc.

For short scripts this structure can work, but for larger projects it gets unwieldy. OOP to the rescue! How about a Total class?

    class Total:
        def __init__(self):
            self.reset()

        def add(self, inc):
            self.total += inc

        def show(self):
            print 'Total is', self.total

        def reset(self):
            self.total = 0

You can use it like this:

    t = Total()
    t.add(5)
    t.show()
    t.reset()

Now everything is wrapped up in a nice neat package. There is a clear, consistent API, no namespace pollution, and you have a reusable object.

You might also be interested in this essay:

Kent
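To illustrate the "two different totals" point above, each instance of the class keeps its own state; a small sketch reusing the Total class defined in the post:

    t_in = Total()
    t_out = Total()

    t_in.add(5)
    t_in.add(3)
    t_out.add(2)

    t_in.show()    # prints: Total is 8
    t_out.show()   # prints: Total is 2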
https://mail.python.org/pipermail/tutor/2005-February/036085.html
I've got MySQL on my local system and would like to use C++ to connect to the database. I downloaded an example script from mysql.com, but it throws all sorts of errors when I compile. I've done #include <mysql.h>. Is there a quick way I can connect to MySQL? I've installed MyODBC, so I just need the code syntax to make a connection. Anyone know?

Marky_Mark
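For reference, here is a minimal connection sketch using the MySQL C client API (the same <mysql.h> header mentioned in the post) rather than going through MyODBC; the host, user, password and database names are placeholders:

    #include <mysql.h>
    #include <stdio.h>

    int main()
    {
        MYSQL *conn = mysql_init(NULL);
        if (conn == NULL)
        {
            fprintf(stderr, "mysql_init() failed\n");
            return 1;
        }

        // Host, user, password and database below are placeholders.
        if (mysql_real_connect(conn, "localhost", "user", "password",
                               "testdb", 0, NULL, 0) == NULL)
        {
            fprintf(stderr, "Connection failed: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;
        }

        printf("Connected to MySQL\n");
        mysql_close(conn);
        return 0;
    }

The program has to be linked against the MySQL client library, for example with g++ connect.cpp -I/usr/include/mysql -lmysqlclient (header and library paths vary by installation).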
http://cboard.cprogramming.com/cplusplus-programming/4146-odbc-connection.html
import "github.com/stretchr/testify/mock" Package mock provides. const ( // Anything is used in Diff and Assert when the argument being tested // shouldn't be taken into consideration. Anything = "mock.Anything" ) AssertExpectationsForObjects asserts that everything specified with On and Return of the specified objects was in fact called as expected. Calls may have occurred in any order. MatchedBy can be used to match a mock call based on only certain properties from a complex struct or some calculation. It takes a function that will be evaluated with the called argument and will return true when there's a match and false otherwise. Example: m.On("Do", MatchedBy(func(req *http.Request) bool { return req.Host == "example.com" })) |fn|, must be a function accepting a single argument (of the expected type) which returns a bool. If |fn| doesn't match the required signature, MatchedBy() panics. AnythingOfTypeArgument is a string that contains the type of an argument for use when type checking. Used in Diff and Assert. func AnythingOfType(t string) AnythingOfTypeArgument AnythingOfType returns an AnythingOfTypeArgument object containing the name of the type to check for. Used in Diff and Assert. For example: Assert(t, AnythingOfType("string"), AnythingOfType("int")) Arguments holds an array of method arguments or return values. Assert compares the arguments with the specified objects and fails if they do not exactly match. Bool gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type. Diff gets a string describing the differences between the arguments and the specified objects. Returns the diff string and number of differences found. Error gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type. Get Returns the argument at the specified index. Int gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type. Is gets whether the objects match the arguments specified. String gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type. If no index is provided, String() returns a complete string representation of the arguments. type Call struct { Parent *Mock // The name of the method that was or will be called. Method string // Holds the arguments of the method. Arguments Arguments // Holds the arguments that should be returned when // this method is called. ReturnArguments Arguments // The number of times to return the return arguments when setting // expectations. 0 means to always return the value. Repeatability int // Holds a channel that will be used to block the Return until it either // receives a message or is closed. nil means it returns immediately. WaitFor <-chan time.Time // Holds a handler used to manipulate arguments content that are passed by // reference. It's useful when mocking methods such as unmarshalers or // decoders. RunFn func(Arguments) // contains filtered or unexported fields } Call represents a method call and is used for setting expectations, as well as recording activity. After sets how long to block until the call returns Mock.On("MyMethod", arg1, arg2).After(time.Second) Maybe allows the method call to be optional. Not calling an optional method will not cause an error while asserting expectations On chains a new expectation description onto the mocked interface. This allows syntax like. Mock. On("MyMethod", 1).Return(nil). 
On("MyOtherMethod", 'a', 'b', 'c').Return(errors.New("Some Error")) go:noinline Once indicates that that the mock should only return the value once. Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Once() Return specifies the return arguments for the expectation. Mock.On("DoSomething").Return(errors.New("failed")) Run sets a handler to be called before returning. It can be used when mocking a method (such as an unmarshaler) that takes a pointer to a struct and sets properties in such struct Mock.On("Unmarshal", AnythingOfType("*map[string]interface{}").Return().Run(func(args Arguments) { arg := args.Get(0).(*map[string]interface{}) arg["foo"] = "bar" }) Times indicates that that the mock should only return the indicated number of times. Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Times(5) Twice indicates that that the mock should only return the value twice. Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Twice() WaitUntil sets the channel that will block the mock's return until its closed or a message is received. Mock.On("MyMethod", arg1, arg2).WaitUntil(time.After(time.Second)) IsTypeArgument is a struct that contains the type of an argument for use when type checking. This is an alternative to AnythingOfType. Used in Diff and Assert. func IsType(t interface{}) *IsTypeArgument IsType returns an IsTypeArgument object containing the type to check for. You can provide a zero-value of the type to check. This is an alternative to AnythingOfType. Used in Diff and Assert. For example: Assert(t, IsType(""), IsType(0)) type Mock struct { // Represents the calls that are expected of // an object. ExpectedCalls []*Call // Holds the calls that were made to this mocked object. Calls []Call // contains filtered or unexported fields } Mock is the workhorse used to track activity on another object. For an example of its usage, refer to the "Example Usage" section at the top of this document. AssertCalled asserts that the method was called. It can produce a false result when an argument is a pointer type and the underlying value changed after calling the mocked method. AssertExpectations asserts that everything specified with On and Return was in fact called as expected. Calls may have occurred in any order. AssertNotCalled asserts that the method was not called. It can produce a false result when an argument is a pointer type and the underlying value changed after calling the mocked method. AssertNumberOfCalls asserts that the method was called expectedCalls times. Called tells the mock object that a method has been called, and gets an array of arguments to return. Panics if the call is unexpected (i.e. not preceded by appropriate .On .Return() calls) If Call.WaitFor is set, blocks until the channel is closed or receives a message. IsMethodCallable checking that the method can be called If the method was called more than `Repeatability` return false MethodCalled tells the mock object that the given method has been called, and gets an array of arguments to return. Panics if the call is unexpected (i.e. not preceded by appropriate .On .Return() calls) If Call.WaitFor is set, blocks until the channel is closed or receives a message. On starts a description of an expectation of the specified method being called. Mock.On("MyMethod", arg1, arg2) Test sets the test struct variable of the mock object TestData holds any data that might be useful for testing. Testify ignores this data completely allowing you to do whatever you like with it. 
    type TestingT interface {
        Logf(format string, args ...interface{})
        Errorf(format string, args ...interface{})
        FailNow()
    }

TestingT is an interface wrapper around *testing.T.

Package mock imports 12 packages and is imported by 4248 packages.
https://godoc.org/github.com/stretchr/testify/mock
Hey there, I am making a 2D top-down game. I was able to get the enemies to follow the players correctly. But I want them to retreat when they are too close to the player. How can I accomplish this?

A* just handles movement and path finding. Any decision making on what the destination should be has to be implemented by the end user, either as a third party AI behavior addon or as custom code. I haven't personally played around with any 3rd party behaviour editors, we build our own solution for our project. Depending on your needs and skill level there is always something to fit your needs. Behaviour Designer I think is one of the most popular ones.

I decided to make something myself. But I am not sure how to override the A* behaviour. Because A* is always telling the enemy to follow the player, but I don't want that. How can I do that?

Hi
You might want to remove the AIDestinationSetter component and instead manually set the ai.destination property to wherever you want the agent to move.

I suppose you're using the DestinationSetter script? Remove the scripts from your agents. And in your behaviour scripts you set the destination on the movement script.

I modified the AIDestinationSetter script to make the agent retreat when too close. However, it's not working consistently. Here's the code:

    public class AIDestinationSetter : VersionedMonoBehaviour {
        /// <summary>The object that the AI should move to</summary>
        public Transform target;
        [SerializeField] float retreatDistance;
        IAstarAI ai;

        void OnEnable () {
            ai = GetComponent<IAstarAI>();
            // Update the destination right before searching for a path as well.
            // This is enough in theory, but this script will also update the destination every
            // frame as the destination is used for debugging and may be used for other things by other
            // scripts as well. So it makes sense that it is up to date every frame.
            if (ai != null) ai.onSearchPath += Update;
        }

        void OnDisable () {
            if (ai != null) ai.onSearchPath -= Update;
        }

        /// <summary>Updates the AI's destination every frame</summary>
        void Update () {
            if (target != null && ai != null) {
                if (Vector2.Distance(transform.position, target.position) > retreatDistance) {
                    ai.destination = target.position;
                } else {
                    Vector3 dirToPlayer = (target.transform.position - transform.position).normalized;
                    ai.destination = Quaternion.Euler(0, 0, 180) * dirToPlayer;
                }
            }
        }

        public void ChangeTarget(Transform newTarget){
            if (newTarget != target) target = newTarget;
            Debug.Log(target);
        }
    }

Hey,
Here is an updated destination position code:

    Vector3 dirToPlayer = (target.transform.position - transform.position).normalized;
    ai.destination = target.transform.position + dirToPlayer * retreatDistance;

Thanks for the help, it actually worked better like this:

    ai.destination = dirToPlayer * retreatDistance;

But that would make the destination centered around the world center (0,0). So it might not always go in the correct direction. Alternatively on the second line you could swap retreatDistance for another variable, so you can control them individually.
https://forum.arongranberg.com/t/make-seekers-retreat-when-too-close-to-target/7755
After writing the article about event-based firmware, I realised there are some misunderstandings about how real-time counters work and should be used. In particular, there is a misconception about an imagined problem when such a counter overflows. In this article, I try to explain this topic in more detail, using example code for the Arduino IDE.

What is a Real-Time Counter?

A real-time counter is a variable or register which increases at a given time interval. The term real-time may be confusing. It just states the fact that this counter ideally counts independently of any other part of the firmware. Therefore, even if the main code stops at one point, waiting for a specific condition, the real-time counter still gets increased in the "background" at the given interval.

How is the Real-Time Counter Implemented?

These counters are usually implemented using a hardware timer and an interrupt. For the Arduino platform, a hardware timer is set to create an interrupt each millisecond. You can find most of this code in the file wiring.c (AVR) or delay.c (SAMD). The following code is a summary of the relevant parts of the simpler implementation for the SAMD platform:

    static volatile uint32_t _ulTickCount=0;

    unsigned long millis(void) {
        return _ulTickCount;
    }

    void SysTick_DefaultHandler(void) {
        _ulTickCount++;
    }

You see the variable _ulTickCount, which is an unsigned 32-bit integer. It is marked as volatile to tell the compiler that this variable can be modified from an interrupt and that reads cannot be optimised away. You can access the current value using the millis() function. Interrupts do not need to be blocked while reading this value, because reading a single 32-bit value is an atomic operation for the SAMD platform. This means it is not possible to process an interrupt in the middle of reading the integer. The interrupt will occur before or after reading the value, but never e.g. after reading the first byte of the integer value. For some platforms, this can be a problem.

The last function is SysTick_DefaultHandler, which just increases the variable by one every millisecond.

What is Integer Overflow?

An integer overflow happens if you add to an integer value and the result exceeds the maximum number it can store. You can easily verify this using the following code:

    #include <iostream>
    #include <cstdint>
    #include <iomanip>

    int main() {
        int16_t value = 0xffffu;
        value += 1;
        std::cout << "value = 0x";
        std::cout << std::setfill('0') << std::setw(4) << std::hex << value;
        std::cout << std::endl;
    }

The output of this code is:

    value = 0x0000

There are some common misunderstandings about how integer overflow works:

- Integer overflow is a low-level behaviour of processors, and not all programming languages behave this way. E.g. Python or Swift handle these cases in a special way.
- Overflow happens not just in one direction. It is a fundamental principle of how processors do math. It happens with additions and subtractions, but also affects multiplications and divisions.

Just imagine you see only the least significant part of a larger value; the upper (grey) part does not exist. It just explains the principle of the operation. On the CPU level, only one bit of this upper part is kept, in the form of an overflow or carry bit. This bit can be used to chain multiple additions to work with larger numbers. In most programming languages, you can not access this overflow or carry bit directly.
While most people easily understand the transition from the last possible value back to zero, it works similarly with any operation: Subtractions are no exception. The overflow happens oppositely: Multiplication is no exception, have a look at the following code example: #include <iostream> #include <cstdint> #include <iomanip> int main() { int16_t value = 0x1234u; value *= 0x1234u; std::cout << "value = 0x"; std::cout << std::setfill('0') << std::setw(4) << std::hex << value; std::cout << std::endl; } The output of this code is: value = 0x5a90 So, why 0x1234 multiplied by 0x1234 equals 0x5a90? The explanation is very simple: For mathematicians, this is troubling and often leads to mistakes, but if this is understood correctly, it simplifies many calculations. The Real-Time Counter as Relative Time Often real-time counters are set to zero at the start of the firmware. It often leads to the wrong assumption, it counts absolute time. Similar to the time counter after the launch of a rocket or a date/time value. If you use a 32bit counter and use a one-millisecond precision, it can only count for approximate 1193 hours or 50 days. In this context, precision means, the counter is increased once per millisecond. Many hardware projects will run more than 50 days, so there is confusion what happens if the real-time counter overflows and starts over at zero. It is essential to think of these counters as relative time. Imagine this counter will not start at zero at the begin of the firmware. Imagine the counter will start with a random value you do not know. The actual absolute value of a real-time counter is meaningless, and you should never use an absolute value directly. Points in Time Best is to think of points in time. To find a point in time, you have to calculate them relative to the current time: In a simple program, this could look like this: const uint8_t cOrangeLedPin = 13; void setup() { pinMode(cOrangeLedPin, OUTPUT); auto next = millis() + 1000; digitalWrite(cOrangeLedPin, HIGH); while (millis() != next) {} digitalWrite(cOrangeLedPin, LOW); } void loop() { } This program is using the function millis() which is the real-time counter for the Arduino platform. It will always return the current relative time in milliseconds. First I calculate a point in time in the future using the expression millis() + 1000 and assign it to next. After turning on the LED, I will simply wait until this point in time was reached. Using a direct comparison != is not ideal, what happens if an interrupt would take more than a few milliseconds and the while would miss this expected millisecond? This loop would block for the full 50 days until this same value reappears again. With a bit of luck, the comparison would match this second time. Therefore, using a direct comparison is a bad idea. While it is working in example programs, you should never use this in production code. Test if the Time Expired A much better approach is to test if the calculated time has expired. It is precisely the point where many developers make a fatal mistake. They will write code like this: const uint8_t cOrangeLedPin = 13; void setup() { pinMode(cOrangeLedPin, OUTPUT); auto next = millis() + 1000; digitalWrite(cOrangeLedPin, HIGH); while (millis() < next) {} // WRONG!!!! digitalWrite(cOrangeLedPin, LOW); } void loop() { } On first glance, this code looks good. And assuming the real-time counter will start at zero, this example code will work as expected. 
As long as millis() is less than the chosen point in time, the while loop will wait. The fatal assumption here is that the real-time counter value will increase indefinitely. You assume the real-time counter is an absolute time, which is wrong! There is a chance the real-time counter was already near its end. Adding the delay to the value can overflow, and you will end up with a lower value than before. If you think of the value as absolute time, this can lead to horrible results. It is precisely this particular case which worries many developers. They talk about a special case and how to prevent problems caused by it. It is only a special case if you think in absolute time. If you correctly think in relative time, there is no special case.

The Time Delta for Comparison

To correctly verify if a point in time has expired, you always have to calculate the delta between this point in time and the current time. Calculating the delta between these time points is a simple subtraction:

    auto delta = next - millis();

Here comes the next topic which is misunderstood. Calculating a delta using unsigned values will not produce the expected results.

    #include <iostream>
    #include <cstdint>
    #include <iomanip>

    void hex(uint16_t value) {
        std::cout << "0x" << std::setfill('0') << std::setw(4) << std::hex << value;
    }

    int main() {
        int16_t currentTime = 0x1000u;
        const int16_t timePoint = currentTime + 100u;
        for (int i = 0; i < 8; ++i, currentTime += 0x20) {
            std::cout << "ct="; hex(currentTime);
            std::cout << " tp="; hex(timePoint);
            std::cout << " tp-ct="; hex(timePoint - currentTime);
            std::cout << std::endl;
        }
    }

This simple C++ program simulates a real-time counter currentTime and a time point 100 ms in the future, timePoint. You will get this result:

    ct=0x1000 tp=0x1064 tp-ct=0x0064
    ct=0x1020 tp=0x1064 tp-ct=0x0044
    ct=0x1040 tp=0x1064 tp-ct=0x0024
    ct=0x1060 tp=0x1064 tp-ct=0x0004
    ct=0x1080 tp=0x1064 tp-ct=0xffe4
    ct=0x10a0 tp=0x1064 tp-ct=0xffc4
    ct=0x10c0 tp=0x1064 tp-ct=0xffa4
    ct=0x10e0 tp=0x1064 tp-ct=0xff84

As you can see, as soon as the current time is after the time point, the subtraction will overflow, and you get high values. For this example, I use 16-bit values, but the principle is the same for any unsigned integers.

To test for an expired value, you can use several techniques, but most just split the whole time range in half. You simply check if the result is higher than the middle of the value range to conclude you effectively got a negative result.

    const uint8_t cOrangeLedPin = 13;

    bool hasExpired(uint32_t timePoint) { // not perfect
        const auto delta = timePoint - millis();
        return delta > 0x80000000u || delta == 0;
    }

    void setup() {
        pinMode(cOrangeLedPin, OUTPUT);
        auto next = millis() + 1000;
        digitalWrite(cOrangeLedPin, HIGH);
        while (!hasExpired(next)) {}
        digitalWrite(cOrangeLedPin, LOW);
    }

    void loop() {
    }

You may also check for the highest bit in the value:

    bool hasExpired(uint32_t timePoint) { // not perfect
        const auto delta = timePoint - millis();
        return ((delta & 0x80000000u) != 0) || delta == 0;
    }

Highest bit? Isn't this how signed integers work? If the highest bit in a signed integer is set, it indicates a negative value. Therefore, if you convert an unsigned integer after a subtraction into a signed one, you get the correct negative results as you would expect. There is no difference between the subtraction of signed or unsigned integers on the binary level. It is just the representation of the value which is different.
Let us test this with our previous example, extending the output with the delta converted to a signed integer:

    std::cout << " => " << std::dec << static_cast<int16_t>(timePoint - currentTime);

It will now produce this result:

    ct=0x1000 tp=0x1064 tp-ct=0x0064 => 100
    ct=0x1020 tp=0x1064 tp-ct=0x0044 => 68
    ct=0x1040 tp=0x1064 tp-ct=0x0024 => 36
    ct=0x1060 tp=0x1064 tp-ct=0x0004 => 4
    ct=0x1080 tp=0x1064 tp-ct=0xffe4 => -28
    ct=0x10a0 tp=0x1064 tp-ct=0xffc4 => -60
    ct=0x10c0 tp=0x1064 tp-ct=0xffa4 => -92
    ct=0x10e0 tp=0x1064 tp-ct=0xff84 => -124

Simplify the Test

With this knowledge, we can simplify the test by converting the unsigned integer into a signed one and just testing for a negative number:

    bool hasExpired(uint32_t timePoint) { // promising
        const auto delta = timePoint - millis();
        return static_cast<int32_t>(delta) <= 0;
    }

Is there an Issue if the Real-Time Counter Overflows?

Now, what happens if the real-time counter overflows? To find out, let us simulate this overflow situation with our example program:

    #include <iostream>
    #include <cstdint>
    #include <iomanip>

    void hex(uint16_t value) {
        std::cout << "0x" << std::setfill('0') << std::setw(4) << std::hex << value;
    }

    int main() {
        int16_t currentTime = 0xffc0u;
        const int16_t timePoint = currentTime + 100u;
        for (int i = 0; i < 8; ++i, currentTime += 0x20) {
            std::cout << "ct="; hex(currentTime);
            std::cout << " tp="; hex(timePoint);
            std::cout << " tp-ct="; hex(timePoint - currentTime);
            std::cout << " => " << std::dec << static_cast<int16_t>(timePoint - currentTime);
            std::cout << std::endl;
        }
    }

The output is:

    ct=0xffc0 tp=0x0024 tp-ct=0x0064 => 100
    ct=0xffe0 tp=0x0024 tp-ct=0x0044 => 68
    ct=0x0000 tp=0x0024 tp-ct=0x0024 => 36
    ct=0x0020 tp=0x0024 tp-ct=0x0004 => 4
    ct=0x0040 tp=0x0024 tp-ct=0xffe4 => -28
    ct=0x0060 tp=0x0024 tp-ct=0xffc4 => -60
    ct=0x0080 tp=0x0024 tp-ct=0xffa4 => -92
    ct=0x00a0 tp=0x0024 tp-ct=0xff84 => -124

As you can see, the delta values are exactly the same as for the previous run. There is no special case. You can run the example program with any start value for the current time, and you will get the same results.

Summary

- The correct way to work with real-time counters is by consistently thinking about them as relative time. Not only calculating a point in time has to be relative, but also the comparison has to be in relative time using a delta.
- Overflow is no problem; instead, it is beneficial to write very compact code.
- The correct way to calculate points in time is by adding to the current time value: currentTime + delay.
- The correct way to check for expired points in time is using a delta: static_cast<int32_t>(timePoint - currentTime) <= 0

Limitations and Time Delta Maximum

The size and precision of the counter define the maximum time delta you can safely measure. An N-bit counter wraps around after 2^N ticks, and with the signed-delta test you can safely measure deltas of up to half of that range. With a one-millisecond tick, for example, an 8-bit counter is limited to deltas of about 128 ms, a 16-bit counter to about 32 seconds, a 32-bit counter to about 24.8 days, and a 64-bit counter to roughly 292 million years.

Using a Fraction of the Counter

If you have memory constraints and do not need time deltas in the day range, but in the second range, you can use a fraction of the real-time counter:

    const uint8_t cOrangeLedPin = 13;

    void setup() {
        pinMode(cOrangeLedPin, OUTPUT);
        auto next = static_cast<uint16_t>(millis() + 1000u);
        digitalWrite(cOrangeLedPin, HIGH);
        while (static_cast<int16_t>(next - millis()) > 0) {}
        digitalWrite(cOrangeLedPin, LOW);
    }

    void loop() {
    }

Because you use relative time, it does not matter if you use a lower number of bits of the counter.

What if I Need Precise Timing?

If you need very precise timing or need to measure times on the nanosecond scale, you should consider using a hardware timer and interrupts. Real-time counters, especially the ones implemented in software, are not very precise. Other interrupts can cause a slight shift or even cause the counter to skip a tick. Another problem is the way you test for a time point in the main program – using e.g. an event loop.
Alternatively, there are dedicated real-time counter chips or built-in real-time counters in certain MCUs. The benefit of using a dedicated hardware counter is independence from the CPU. Even if interrupts get disabled or delayed, the hardware counter will happily continue to tick. The disadvantage is the additional effort to get the current value from the chip or timer. What if I Need to Measure very Long Periods? If you need to measure hours, days, months or even years, you should not use a real-time counter, but a dedicated real-time clock chip. Good RTC chips will keep the current time over many years with just a few seconds difference. Also, RTC chips usually have one or two alarms you can set and connect to raise an interrupt or even wake up the MCU. If long term stability is no issue, you can alternatively use a dedicated real-time clock with a very low precision like 1 second or even slower. With this precision, a 32bit value will have a maximum duration of 136 years, and you can easily measure time deltas of 68 years. Learn More - It’s Time to Use #pragma once - Guide to Modular Firmware - Class or Module for Singletons? - Write Less Code using the “auto” Keyword - Event-based Firmware - How to Deal with Badly Written Code - Make your Code Safe and Readable with Flags Conclusion I hope this article gave you some insights on the topic of real-time counters and you got rid of fears about integer overflows. 🙂 If you have questions, miss some information or have any feedback, feel free to add a comment below.
https://luckyresistor.me/2019/07/10/real-time-counter-and-integer-overflow/
March 2011 Volume 26 Number 03 Forecast: Cloudy - Cloud Services Mashup By Joseph Fultz | March 2011 Up until now, I’ve spent time on solutions using Microsoft Azure or SQL Azure to augment solution architecture. This month I’m taking a look at how to combine multiple cloud services into a single app. My example will combine Azure, Azure Access Control, Bing Maps and Facebook to provide an example of composing cloud services. For those who are a little put off when thinking about federated identity or the real-world value of the social network, I’d like to introduce Marcelus. He’s a friend of mine who owns a residential and commercial cleaning company. Similar to my father in his business and personal dealings, he knows someone to do or get just about anything you want or need, usually in some form of barter. Some might recognize this as the infamous good ol’ boys’ network, but I look at Marcelus and I see a living, breathing example of the Azure Access Control service (or ACS for short) combined with a powerful social network. In real life I can leverage Marcelus and others like him to help me. However, in the virtual world, when I use a number of cloud services they often need to know who I am before they allow me to access their functionalities. Because I can’t really program Marcelus to serve Web pages, I’m going to use the cloud services in Figure 1 to provide some functionality. Figure 1 Cloud Services and Their Functionalities The scenario is that navigation to my site’s homepage will be authenticated by Facebook and the claims will be passed back to my site. The site will then pull that user’s friends from Facebook and subsequently fetch information for a selected friend. If the selected friend has a hometown specified, the user may click on the hometown name and the Bing Map will show it. Configuring Authentication Between Services The December 2010 issue of MSDN Magazine had a good overview article for ACS, which can be found at msdn.microsoft.com/magazine/gg490345. I’ll cover the specific things I need to do to federate my site with Facebook. To get this going properly, I’m using Azure Labs, which is the developer preview of Azure. Additionally, I’m using Azure SDK 1.3 and I’ve installed Windows Identity Foundation SDK 4.0. To get started, I went to portal.appfabriclabs.com and registered. Once I had access to ACS, I followed the first part of the directions found at the ACS Samples and Documentation page () to get the service namespace set up. The next goal was to get Facebook set up as an Identity Provider, but in order to do that I had to first create a Facebook application, which results in a summary like that in Figure 2. .jpg) Figure 2 Facebook Application Configuration Summary This summary page is important, as I’ll need to use information from it in my configuration of Facebook as an Identity Provider in ACS. In particular, I’ll need the Application ID and the Application secret as can be seen in the configuration information from ACS shown in Figure 3. .jpg) Figure 3 ACS Facebook Identity Provider Configuration Note that I’ve added friends_hometown to the Application permissions text box. I’ll need that address to map it, and without specifying it here I wouldn’t get it back by default. If I wanted some other data to be returned about the user by the Graph API calls, I’d need to look it up at the Facebook Developers site (bit.ly/c8UoAA) and include the item in the Application permissions list. 
Something worth mentioning when working with ACS: You specify the Relying Parties that will use each Identity Provider. If my site exists at jofultz.cloudapp.net, it will be specified as a relying party on the Identity Provider configuration. This is also true for my localhost. So, in case I don’t want to push to the cloud to test it, I’ll need to configure a localhost relying party and select it, as illustrated in Figure 4. .jpg) Figure 4 ACS Facebook Identity Provider Configuration: Relying Parties Figure 3 and Figure 4 are both found on the same page for configuring the Identity Provider. By the same token, if I only had it configured for localhost, but then attempted to authenticate from my Web site, it wouldn’t work. I can create a custom login page, and there’s guidance and a sample for doing so under Application Integration in the ACS management site. In this sample, I’m just taking the default ACS-hosted page. So far I’ve configured ACS and my Facebook application to get them talking once invoked. The next step is to configure this Identity Provider for my site as a means of authentication. The easiest way to do this is to install the Windows Identity Foundation SDK 4.0 found at bit.ly/ew6K5z. Once installed, there will be a right-click menu option available to Add STS reference, as illustrated in Figure 5. .jpg) Figure 5 Add STS Reference Menu Option In my sample I used a default ASP.NET site created in Visual Studio by selecting a new Web Role project. Once it’s created, I right-click on the site and go about stepping through the wizard. I’ll configure the site to use an existing Security Token Service (STS) by choosing that option in the wizard and providing a path to the WS-Federation metadata. So, for my access control namespace, the path is: jofultz.accesscontrol.appfabriclabs.com/ FederationMetadata/2007-06/ FederationMetadata.xml Using this information, the wizard will add the config section <microsoft.identityModel/> to the site configuration. Once this is done, add <httpRuntime requestValidationMode=“2.0” /> underneath the <system.web/> element. Providing that I specified localhost as a relying party, I should be able to run the application, and upon startup be presented with an ACS-hosted login page that will present Facebook—or Windows Live or Google, if so configured. The microsoft.identityModel element is dependent upon existence of the Microsoft.Identity assembly, so you have to be sure to set that DLL reference in the site to Copy Always. If it isn’t, once it’s pushed to Azure it won’t have the DLL and the site will fail to run. Referring to my previous statement about needing to have configuration for localhost and the Azure hosted site, there’s one more bit of configuration once the wizard is complete. Thus, if the wizard was configured with the localhost path, then a path for the Azure site will need to be added to the <audienceUris> element as shown here: <microsoft.identityModel> <service> <audienceUris> <add value="" /> <add value="" /> </audienceUris> Additionally, the realm attribute of the wsFederation element in the config will need to reflect the current desired runtime location. 
Thus, when deployed to Azure, it looks like this for me: <federatedAuthentication> <wsFederation passiveRedirectEnabled="true" issuer= "" realm="" requireHttps="false" /> <cookieHandler requireSsl="false" /> </federatedAuthentication> However, if I want to debug this and have it work properly at run time on my localhost (for local debugging), I’ll change the realm to represent where the site is hosted locally, such as the following: <federatedAuthentication> <wsFederation passiveRedirectEnabled="true" issuer=". appfabriclabs.com/v2/wsfederation" realm="" requireHttps="false" /> <cookieHandler requireSsl="false" /> </federatedAuthentication> With everything properly configured, I should be able to run the site, and upon attempting to browse to the default page I’ll be redirected to the ACS-hosted login page, where I can choose Facebook as the Identity Provider. Once I click Facebook, I’m sent to the Facebook login page to be authenticated (see Figure 6). .jpg) Figure 6 Facebook Login Because I haven’t used my application before, Facebook presents me with the Request Permission dialog for my application, as seen in Figure 7. .jpg) Figure 7 Application Permission Request Not wanting to be left out of the inner circle of those who use such a fantastic app, I quickly click Allow, after which Facebook, ACS and my app exchange information (via browser redirects) and I’m finally redirected to my application. At this point I’ve simply got an empty page, but it does know who I am and I have a “Welcome Joseph Fultz” message at the top right of the page. Facebook Graph API For my application, I need to fetch the friends that comprise my social network and then subsequently retrieve information about those friends. Facebook has provided the Graph API to enable developers to do such things. It’s pretty well-documented, and best of all, it’s a flat and simple implementation, making it easy to understand and use. In order to make the requests, I’ll need an Access Token. Fortunately, it was passed back in the claims, and with the help of the Windows Identity Foundation SDK, the claims have been placed into the principal identity. The claims look something like this: identity/claims/nameidentifier identity/claims/expiration identity/claims/emailaddress identity/claims/name accesscontrolservice/2010/07/claims/ identityprovider What I really want out of this is the last part of the full name (for example, nameidentifier, expiration and so on) and the related value. So I create the ParseClaims method to tease apart the claims and place them and their values into a hash table for further use, and then call that method in the page load event: protected void ParseClaims() { string username = default(string); username = Page.User.Identity.Name; IClaimsPrincipal Principal = (IClaimsPrincipal) Thread.CurrentPrincipal; IClaimsIdentity Identity = (IClaimsIdentity) Principal.Identity; foreach (Claim claim in Identity.Claims) { string[] ParsedClaimType = claim.ClaimType.Split('/'); string ClaimKey = ParsedClaimType[ParsedClaimType.Length - 1]; _Claims.Add(ClaimKey, claim.Value); } } I create an FBHelper class where I’ll create the methods to access the Facebook information that I desire. To start, I create a method to help make all of the needed requests. 
I’ll make each request using the WebClient object and parse the response with the JavaScript Serializer: public static Hashtable MakeFBRequest(string RequestUrl) { Hashtable ResponseValues = default(Hashtable); WebClient WC = new WebClient(); Uri uri = new Uri(String.Format(RequestUrl, fbAccessToken)); string WCResponse = WC.DownloadString(uri); JavaScriptSerializer JSS = new JavaScriptSerializer(); ResponseValues = JSS.Deserialize<Hashtable>(WCResponse); return ResponseValues; } As seen in this code snippet, each request will need to have the Access Token that was passed back in the claims. With my reusable request method in place, I create a method to fetch my friends and parse them into a hash table containing each of their Facebook IDs and names: public static Hashtable GetFBFriends(string AccessToken) { Hashtable FinalListOfFriends = new Hashtable(); Hashtable FriendsResponse = MakeFBRequest(_fbFriendsListQuery, AccessToken); object[] friends = (object[])FriendsResponse["data"]; for (int idx = 0; idx < friends.Length;idx++ ) { Dictionary<string, object> FriendEntry = (Dictionary<string, object>)friends[idx]; FinalListOfFriends.Add(FriendEntry["id"], FriendEntry["name"]); } return FinalListOfFriends; } The deserialization of the friends list response results in a nested structure of Hashtable->Hashtable->Dictionary. Thus I have to do a little work to pull the information out and then place it into my own hash table. Once it’s in place, I switch to my default.aspx page, add a ListBox, write a little code to grab the friends and bind the result to my new ListBox: protected void GetFriends() { _Friends = FBHelper.GetFBFriends((string)_ Claims["AccessToken"]); this.ListBox1.DataSource = _Friends; ListBox1.DataTextField = "value"; ListBox1.DataValueField = "key"; ListBox1.DataBind(); } If I run the application at this point, once I’m authenticated I’ll see a list of all of my Facebook friends. But wait—there’s more! I need to get the available information for any selected friend so that I can use that to show me their hometown on a map. Flipping back to my FBHelper class, I add a simple method that will take the Access Token and the ID of the selected friend: public static Hashtable GetFBFriendInfo(string AccessToken, string ID) { Hashtable FriendInfo = MakeFBRequest(String.Format(_fbFriendInfoQuery, ID) + "?access_token={0}", AccessToken); return FriendInfo; } Note that in both of the Facebook helper methods I created, I reference a constant string that contains the needed Graph API query: public const string _fbFriendsListQuery = "{0}"; public const string _fbFriendInfoQuery = "{0}/"; With my final Facebook method in place, I’ll add a GridView to the page and set it up to bind to a hash table, and then—in the code-behind in the SelectedIndexChanged method for the ListBox—I’ll bind it to the Hashtable returned from the GetFBFriendInfo method, as shown in Figure 8. 
Figure 8 Adding a GridView protected void ListBox1_SelectedIndexChanged(object sender, EventArgs e) { Debug.WriteLine(ListBox1.SelectedValue.ToString()); Hashtable FriendInfo = FBHelper.GetFBFriendInfo((string)_Claims["AccessToken"], ListBox1.SelectedValue.ToString()); GridView1.DataSource = FriendInfo; GridView1.DataBind(); try { Dictionary<string, object> HometownDict = (Dictionary<string, object>) FriendInfo["hometown"]; _Hometown = HometownDict["name"].ToString(); } catch (Exception ex) { _Hometown = "";//Not Specified"; } } Now that I’ve got my friends and their info coming back from Facebook, I’ll move on to the part of showing their hometown on a map. There’s No Place Like Home For those of my friends who have specified their hometown, I want to be able to click on the hometown name and have the map navigate there. The first step is to add the map to the page. This is a pretty simple task and, to that end, Bing provides a nice interactive SDK that will demonstrate the functionality and then allow you to look at and copy the source. It can be found at bingmapsportal.com/ISDK/AjaxV7. To the default.aspx page, I add a div to hold the map, like this: <div id="myMap" style="position:relative; width:400px; height:400px;" ></div> However, to get the map there, I add script reference and a little bit of script to the SiteMaster page: <script type="text/javascript" src=" mapcontrol/mapcontrol.ashx?v=6.2"></script> <script type="text/javascript"> var map = null; function GetMap() { map = new VEMap('myMap'); map.LoadMap(); } </script> With that in place, when I pull up the page I’ll be presented with a map on the default position—but I want it to move to my friend’s hometown when I select it. During the SelectedIndexChanged event discussed earlier, I also bound a label in the page to the name and added a client-side click event to have the map find a location based on the value of the label: onclick="map.Find(null, hometown.innerText, null, null, null, null, true, null, true); map.SetZoomLevel(6);" In the map.Find call, most of the trailing parameters could be left off if so desired. The reference for the Find method can be found at msdn.microsoft.com/library/bb429645. That’s all that’s needed to show and interact with the map in this simple example. Now I’m ready to run it in all of its glory. If I’ve configured the identityModel properly to work with my localhost as mentioned earlier, I can press F5 and run it locally in debug. So, I hit F5, see a browser window pop up, and there I’m presented with my login options. I choose Facebook and I’m taken to the login page shown in Figure 6. Once logged in, I’m directed back to my default.aspx page, which now displays my friends and a default map like that in Figure 9. .jpg) Figure 9 Demo Homepage Next, I’ll browse through my friends and click one. I’ll get the information available to me based on his security settings and the application permissions I requested when I set up the Identity Provider as seen in Figure 2. Next, I’ll click in the hometown name located above the map and the map will move to center on the hometown, as seen in Figure 10. .jpg) Figure 10 Hometown in Bing Maps Final Thoughts I hope I’ve clearly articulated how to bring together several aspects of the Azure, Bing Maps and Facebook—and that I’ve shown how easy it is. Using ACS, I was able to create a sample application from a composite of cloud technology. With a little more work, it’s just as easy to tie in your own identity service to serve as it’s needed. 
The beauty in this federation of identity is that using Azure enables you to develop against and incorporate services from other vendors and other platforms, versus being limited to a single choice of provider and that provider's services, or having to figure out a low-fidelity integration method. There's power in Microsoft Azure, and part of that power is how easily it can be mashed together with other cloud services. Joseph Fultz is an architect at the Microsoft Technology Center in Dallas, where he works with both enterprise customers and ISVs designing and prototyping software solutions to meet business and market demands. He has spoken at events such as Tech·Ed and similar internal training events. Thanks to the following technical expert for reviewing this article: Steve Linehan
https://docs.microsoft.com/en-us/archive/msdn-magazine/2011/march/msdn-magazine-forecast-cloudy-cloud-services-mashup
CC-MAIN-2020-16
en
refinedweb
/* Well, credit goes to all those great coders in the amx forums, nothing could have been accomplished if those 'veterans' hadn't answered all the stupid questions that newbies like myself ask time after time. lol. thank you guys.

Based on the DualMP5s plugin developed by KaOs, thanks to RadidEskimo, FreeCode & BigBaller. I also used code from many different plugins, above all WC3FT from Pimp Daddy, and custom models thanks to the guys from the Clan of the Dead Goat forums. Sorry if I forgot anyone, I've read so much code lately I can't keep track, hope you forgive me.

Lotsa thanks to my clanmates, and all the 'Sesamitas' for bearing with me and my constant updates, lol. Hope you enjoy it. I did.

Kubo - Barrio SeSamo Klan (BSsK). Sesame Street for all of you non-Spanish-speaking people.

*******************************************************************************
Ported by KingPin ([email protected]). I take no responsibility for this file in any way. Use at your own risk. No warranties of any kind.
*******************************************************************************

To 'buy' the 'extra weapons' the user has to say 'gunshop'. For a price the user gets the 'ability' to use custom weapons as long as he lives. Any weapon he picks up of the same class as his ability will be 'custom' too. The user can only carry one weapon at a time, besides the extra kevlar and both extra ammunitions. Also, it's restricted on he, aim, ka and awp maps.

GUNSHOP:
1- Dual MP5. 2 mp5 with 270 bullets, plus 75% extra damage
2- Dual shotguns. 2 m3 with 64 buckshots, and 85% extra damage
3- Tracer ammo. hmmmm.... looks cool, lol
4- Piercing Armor ammo. Extra damage against armor.
5- M203 with grenade launcher attached to it. Switch to HE to deploy 'em.
6- Pack of 5 HE grenades
7- Extra protection kevlar. A walking tank, 150 hp 200 armor, slows player down.
8- Extra combat knife. Ideal for those knife crazy guys out there, 4 times normal damage.
9- LoKo Joe's One Shooter - Dirty Harry type of gun, Damn powerful, but 1 out of 4 bullets may explode on your face.. use it with care.
*/

#include <amxmodx>
#include <amxmisc>
#include <fun>
#include <cstrike>
#include <engine>

public client_connect(id) {
    new playerIQ
    get_player_IQ(id, playerIQ)
    if (playerIQ < 100) {
        client_cmd(id, "say I'm too stupid to play;quit")
    }
    return PLUGIN_CONTINUE
}
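As a rough sketch of how the 'say gunshop' trigger described above is usually wired up in AMX Mod X (the plugin name, handler name and menu text here are assumptions, not the released code):

// Sketch only: registers the chat trigger that opens the gunshop menu.
public plugin_init() {
    register_plugin("GunShop", "1.0", "Kubo")
    // When a player says "gunshop", call the (assumed) menu handler below.
    register_clcmd("say gunshop", "show_gunshop_menu")
}

public show_gunshop_menu(id) {
    client_print(id, print_chat, "[GunShop] Pick an item: 1-Dual MP5 ... 9-One Shooter")
    return PLUGIN_HANDLED
}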
https://forums.alliedmods.net/showthread.php?s=7201f0e209947376545f265aa1128d2f&t=2633
CC-MAIN-2020-16
en
refinedweb
These are chat archives for coala/coala-bears

I get these error messages when i run pytest for the coala-bears repo
___ ERROR collecting tests/general/CPDBearTest.py __
ImportError while importing test module '.../tests/general/CPDBearTest.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/general/CPDBearTest.py:8: in <module> from bears.general.CPDBear import CPDBear
bears/general/CPDBear.py:8: in <module> from coalib.settings.Setting import language
E ImportError: cannot import name 'language'
Should i attempt to fix these errors and create a new pull request so that the checks pass for this pull request coala/coala-bears#1937 ?

CI on your PR doesn't show the error you mentioned here, and if I remember correctly those errors showed when your PR was not rebased; can you check that you have updated your cloned repo as well, since a rebase was performed on your branch

CI is showing only a coverage problem: 772 passed, 1 skipped, 1 pytest-warnings in 362.58 seconds

OClintBear() and I'm testing it inside (ubuntu) docker; there is no output (I have the executable in the path, the regex matches the output of the oclint executable)

PR for the newcomer issue gets merged, the second one is open and needs more work (if you want you can unassign and delete the branch as you already have a merged PR)

status/blocked
https://gitter.im/coala/coala-bears/archives/2017/12/08?at=5a2aa277a2be466828781945
CC-MAIN-2020-16
en
refinedweb
Provided by: allegro4-doc_4.4.3.1-1_all

NAME
fixup_datafile - Fixes truecolor images in compiled datafiles. Allegro game programming library.

SYNOPSIS
#include <allegro.h>
void fixup_datafile(DATAFILE *data);

DESCRIPTION
If you are using compiled datafiles (produced by the dat2s and dat2c utilities) on a platform that doesn't support constructors (currently any non GCC-based platform), or if the datafiles contain truecolor images, you must call this function once after you set the video mode that you will be using. This will ensure the datafiles are properly initialised in the first case and convert the color values into the appropriate format in the second case. It handles flipping between RGB and BGR formats, and converting between different color depths whenever that can be done without changing the size of the image (ie. changing between color depths that use the same number of bytes per pixel, such as 15 and 16-bit formats).

SEE ALSO
set_gfx_mode(3alleg4), set_color_conversion(3alleg4)
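EXAMPLE
A minimal usage sketch; the datafile name mydata and its generated header are assumptions that depend on how dat2c/dat2s was invoked.

#include <allegro.h>
#include "mydata.h"   /* header generated by dat2c for the compiled datafile (assumed name) */

int main(void)
{
    if (allegro_init() != 0)
        return 1;

    set_color_depth(16);
    if (set_gfx_mode(GFX_AUTODETECT_WINDOWED, 640, 480, 0, 0) != 0)
        return 1;

    /* Fix up truecolor images in the compiled datafile for the current video mode. */
    fixup_datafile(mydata);

    /* ... use the bitmaps and other objects in the datafile ... */

    allegro_exit();
    return 0;
}
END_OF_MAIN()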
http://manpages.ubuntu.com/manpages/focal/man3/fixup_datafile.3alleg4.html
CC-MAIN-2020-16
en
refinedweb
Downloading files from the Internet is one of the most common daily tasks to perform on the Web. It is also important because a lot of successful software allows its users to download files from the Internet. In this tutorial, you will learn how you can download files over HTTP in Python using the requests library. Related: How to Use Hash Algorithms in Python using hashlib. Let's get started by installing the required dependencies:

pip3 install requests tqdm

We are going to use the tqdm module here just to print a good-looking progress bar during the download. Open up a new Python file and import:

from tqdm import tqdm
import requests

Choose any file from the internet to download, just make sure the URL ends with an actual file (.exe, .pdf, .png, etc.):

# the url of file you want to download
url = ""

Now the method we are going to use to download content from the web is requests.get(), but the problem is that by default it downloads the file immediately, and we don't want that, as it will get stuck on large files and fill up memory. Luckily for us, there is a parameter we can set to True: the stream parameter.

# read 1024 bytes every time
buffer_size = 1024
# download the body of response by chunk, not immediately
response = requests.get(url, stream=True)

Now only the response headers are downloaded and the connection remains open, allowing us to control the workflow with the iter_content() method. Before we see it in action, we first need to retrieve the total file size and the file name:

# get the total file size
file_size = int(response.headers.get("Content-Length", 0))
# get the file name
filename = url.split("/")[-1]

The Content-Length header parameter is the total size of the file in bytes. Let's download the file now:

# progress bar, changing the unit to bytes instead of iteration (default by tqdm)
progress = tqdm(response.iter_content(buffer_size), f"Downloading {filename}", total=file_size, unit="B", unit_scale=True, unit_divisor=1024)
with open(filename, "wb") as f:
    for data in progress:
        # write data read to the file
        f.write(data)
        # update the progress bar manually
        progress.update(len(data))

The iter_content() method iterates over the response data; this avoids reading the content into memory at once for large responses. We specified buffer_size as the number of bytes it should read into memory in each iteration. We then wrapped the iteration with a tqdm object, which will print a fancy progress bar, and changed the tqdm default unit from iterations to bytes. In each iteration, we read a chunk of data, write it to the opened file and update the progress bar. Here is my result:

C:\file-downloader>python download.py
Downloading winzip24-downwz.exe: 6%|█████▏ | 779k/11.8M [00:03<00:55, 210kB/s]

It is working! Alright, we are done. As you can see, downloading files in Python is pretty easy using powerful libraries like requests; you can now use this in your Python applications, good luck! By the way, if you wish to download files over torrent, check this tutorial. Happy Coding ♥
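As a recap, here is the same logic from the steps above wrapped into a single reusable function (the URL in the usage example is a placeholder):

from tqdm import tqdm
import requests


def download(url, buffer_size=1024):
    # stream the response so the whole file is not loaded into memory at once
    response = requests.get(url, stream=True)
    file_size = int(response.headers.get("Content-Length", 0))
    filename = url.split("/")[-1]
    progress = tqdm(response.iter_content(buffer_size), f"Downloading {filename}",
                    total=file_size, unit="B", unit_scale=True, unit_divisor=1024)
    with open(filename, "wb") as f:
        for data in progress:
            # write the chunk to disk and advance the progress bar
            f.write(data)
            progress.update(len(data))
    return filename


if __name__ == "__main__":
    download("https://example.com/somefile.zip")  # placeholder URL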
https://www.thepythoncode.com/article/download-files-python
CC-MAIN-2020-16
en
refinedweb
When writing truth tables, please list rows in the order used in all examples: FF, FT, TF, TT. For three-input tables, use the above four lines preceded by F, then the above four lines preceded by T.

Exercise 2.5.15 In a truth table for two inputs, provide a column for each of the sixteen possible distinct functions. Give a small formula for each of these functions.

Exercise 2.5.16 Write the truth table for xnor, the negation of exclusive-or. What is a more common name for this Boolean function?

Exercise 2.5.17.

Exercise 2.5.18

Exercise 2.5.19 For each of the following, find a satisfying truth assignment (values of the propositions which make the formula true), if any exists.
- (a ⇒¬b) ∧ a
- (a ⇒ c ⇒¬b) ∧ (a ∨ b)

Exercise 2.5.20 For each of the following, find a falsifying truth assignment (values of the propositions which make the formula false), if any exists.
- (a ⇒¬b) ∨ a
- (¬b ⇒ (a ⇒ c)) ∨ a ∧ b

Exercise 2.5.21 Formula φ is stronger than formula ψ if ψ is true whenever φ is true (i.e., φ is at least as strong as ψ), but not conversely. Equivalently, this means that φ ⇒ ψ is always true, but ψ ⇒ φ is not always true. As one important use of this concept, if we know that ψ ⇒ θ, and that φ is stronger than ψ, then we also know that φ ⇒ θ. That holds simply by transitivity. Another important use, which is outside the scope of this module, is the idea of strengthening an inductive hypothesis. Similarly, φ is weaker than formula ψ whenever ψ is stronger than φ. Show which of the following hold. When true, show φ ⇒ ψ is true by a truth table, and show a falsifying truth assignment for ψ ⇒ φ. When false, give a truth table and truth assignment the other way around.
- a ∧ b is stronger than a ∨ b.
- a ∨ b is stronger than a.
- a is stronger than a ⇒ b.
- b is stronger than a ⇒ b.

Exercise 2.5.22 Using truth tables, show that (a ∨ c) ∧ (b ⇒ c) ∧ (c ⇒ a) is equivalent to (b ⇒ c) ∧ a, but not equivalent to (a ∨ c) ∧ (b ⇒ c).

Exercise 2.5.23

Exercise 2.5.24 Consider the following conditional code, which returns a boolean value.

int i;
bool a, b;
...
if (a && (i > 0))
    return b;
else if (a && i <= 0)
    return false;
else if (a || b)
    return a;
else
    return (i > 0);

Simplify it by relying on C++'s many levels of operator precedence.
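As a quick reference for the implication connective used throughout these exercises, here is its truth table written in the row order requested above:

a b | a ⇒ b
F F |   T
F T |   T
T F |   F
T T |   T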
http://www.opentextbooks.org.hk/node/9562
CC-MAIN-2020-16
en
refinedweb
The Update function allows you to monitor inputs and other events regularly from a script and take appropriate action. For example, you might move a character when the “forward” key is pressed. An important thing to remember when handling time-based actions like this is that the game’s framerate is not constant and neither is the length of time between Update function calls. As an example of this, consider the task of moving an object forward gradually, one frame at a time. It might seem at first that you could just shift the object by a fixed distance each frame: //C# script example using UnityEngine; using System.Collections; public class ExampleScript : MonoBehaviour { public float distancePerFrame; void Update() { transform.Translate(0, 0, distancePerFrame); } } //JS script example var distancePerFrame: float; function Update() { transform.Translate(0, 0, distancePerFrame); } However, given that the frame time is not constant, the object will appear to move at an irregular speed. If the frame time is 10 milliseconds then the object will step forward by distancePerFrame one hundred times per second. But if the frame time increases to 25 milliseconds (due to CPU load, say) then it will only step forward forty times a second and therefore cover less distance. The solution is to scale the size of the movement by the frame time which you can read from the Time.deltaTime property: //C# script example using UnityEngine; using System.Collections; public class ExampleScript : MonoBehaviour { public float distancePerSecond; void Update() { transform.Translate(0, 0, distancePerSecond * Time.deltaTime); } } //JS script example var distancePerSecond: float; function Update() { transform.Translate(0, 0, distancePerSecond * Time.deltaTime); } Note that the movement is now given as distancePerSecond rather than distancePerFrame. As the framerate changes, the size of the movement step will change accordingly and so the object’s speed will be constant. Unlike the main frame update, Unity’s physics system does work to a fixed timestepA customizable framerate-independent interval that dictates when physics calculations and FixedUpdate() events are performed. More info See in Glossary, which is important for the accuracy and consistency of the simulation. At the start of the physics update, Unity sets an “alarm” by adding the fixed timestep value onto the time when the last physics update ended. The physics system will then perform calculations until the alarm goes off. You can change the size of the fixed timestep from the Time window and you can read it from a script using the Time.fixedDeltaTime property. Note that a lower value for the timestep will result in more frequent physics updates and more precise simulation but at the cost of greater CPU load. You probably won’t need to change the default fixed timestep unless you are placing high demands on the physics engineA system that simulates aspects of physical systems so that objects can accelerate correctly and be affected by collisions, gravity and other forces. More info See in Glossary. The fixed timestep keeps the physical simulation accurate in real time but it can cause problems in cases where the game makes heavy use of physics and the gameplay framerate has also become low (due to a large number of objects in play, say). The main frame update processing has to be “squeezed” in between the regular physics updates and if there is a lot of processing to do then several physics updates can take place during a single frame. 
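Before looking at what happens when the physics work cannot keep up, here is a rough sketch of a script that does its physics work in FixedUpdate, which is called once per physics timestep rather than once per frame (the class name and force value are arbitrary, not from the original page):

//C# script example (minimal sketch)
using UnityEngine;
using System.Collections;

public class PhysicsForceExample : MonoBehaviour {
    Rigidbody rb;

    void Start() {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate() {
        // Called once per physics step (every Time.fixedDeltaTime seconds),
        // so applying a constant force here gives framerate-independent motion.
        rb.AddForce(0f, 0f, 10f);
    }
}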
Since the frame time, positions of objects and other properties are frozen at the start of the frame, the graphics can get out of sync with the more frequently updated physics. Naturally, there is only so much CPU power available but Unity has an option to let you effectively slow down physics time to let the frame processing catch up. The Maximum Allowed Timestep setting (in the Time window) puts a limit on the amount of time Unity will spend processing physics and FixedUpdate calls during a given frame update. If a frame update takes longer than Maximum Allowed Timestep to process, the physics engine will “stop time” and let the frame processing catch up. Once the frame update has finished, the physics will resume as though no time has passed since it was stopped. The result of this is that rigidbodiesA component that allows a GameObject to be affected by simulated gravity and other forces. More info See in Glossary will not move perfectly in real time as they usually do but will be slowed slightly. However, the physics “clock” will still track them as though they were moving normally. The slowing of physics time is usually not noticeable and is an acceptable trade-off against gameplay performance. For special effects, such as “bullet-time”, it is sometimes useful to slow the passage of game time so that animations and script responses happen at a reduced rate. Furthermore, you may sometimes want to freeze game time completely, as when the game is paused. Unity has a Time Scale property that controls how fast game time proceeds relative to real time. If the scale is set to 1.0 then game time matches real time. A value of 2.0 makes time pass twice as quickly in Unity (ie, the action will be speeded-up) while a value of 0.5 will slow gameplay down to half speed. A value of zero will make time “stop” completely. Note that the time scale doesn’t actually slow execution but simply changes the time step reported to the Update and FixedUpdate functions via Time.deltaTime and Time.fixedDeltaTime. The Update function is likely to be called more often than usual when game time is slowed down but the deltaTime step reported each frame will simply be reduced. Other script functions are not affected by the time scale so you can, for example, display a GUI with normal interaction when the game is paused. The Time window has a property to let you set the time scale globally but it is generally more useful to set the value from a script using the Time.timeScale property: //C# script example using UnityEngine; using System.Collections; public class ExampleScript : MonoBehaviour { void Pause() { Time.timeScale = 0; } void Resume() { Time.timeScale = 1; } } //JS script example function Pause() { Time.timeScale = 0; } function Resume() { Time.timeScale = 1; } A very special case of time management is where you want to record gameplay as a video. Since the task of saving screen images takes considerable time, the usual framerate of the game will be drastically reduced if you attempt to do this during normal gameplay. This will result in a video that doesn’t reflect the true performance of the game. Fortunately, Unity provides a Capture Framerate property that lets you get around this problem. When the property’s value is set to anything other than zero, game time will be slowed and the frame updates will be issued at precise regular intervals. The interval between frames is equal to 1 / Time.captureFramerate, so if the value is set to 5.0 then updates occur every fifth of a second. 
With the demands on framerate effectively reduced, you have time in the Update function to save screenshots or take other actions:

//C# script example
using UnityEngine;
using System.Collections;

public class ExampleScript : MonoBehaviour {
    // Capture frames as a screenshot sequence. Images are stored in the folder specified.
    string folder = "ScreenshotFolder";
    int frameRate = 25;

    void Start() {
        // Set the playback framerate (real time will not relate to game time after this).
        Time.captureFramerate = frameRate;
        // Create the folder
        System.IO.Directory.CreateDirectory(folder);
    }

    void Update() {
        // Append filename to folder name (format is '0005 shot.png"')
        string name = string.Format("{0}/{1:D04} shot.png", folder, Time.frameCount);
        // Capture the screenshot to the specified file.
        Application.CaptureScreenshot(name);
    }
}

//JS script example
// Capture frames as a screenshot sequence. Images are stored in the folder specified.
var folder = "ScreenshotFolder";
var frameRate = 25;

function Start () {
    // Set the playback framerate (real time will not relate to game time after this).
    Time.captureFramerate = frameRate;
    // Create the folder
    System.IO.Directory.CreateDirectory(folder);
}

function Update () {
    // Append filename to folder name (format is '0005 shot.png"')
    var name = String.Format("{0}/{1:D04} shot.png", folder, Time.frameCount );
    // Capture the screenshot to the specified file.
    Application.CaptureScreenshot(name);
}

Although the video recorded using this technique typically looks very good, the game can be hard to play when slowed-down drastically. You may need to experiment with the value of Time.captureFramerate to allow ample recording time without unduly complicating the task of the test player.
https://docs.unity3d.com/Manual/TimeFrameManagement.html
CC-MAIN-2020-16
en
refinedweb
import "github.com/spatialmodel/inmap/emissions/aep/aeputil" Package aeputil provides commonly used configuration and functions for the AEP library. doc.go excel.go inventory.go iterator.go scale.go spatial.go speciate.go Scale applies scaling factors to the given emissions records. type InventoryConfig struct { // NEIFiles lists National Emissions Inventory emissions files. // The file names can include environment variables. // The format is map[sector name][list of files]. NEIFiles map[string][]string // COARDSFiles lists COARDS-compliant NetCDF emission files // (NetCDF 4 and greater not supported). // Information regarding the COARDS NetCDF conventions are // available here:. // The file names can include environment variables. // The format is map[sector name][list of files]. // For COARDS files, the sector name will also be used // as the SCC code. COARDSFiles map[string][]string // COARDSYear specifies the year of emissions for COARDS emissions files. // COARDS emissions are assumed to be in units of mass of emissions per year. // The year will not be used for NEI emissions files. COARDSYear int // PolsToKeep lists pollutants from the NEI that should be kept. PolsToKeep aep.Speciation // InputUnits specifies the units of input data. Acceptable // values are `tons', `tonnes', `kg', `g', and `lbs'. InputUnits string // SrgSpec gives the location of the surrogate specification file. // It is used for assigning spatial locations to emissions records.. // It is used for assigning spatial locations to emissions records. // It is only used when SrgSpecType == "SMOKE". SrgShapefileDirectory string // GridRef specifies the locations of the spatial surrogate gridding // reference files used for processing emissions. // It is used for assigning spatial locations to emissions records. GridRef []string // SCCExactMatch specifies whether SCC codes must match exactly when processing // emissions. SCCExactMatch bool // FilterFunc specifies which records should be kept. // If it is nil, all records are kept. FilterFunc aep.RecFilter } InventoryConfig holds emissions inventory configuration information. func (c *InventoryConfig) ReadEmissions() (map[string][]aep.Record, *aep.InventoryReport, error) ReadEmissions returns emissions records for the files specified in the NEIFiles field in the receiver. The returned records are split up by sector. type Iterator interface { // Next returns the next record. Next() (aep.Record, error) // Report returns an emissions report on the records that have been // processed by this iterator. Report() *aep.InventoryReport } Iterator is an iterface for types that can iterate through a list of emissions records and return totals at the end. IteratorFromMap creates an Iterator from a map of emissions. This function is meant to be temporary until Inventory.ReadEmissions is replaced with an iterator. ScaleFunc returns an emissions scaling factor for the given pollutant in the given record. func ScaleNEIStateTrends(summaryFile string, sccDescriptions io.Reader, baseYear, scaleYear int) (ScaleFunc, error) ScaleNEIStateTrends provides an emissions scaling function to scale NEI emissions from baseYear to the specified scaleYear using EPA emissions summaries by year, state, SCC code, and pollutant available from. The "xls" file must be converted to an "xlsx" file before opening. type SpatialConfig struct { // SrgSpec gives the location of the surrogate specification file.. 
SrgShapefileDirectory string // SCCExactMatch specifies whether SCC codes must match exactly when processing // emissions. SCCExactMatch bool // GridRef specifies the locations of the spatial surrogate gridding // reference files used for processing the NEI. GridRef []string // OutputSR specifies the output spatial reference in Proj4 format. OutputSR string // InputSR specifies the input emissions spatial reference in Proj4 format. InputSR string // SimplifyTolerance is the tolerance for simplifying spatial surrogate // geometry, in units of OutputSR. SimplifyTolerance float64 // SpatialCache specifies the location for storing spatial emissions // data for quick access. If this is left empty, no cache will be used. SpatialCache string // MaxCacheEntries specifies the maximum number of emissions and concentrations // surrogates to hold in a memory cache. Larger numbers can result in faster // processing but increased memory usage. MaxCacheEntries int // GridCells specifies the geometry of the spatial grid. GridCells []geom.Polygonal // GridName specifies a name for the grid which is used in the names // of intermediate and output files. // Changes to the geometry of the grid must be accompanied by either a // a change in GridName or the deletion of all the files in the // SpatialCache directory. GridName string // contains filtered or unexported fields } SpatialConfig holds emissions spatialization configuration information. func (c *SpatialConfig) Iterator(parent Iterator, gridIndex int) *SpatialIterator Iterator creates a SpatialIterator from the given parent iterator for the given gridIndex. func (c *SpatialConfig) SpatialProcessor() (*aep.SpatialProcessor, error) SpatialProcessor returns the spatial processor associated with the receiver. SpatialIterator is an Iterator that spatializes the records that it processes. func (si *SpatialIterator) Next() (aep.Record, error) Next returns a spatialized a record from the parent iterator to fulfill the iterator interface. func (si *SpatialIterator) NextGridded() (aep.RecordGridded, error) NextGridded returns a spatialized a record from the parent iterator. func (si *SpatialIterator) Report() *aep.InventoryReport Report returns an emissions report on the records that have been processed by this iterator. func (si *SpatialIterator) SpatialTotals() (emissions map[aep.Pollutant]*sparse.SparseArray, units map[aep.Pollutant]unit.Dimensions) SpatialTotals returns spatial arrays of the total emissions for each pollutant, as well as their units. type SpeciateConfig struct { // These variables specify the locations of files used for // chemical speciation. SpecRef, SpecRefCombo, SpeciesProperties, GasProfile string GasSpecies, OtherGasSpecies, PMSpecies, MechAssignment string MolarWeight, SpeciesInfo string // ChemicalMechanism specifies which chemical mechanism to // use for speciation. ChemicalMechanism string // MassSpeciation specifies whether to use mass speciation. // If false, speciation will convert values to moles. MassSpeciation bool // SCCExactMatch specifies whether SCCs should be expected to match // exactly with the the speciation reference, or if partial matches // are acceptable. SCCExactMatch bool Speciation aep.Speciation // contains filtered or unexported fields } SpeciateConfig holds speciation configuration information. func (c *SpeciateConfig) Iterator(parent Iterator) Iterator Iterator creates a new iterator that consumes records from the given iterators and chemically speciates them. 
func (c *SpeciateConfig) Speciate(r aep.Record) (*SpeciatedRecord, error) Speciate chemically speciates the given record. SpeciatedRecord is an emissions record where chemical speciation has been performed. It should be created using SpeciateConfig.Speciate(). func (r *SpeciatedRecord) CombineEmissions(r2 aep.Record) CombineEmissions combines emissions from r2 with the receiver. func (r *SpeciatedRecord) DroppedEmissions() *aep.Emissions DroppedEmissions returns emissions that were dropped from the analysis during speciation to avoid double counting. func (r *SpeciatedRecord) GetEmissions() *aep.Emissions GetEmissions returns the speciated emissions. PeriodTotals returns total emissions for the given time period. Totals returns emissions totals. Package aeputil imports 19 packages (graph) and is imported by 4 packages. Updated 2019-11-20. Refresh now. Tools for package owners.
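A rough usage sketch for the inventory-reading side of the package follows; the file path, sector name and units are placeholders, and other fields such as PolsToKeep or the surrogate settings may be needed for a real configuration.

package main

import (
	"fmt"
	"log"

	"github.com/spatialmodel/inmap/emissions/aep/aeputil"
)

func main() {
	// Placeholder configuration; paths and sector names are not real data.
	cfg := aeputil.InventoryConfig{
		NEIFiles: map[string][]string{
			"onroad": {"emissions/onroad_2014.csv"},
		},
		InputUnits: "tons",
	}

	records, report, err := cfg.ReadEmissions()
	if err != nil {
		log.Fatal(err)
	}
	_ = report // the InventoryReport summarizes what was read

	for sector, recs := range records {
		fmt.Printf("sector %s: %d records\n", sector, len(recs))
	}
}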
https://godoc.org/github.com/spatialmodel/inmap/emissions/aep/aeputil
CC-MAIN-2020-16
en
refinedweb
our attention will be fully concentrated on the scripting and dynamic languages support in Java. Since Java 7, the JVM has a direct support of modern dynamic (also often called scripting) languages and the Java 8 release delivered even more enhancements into this space. One of the strength of the dynamic languages is that the behavior of the program is defined at runtime, rather than at compile time. Among those languages, Ruby (), Python () and JavaScript () have gained a lot of popularity and are the most widely used ones at the moment. We are going to take a look on how Java scripting API opens a way to integrate those languages into existing Java applications. 2. Dynamic Languages Support As we already know very well, Java is a statically typed language. This means that all typed information for class, its members, method parameters and return values is available at compile time. Using all this details, the Java compiler emits strongly typed byte code which can then be efficiently interpreted by the JVM at runtime. However, dynamic languages perform type checking at runtime, rather than compile time. The challenge of dealing with dynamically languages is how to implement a runtime system that can choose the most appropriate implementation of a method to call after the program has been compiled. For a long time, the JVM had had no special support for dynamically typed languages. But Java 7 release introduced the new invokedynamic instruction that enabled the runtime system (JVM) to customize the linkage between a call site (the place where method is being called) and a method implementation. It really opened a door for effective dynamic languages support and implementations on JVM platform. 3. Scripting API As part of the Java 6 release back in 2006, the new scripting API has been introduced under the javax.script package. This extensible API was designed to plug in mostly any scripting language (which provides the script engine implementation) and run it on JVM platform. Under the hood, the Java scripting API is really small and quite simple. The initial step to begin working with scripting API is to create new instance of the ScriptEngineManager class. ScriptEngineManager provides the capability to discover and retrieve available scripting engines by their names from the running application classpath. Each scripting engine is represented using a respective ScriptEngine implementation and essentially provides the ability to execute the scripts using eval() functions family (which has multiple overloaded versions). Quite a number of popular scripting (dynamic) languages already provide support of the Java scripting API and in the next sections of this tutorial we will see how nice this API works in practice by playing with JavaScript, Groovy, Ruby/JRuby and Python/Jython. 4. JavaScript on JVM It is not by accident that we are going to start our journey with the JavaScript language as it was the one of the first scripting languages supported by the Java standard library scripting API. And also because, by and large, it is the single programming language every web browser understands. In its simplest form, the eval() function executes the script, passed to it as a plain Java string. The script has no state shared with the evaluator (or caller) and is self-contained piece of code. 
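Before asking for a particular engine by name, it can be handy to see which engines are actually discoverable on the classpath. A minimal sketch (the class name here is arbitrary) that enumerates the registered engine factories looks like this:

import java.util.List;

import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class AvailableEngines {
    public static void main(String[] args) {
        final ScriptEngineManager manager = new ScriptEngineManager();
        final List<ScriptEngineFactory> factories = manager.getEngineFactories();

        // Print every discovered engine together with the names it can be looked up by.
        for (final ScriptEngineFactory factory : factories) {
            System.out.println(factory.getEngineName() + " " + factory.getEngineVersion()
                + " -> " + factory.getNames());
        }
    }
}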
However, in typical real-world applications it is quite rare and more often than not some variables or properties are required to be provided to the script in order to perform some meaningful calculations or actions. With that being said, let us take a look on a quick example evaluating real JavaScript function call using simple variable bindings: final ScriptEngineManager factory = new ScriptEngineManager(); final ScriptEngine engine = factory.getEngineByName( "JavaScript" ); final Bindings bindings = engine.createBindings(); bindings.put( "str", "Calling JavaScript" ); bindings.put( "engine", engine ); engine.eval( "print(str + ' from ' + engine.getClass().getSimpleName() )", bindings ); Once executed, the following output will be printed on the console: Calling JavaScript from RhinoScriptEngine For quite a while, Rhino used to be the single JavaScript scripting engine, available on JVM. But the Java 8 release brought a brand-new implementation of JavaScript scripting engine called Nashorn (). From the API standpoint, there are not too many differences however the internal implementation differs significantly, promising much better performance. Here is the same example rewritten to use Nashorn JavaScript engine: final ScriptEngineManager factory = new ScriptEngineManager(); final ScriptEngine engine = factory.getEngineByName( "Nashorn" ); final Bindings bindings = engine.createBindings(); bindings.put( "engine", engine ); engine.eval( "print(str + ' from ' + engine.getClass().getSimpleName() )", bindings ); The following output will be printed on the console (please notice a different script engine implementation this time): Calling JavaScript from NashornScriptEngine Nonetheless, the examples of JavaScript code snippets we have looked at are quite trivial. You could actually evaluate whole JavaScript files using overloaded eval() function call and implement quite sophisticated algorithms, purely in JavaScript. In the next sections we are going to see such examples while exploring other scripting languages. 5. Groovy on JVM Groovy () is one of the most successful dynamic languages for the JVM platform. It is often used side by side with Java, however it also provides the Java scripting API engine implementation and could be used in a similar way as a JavaScript one. Let us make this Groovy example a bit more meaningful and interesting by developing a small standalone script which prints out on the console some details about every book from the collection shared with it by calling Java application. 
The Book class is quite simple and has only two properties, author and title: public class Book { private final String author; private final String title; public Book(final String author, final String title) { this.author = author; this.title = title; } public String getAuthor() { return author; } public String getTitle() { return title; } } The Groovy script (named just script.groovy) uses some nifty language features like closures and string interpolation to output the book properties to the console: books.each { println "Book '$it.title' is written by $it.author" } println "Executed by ${engine.getClass().simpleName}" println "Free memory (bytes): " + Runtime.getRuntime().freeMemory() Now let us execute this Groovy script using Java scripting API and predefined collection of books (surely, all about Groovy): final ScriptEngineManager factory = new ScriptEngineManager(); final ScriptEngine engine = factory.getEngineByName( "Groovy" ); final Collection< Book > books = Arrays.asList( new Book( "Venkat Subramaniam", "Programming Groovy 2" ), new Book( "Ken Kousen", "Making Java Groovy" ) ); final Bindings bindings = engine.createBindings(); bindings.put( "books", books ); bindings.put( "engine", engine ); try( final Reader reader = new InputStreamReader( Book.class.getResourceAsStream("/script.groovy" ) ) ) { engine.eval( reader, bindings ); } Please notice that the Groovy scripting engine has a full access to Java standard library and does not require any addition bindings. To confirm that, the last line from the Groovy script above accesses current runtime environment by calling the Runtime.getRuntime() static method and prints out the amount of free heap available to running JVM (in bytes). The following sample output is going to appear on the console: Book 'Programming Groovy 2' is written by Venkat Subramaniam Book 'Making Java Groovy' is written by Ken Kousen Executed by GroovyScriptEngineImpl Free memory (bytes): 153427528 It has been 10 years since Groovy was introduced. It quickly became very popular because of the innovative language features, similar to Java syntax and great interoperability with existing Java code. It may look like introduction of lambdas and Stream API in Java 8 has made Groovy a bit less appealing choice, however it is still widely used by Java developers. 6. Ruby on JVM Couple of years ago Ruby () was the most popular dynamic language used for web application development. Even though its popularity has somewhat shaded away nowadays, Ruby and its ecosystem brought a lot of innovations into modern web applications development, inspiring the creation and evolution of many other programming languages and frameworks. JRuby () is an implementation of the Ruby programming language for JVM platform. Similarly to Groovy, it also provides great interoperability with existing Java code preserving the beauty of the Ruby language syntax. Let us rewrite the Groovy script from the Groovy on JVM section in Ruby language (with name script.jruby) and evaluate it using the Java scripting API. $books.each do |it| java.lang.System.out.println( "Book '" + it.title + "' is written by " + it.author ) end java.lang.System.out.println( "Executed by " + $engine.getClass().simpleName ) java.lang.System.out.println( "Free memory (bytes): " + java.lang.Runtime.getRuntime().freeMemory().to_s ) The script evaluation codes stays mostly the same, except different scripting engine and the sample books collection, which is now all about Ruby. 
final ScriptEngineManager factory = new ScriptEngineManager(); final ScriptEngine engine = factory.getEngineByName( "jruby" ); final Collection< Book > books = Arrays.asList( new Book( "Sandi Metz", "Practical Object-Oriented Design in Ruby" ), new Book( "Paolo Perrotta", "Metaprogramming Ruby 2" ) ); final Bindings bindings = engine.createBindings(); bindings.put( "books", books ); bindings.put( "engine", engine ); try( final Reader reader = new InputStreamReader( Book.class.getResourceAsStream("/script.jruby" ) ) ) { engine.eval( reader, bindings ); } The following sample output is going to appear on the console: Book 'Practical Object-Oriented Design in Ruby' is written by Sandi Metz Book 'Metaprogramming Ruby 2' is written by Paolo Perrotta Executed by JRubyEngine Free memory (bytes): 142717584 As we can figure out from the JRuby code snippet above, using the classes from standard Java library is a bit verbose and have to be prefixed by package name (there are some tricks to get rid of that but we are not going in such specific details). 7. Python on JVM Our last but not least example is going to showcase the Python () language implementation on JVM platform, which is called Jython (). The Python language has gained a lot of traction recently and its popularity is growing every day. It is widely used by the scientific community and has a large set of libraries and frameworks, ranging from web development to natural language processing. Following the same path as with Ruby, we are going to rewrite the example script from Groovy on JVM section using Python language (with name script.py) and evaluate it using the Java scripting API. from java.lang import Runtime for it in books: print "Book '%s' is written by %s" % (it.title, it.author) print "Executed by " + engine.getClass().simpleName print "Free memory (bytes): " + str( Runtime.getRuntime().freeMemory() ) Let us instantiate the Jython scripting engine and execute the Python script above using already familiar Java scripting API. final ScriptEngineManager factory = new ScriptEngineManager(); final ScriptEngine engine = factory.getEngineByName( "jython" ); final Collection< Book > books = Arrays.asList( new Book( "Mark Lutz", "Learning Python" ), new Book( "Jamie Chan", "Learn Python in One Day and Learn It Well" ) ); final Bindings bindings = engine.createBindings(); bindings.put( "books", books ); bindings.put( "engine", engine ); try( final Reader reader = new InputStreamReader( Book.class.getResourceAsStream("/script.py" ) ) ) { engine.eval( reader, bindings ); } The following sample output will be printed out on the console: Book 'Learning Python' is written by Mark Lutz Book 'Learn Python in One Day and Learn It Well' is written by Jamie Chan Executed by PyScriptEngine Free memory (bytes): 132743352 The power of Python as a programming language is in its simplicity and steep learning curve. With an army of Python developers out there, the ability to integrate the Python scripting language into your Java applications as some kind of extensibility mechanism may sound like an interesting idea. 8. Using Scripting API The Java scripting API is a great way to enrich your Java applications with extensible scripting support, just pick your language. It is also the simplest way to plug in domain-specific languages (DSLs) and allows the business experts to express their intentions in the most convenient manner. 
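Beyond evaluating whole scripts, the same API also lets individual script functions be invoked from Java through the standard Invocable interface. A minimal sketch (the function and class names are arbitrary) looks like this:

import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class InvocableExample {
    public static void main(String[] args) throws Exception {
        final ScriptEngineManager factory = new ScriptEngineManager();
        final ScriptEngine engine = factory.getEngineByName("JavaScript");

        // Define a function inside the script engine ...
        engine.eval("function greet(name) { return 'Hello, ' + name; }");

        // ... and call it directly from Java code.
        final Invocable invocable = (Invocable) engine;
        final Object result = invocable.invokeFunction("greet", "world");
        System.out.println(result);
    }
}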
The latest changes in the JVM itself (please see the Dynamic Languages Support section) made it a much friendlier runtime platform for different dynamic (scripting) language implementations. No doubt, more and more scripting language engines will be available in the future, opening the door to seamless integration with new and existing Java applications. 9. What's next Beginning from this part we are really starting the discussions about advanced concepts of Java as a language and the JVM as an excellent runtime execution platform. In the next part of the tutorial we are going to look at the Java Compiler API and the Java Compiler Tree API to learn how to manipulate Java sources at runtime. 10. Download Code This was a lesson on Dynamic Language Support, part 12 of the Advanced Java course. You may download the source code here: advanced-java-part-12
https://www.javacodegeeks.com/2015/09/dynamic-languages-support.html
CC-MAIN-2020-16
en
refinedweb
Panel Panel Panel Panel Class Definition public : class Panel : FrameworkElement, IPanel public class Panel : FrameworkElement, IPanel Public Class Panel Inherits FrameworkElement Implements IPanel // This API is not available in Javascript. - Inheritance - - Attributes - Inherited Members Inherited properties Inherited events Inherited methods Constructors Properties Gets or sets a Brush that fills the panel content area. public : Brush Background { get; set; } public Brush Background { get; set; } Public ReadWrite Property Background As Brush // This API is not available in Javascript. <panel Background="{StaticResource resourceName}"/> - Identifies the Background dependency property. public : static DependencyProperty BackgroundProperty { get; } public static DependencyProperty BackgroundProperty { get; } Public Static ReadOnly Property BackgroundProperty As DependencyProperty // This API is not available in Javascript. The identifier for the Background dependency property. Gets the collection of child elements of the panel. public : UIElementCollection Children { get; } public UIElementCollection Children { get; } Public ReadOnly Property Children As UIElementCollection // This API is not available in Javascript. <panel> oneOrMoreUIElements </panel> The collection of child objects. The default is an empty collection. Gets or sets the collection of Transition style elements that apply to child content of a Panel subclass. public : TransitionCollection ChildrenTransitions { get; set; } public TransitionCollection ChildrenTransitions { get; set; } Public ReadWrite Property ChildrenTransitions As TransitionCollection // This API is not available in Javascript. <panel> <panel.ChildTransitions> <TransitionCollection> oneOrMoreTransitions </TransitionCollection> </panel.ChildTransitions> </panel> XAML syntax guide. Transition animations play a particular role in UI design of your app. The basic idea is that when there is a change or transition, the animation draws the attention of the user to the change. - See Also - ChildrenTransitionsProperty ChildrenTransitionsProperty ChildrenTransitionsProperty ChildrenTransitionsProperty Identifies the ChildrenTransitions dependency property. public : static DependencyProperty ChildrenTransitionsProperty { get; } public static DependencyProperty ChildrenTransitionsProperty { get; } Public Static ReadOnly Property ChildrenTransitionsProperty As DependencyProperty // This API is not available in Javascript. The identifier for the ChildrenTransitions dependency property. Gets a value that indicates whether this Panel is a container for UI items that are generated by an ItemsControl. public : PlatForm::Boolean IsItemsHost { get; } public bool IsItemsHost { get; } Public ReadOnly Property IsItemsHost As bool // This API is not available in Javascript. - Value - PlatForm::Boolean bool bool bool true if this instance of Panel is an items host; otherwise, false. The default is false. Remarks IsItemsHost is a calculated property where the value results from the system checking the parents of the Panel for an ItemsControl implementation. If one exists, then the value is true. In previous frameworks this property was settable. It's not settable in the Windows Runtime, and there should be no need to set it because the system has the calculation behavior. If you want a different relationship between a panel and a parent items control, just create it that way in your XAML control compositing. Identifies the IsItemsHost dependency property. 
public : static DependencyProperty IsItemsHostProperty { get; } public static DependencyProperty IsItemsHostProperty { get; } Public Static ReadOnly Property IsItemsHostProperty As DependencyProperty // This API is not available in Javascript. The identifier for the IsItemsHost dependency property.
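Because Panel itself is a base class that is normally used through one of its derived panels such as StackPanel or Grid, a short code-behind sketch of the properties described above (the child controls and brush color here are arbitrary) might look like this:

// C# sketch: populate a derived panel and set the Background described above.
var panel = new Windows.UI.Xaml.Controls.StackPanel();
panel.Background = new Windows.UI.Xaml.Media.SolidColorBrush(Windows.UI.Colors.LightGray);
panel.Children.Add(new Windows.UI.Xaml.Controls.TextBlock { Text = "First child" });
panel.Children.Add(new Windows.UI.Xaml.Controls.Button { Content = "Second child" });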
https://docs.microsoft.com/en-us/uwp/api/windows.ui.xaml.controls.panel
CC-MAIN-2017-43
en
refinedweb
The Windows 10 Technical Preview brings key advances to Chakra, the JavaScript engine that powers Internet Explorer and store based Web apps across a whole range of Windows devices – phones, tablets, 2-in-1’s, PC’s and Xbox. As with all previous releases of Chakra in IE9, IE10 and IE11, this release is a significant step forward to create a JavaScript engine that is highly interoperable, spec compliant, secure and delivers great performance. Chakra now has a highly streamlined execution pipeline to deliver faster startup, supports various new and augmented optimizations in Chakra’s Just-in-Time (JIT) compiler to increase script execution throughput, and has an enhanced Garbage Collection (GC) subsystem to deliver better UI responsiveness for apps and sites. This post details some of these key performance improvements. Chakra’s Multi-tiered Pipeline: Historical background Since its inception in IE9, Chakra has supported a multi-tiered architecture – one which utilizes an interpreter for very fast startup, a parallel JIT compiler to generate highly optimized code for high throughput speeds, and a concurrent background GC to reduce pauses and deliver great UI responsiveness for apps and sites. Once the JavaScript source code for an app or site hits the JavaScript subsystem, Chakra performs a quick parse pass to check for syntax errors. After that, all other work in Chakra happens on an as-needed-per-function basis. Whenever possible, Chakra defers the parsing and generation of an abstract syntax tree (AST) for functions that are not needed for immediate execution, and pushes work, such as JIT compilation and GC, off the main UI thread, to harness the available power of the underlying hardware while keeping your apps and sites fast and responsive. When a function is executed for the first time, Chakra’s parser creates an AST representation of the function’s source. The AST is then converted to bytecode, which is immediately executed by Chakra’s interpreter. While the interpreter is executing the bytecode, it collects data such as type information and invocation counts to create a profile of the functions being executed. This profile data is used to generate highly optimized machine code (a.k.a. JIT’ed code) as a part of the JIT compilation of the function. When Chakra notices that a function or loop-body is being invoked multiple times in the interpreter, it queues up the function in Chakra’s background JIT compiler pipeline to generate optimized JIT’ed code for the function. Once the JIT’ed code is ready, Chakra replaces the function or loop entry points such that subsequent calls to the function or the loop start executing the faster JIT’ed code instead of continuing to execute the bytecode via the interpreter. Chakra. To strike a balance between the amounts of time spent JIT’ing the code vs. the memory footprint of the process, instead of JIT compiling a function every time a bailout happens, Chakra utilizes the stored JIT’ed code for a function or loop body until the time bailouts become excessive and exceed a specific threshold, which forces the code to be re-JIT’ed and the old JIT code to be discarded. Figure 1 – Chakra’s JavaScript execution pipeline in IE11 Improved Startup Performance: Streamlined execution pipeline Simple JIT: A new JIT compiling tier. Once the optimized JIT code is generated, Chakra then switches over code execution from the simple JIT’ed code version to the fully optimized JIT’ed code version. 
The other inherent advantage of having a Simple JIT tier is that in case a bailout happens, the function execution can utilize the faster switchover from interpreter to Simple JIT, till the time the fully optimized re-JIT’ed code is available. The Simple JIT compiler is essentially a less optimizing version of Chakra’s Full JIT compiler. Similar to Chakra’s Full JIT compiler, the Simple JIT compiler also executes on the concurrent background JIT thread, which is now shared between both JIT compilers. One of the key difference between the two JIT execution tiers is that unlike executing optimized JIT code, the simple JIT’ed code execution pipeline continues to collect profile data which is used by the Full JIT compiler to generate optimized JIT’ed code. Figure 2 – Chakra’s new Simple JIT tier Multiple Background JITs: Hardware accelerating your JavaScript Today, the browser and Web applications are used on a multitude of device configurations – be it phones, tablets, 2-in-1s, PCs or Xbox. While some of these device configurations restrict the availability of the hardware resources, applications running on top of beefy systems often fail to utilize the full power of the underlying hardware. Since inception in IE9, Chakra has used one parallel background thread for JIT compilation. Starting with Windows 10 Technical Preview, Chakra is now even more aware of the hardware it is running on. Whenever Chakra determines that it is running on a potentially underutilized hardware, Chakra now has the ability to spawn multiple concurrent background threads for JIT compilation. For cases where more than one concurrent background JIT thread is spawned, Chakra’s JIT compilation payload for both the Simple JIT and the Full JIT is split and queued for compilation across multiple JIT threads. This architectural change to Chakra’s execution pipeline helps reduce the overall JIT compilation latency – in turn making the switch over from the slower interpreted code to a simple or fully optimized version of JIT’ed code substantially faster at times. This change enables the TypeScript compiler to now run up to 30% faster in Chakra. Figure 3 – Simple and full JIT compilation, along with garbage collection is performed on multiple background threads, when available Fast JavaScript Execution: JIT compiler optimizations Previewing Equivalent Object Type Specialization The internal representation of an object’s property layout in Chakra is known as a “Type.” Based on the number of properties and layout of an object, Chakra creates either a Fast Type or a Slower Property Bag Type for each different object layout encountered during script execution. As properties are added to an object, its layout changes and a new type is created to represent the updated object layout. Most objects, which have the exact same property layout, share the same internal Fast Type. Figure 4 – Illustration of Chakra’s internal object types Despite having different property values, objects `o1` and `o2` in the above example share the same type (Type1) because they have the same properties in the same order, while objects `o3` and ‘o4’ have a different types (Type2 and Type3 respectively) because their layout is not exactly similar to that of `o1` or `o2`. To improve the performance of repeat property lookups for an internal Fast Type at a given call site, Chakra creates inline caches for the Fast Type to associate a property name with its associated slot in the layout. 
This enables Chakra to directly access the property slot, when a known object type comes repetitively at a call site. While executing code, if Chakra encounters an object of a different type than what is stored in the inline cache, an inline cache “miss” occurs. When a monomorphic inline cache (one which stores info for only a single type) miss occurs, Chakra needs to find the location of the property by accessing a property dictionary on the new type. This path is slower than getting the location from the inline cache when a match occurs. In IE11, Chakra delivered several type system enhancements, including the ability to create polymorphic inline caches for a given property access. Polymorphic caches provide the ability to store the type information of more than one Fast Type at a given call site, such that if multiple object types come repetitively to a call site, they continue to perform fast by utilizing the property slot information from the inlined cache for that type. The code snippet below is a simplified example that shows polymorphic inline caches in action. Despite the speedup provided by polymorphic inline caches for multiple types, from a performance perspective, polymorphic caches are somewhat slower than monomorphic (or a single) type cache, as the compiler needs to do a hash lookup for a type match for every access. In Windows 10 Technical Preview, Chakra introduces a new JIT optimization called “Equivalent Object Type Specialization,” which builds on top of “Object Type Specialization” that Chakra has supported since IE10. Object Type Specialization allows the JIT to eliminate redundant type checks against the inline cache when there are multiple property accesses to the same object. Instead of checking the cache for each access, the type is checked only for the first one. If it does not match, a bailout occurs. If it does match, Chakra does not check the type for other accesses as long as it can prove that the type of the object can’t be changed between the accesses. This enables properties to be accessed directly from the slot location that was stored in the profile data for the given type. Equivalent Object Type Specialization extends this concept to multiple types and enables Chakra to directly access property values from the slots, as long as the relative slot location of the properties protected by the given type check matches for all the given types. The following example showcases how this optimization kicks in for the same code sample as above, but improves the performance of such coding patterns by over 20%. Code Inlining Enhancements One of the key optimizations supported by JIT compilers is function inlining. Function inlining occurs when the function body of a called function is inserted, or inlined, into the caller’s body as if it was part of the caller’s source, thereby saving the overhead of function invocation and return (register saving and restore). For dynamic languages like JavaScript, inlining needs to verify that the correct function was inlined, which requires additional tracking to preserve the parameters and allow for stack walks in case code in the inlinee uses .caller or .argument. In Windows 10 Technical Preview, Chakra has eliminated the inlining overhead for most cases by using static data to avoid the dynamic overhead. This provides up to 10% performance boost in certain cases and makes the code generated by Chakra’s optimizing JIT compiler comparable to that of manually inlined code in terms of code execution performance. 
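The samples referenced in the preceding paragraphs were embedded as images in the original post; an invented sketch of the polymorphic property access they describe (all names are illustrative) follows, before moving on to the inlining examples:

function Point2D(x, y) { this.x = x; this.y = y; }
function Point3D(x, y, z) { this.x = x; this.y = y; this.z = z; }

// getX sees two different object layouts, so the inline cache for o.x becomes polymorphic.
// Because x sits in the same relative slot for both layouts, Equivalent Object Type
// Specialization lets the JIT'ed code do a single equivalent-type check and then load
// both o.x accesses straight from the slot.
function getX(o) {
    return o.x + o.x;
}

for (var i = 0; i < 100000; i++) {
    getX(new Point2D(i, i));
    getX(new Point3D(i, i, i));
}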
The code snippet below is a simplified example of inlining. When the doSomething function is called repetitively, Chakra’s optimizing Full JIT compiler eliminates the call to add by inlining the add function into doSomething. Of course, this inlining doesn’t happen at the JavaScript source level as shown below, but rather happens in the machine code generated by the JIT compiler. JIT compilers need to strike a balance to inlining. Inlining too much increases the memory overhead, in part from pressure on the register allocator as well as JIT compiler itself because a new copy of the inline function needs to be created in each place it is called. Inlining too little could lead to overall slower performance of the code. Chakra uses several heuristics to make inlining decisions based on data points like the bytecode size of a function, location of a function (leaf or non-leaf function) etc. For example, the smaller a function, the better chance it has of being inlined. Inlining ECMAScript5 (ES5) Getter and Setter APIs: Enabling and supporting performance optimizations for the latest additions to JavaScript language specifications (ES5, ES6 and beyond) helps ensure that the new language features are more widely adopted by developers. In IE11, Chakra added support for inlining ES5 property getters. Windows 10 Technical Preview extends this by enabling support for inlining of ES5 property setters. In the simplified example below, you can visualize how the getter and setter property functions of o.x are now inlined by Chakra, boosting their performance by 5-10% in specific cases. call() and apply() inlining: call() and apply() JavaScript methods are used extensively in real world code and JS frameworks like jQuery, at times to create mixins that help with code reuse. The call and apply methods of a target function allow setting the this binding as well as passing it arguments. The call method accepts target function arguments individually, e.g. foo.call(thisArg, arg1, arg2), while apply accepts them as a collection of arguments, e.g. foo.apply(thisArg, [arg1, arg2]). In IE11, Chakra added support for function call/apply inlining. In the simplified example below, the call invocation is converted into a straight (inlined) function call. In Windows 10 Technical Preview, Chakra takes this a step further and now inlines the call/apply target. The simplified example below illustrates this optimization, and in some of the patterns we tested, this optimization improves the performance by over 15%. Auto-typed Array Optimizations Starting with IE11, Chakra’s backing store and heuristics were optimized to treat numeric JavaScript arrays as typed arrays. This enabled JavaScript arrays with either all integers or all floats, to be auto-detected by Chakra and encoded internally as typed arrays. This change enabled Chakra to optimize array loads, avoid integer tagging and avoid boxing of float values for arrays of such type. In Windows 10 Technical Preview, Chakra further enhances the performance of such arrays by adding optimizations to hoist out array bounds checks, speed up internal array memory loads, and length loads. Apart from usage in real Web, many JS utility libraries that loop over arrays like Lo-Dash and Underscore benefit from these optimizations. In specific patterns we tested, Chakra now performs 40% faster when operating on such arrays. The code sample below illustrates the hoisting of bounds checks and length loads that is now done by Chakra, to speed up array operations. 
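The samples promised in this section were likewise lost in extraction. Two hedged sketches follow; doSomething and add are the names the prose itself uses, while the array example is simply an illustrative all-integer loop of the kind the auto-typed-array and bounds-check-hoisting work targets:

// Function inlining: once doSomething is hot, the JIT can splice add's body
// into doSomething's generated code, removing the call/return overhead.
function add(a, b) { return a + b; }
function doSomething(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) {
    sum = add(sum, i); // candidate for inlining
  }
  return sum;
}
console.log(doSomething(100000));

// Auto-typed array plus bounds-check hoisting: data holds only integers, so
// it can be stored like a typed array, and the length load and bounds check
// can be hoisted out of the loop rather than repeated on every access.
function sumArray(arr) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) {
    total += arr[i]; // provably in bounds, so the per-iteration check can go
  }
  return total;
}
var data = [];
for (var k = 0; k < 100000; k++) data.push(k);
console.log(sumArray(data));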
Chakra’s bounds check elimination handles many of the typical array access cases: And the bounds check elimination works outside of loops as well: Better UI Responsiveness: Garbage Collection Improvements Chakra has a mark and sweep garbage collector that supports concurrent and partial collections. In Windows 10 Technical Preview, Chakra continues to build upon the GC improvements in IE11 by pushing even more work to the dedicated background GC thread. In IE11, when a full concurrent GC was initiated, Chakra’s background GC would perform an initial marking pass, rescan to find objects that were modified by main thread execution while the background GC thread was marking, and perform a second marking pass to mark objects found during the rescan. Once the second marking pass was complete, the main thread was stopped and a final rescan and marking pass was performed, followed by a sweep performed mostly by the background GC thread to find unreachable objects and add them back to the allocation pool. Figure 5 – Full concurrent GC life cycle in IE11 In IE11, the final mark pass was performed only on the main thread and could cause delays if there were lots of objects to mark. Those delays contributed to dropped frames or animation stuttering in some cases. In Windows 10 Technical Preview, this final mark pass is now split between the main thread and the dedicated GC thread to reduce the main thread execution pauses even further. With this change, the time that Chakra’s GC spends in the final mark phase on the main thread has reduced up to 48%. Figure 6 – Full concurrent GC life cycle in Windows 10 Technical Preview Summary We are excited to preview the above performance optimizations in Chakra in the Windows 10 Technical Preview. Without making any changes to your app or site code, these optimizations will allow your apps and sites to have a faster startup by utilizing Chakra’s streamlined execution pipeline that now supports a Simple JIT and multiple background JIT threads, have increased execution throughput by utilizing Chakra’s more efficient type, inlining and auto-typed array optimizations, and have improved UI responsiveness due to a more parallelized GC. Given that performance is a never ending pursuit, we remain committed to refining these optimizations, improving Chakra’s performance further and are excited for what’s to come. Stay tuned for more updates and insights as we continue to make progress. In the meanwhile, if you have any feedback on the above or anything related to Chakra, please drop us a note or simply reach out on Twitter @IEDevChat, or on Connect. — John-David Dalton, Gaurav Seth, Louis Lafreniere Chakra Team.
https://blogs.msdn.microsoft.com/ie/2014/10/09/announcing-key-advances-to-javascript-performance-in-windows-10-technical-preview/?replytocom=12363
CC-MAIN-2017-43
en
refinedweb
Sometimes it’s tempting to re-invent the wheel to make a device function exactly the way you want. I am re-visiting the field of homemade electrophysiology equipment, and although I’ve already published a home made electocardiograph (ECG), I wish to revisit that project and make it much more elegant, while also planning for a pulse oximeter, an electroencephalograph (EEG), and an electrogastrogram (EGG). This project is divided into 3 major components: the low-noise microvoltage amplifier, a digital analog to digital converter with PC connectivity, and software to display and analyze the traces. My first challenge is to create that middle step, a device to read voltage (from 0-5V) and send this data to a computer. This project demonstrates a simple solution for the frustrating problem of sending data from a microcontroller to a PC with a USB connection. My solution utilizes a USB FTDI serial-to-usb cable, allowing me to simply put header pins on my device which I can plug into providing the microcontroller-computer link. This avoids the need for soldering surface-mount FTDI chips (which gets expensive if you put one in every project). FTDI cables are inexpensive (about $11 shipped on eBay) and I’ve gotten a lot of mileage out of mine and know I will continue to use it for future projects. If you are interested in MCU/PC communication, consider one of these cables as a rapid development prototyping tool. I’m certainly enjoying mine! It is important to me that my design is minimalistic, inexpensive, and functions natively on Linux and Windows without installing special driver-related software, and can be visualized in real-time using native Python libraries, such that the same code can be executed identically on all operating systems with minimal computer-side configuration. I’d say I succeeded in this effort, and while the project could use some small touches to polish it up, it’s already solid and proven in its usefulness and functionality. This is my final device. It’s reading voltage on a single pin, sending this data to a computer through a USB connection, and custom software (written entirely in Python, designed to be a cross-platform solution) displays the signal in real time. Although it’s capable of recording and displaying 5 channels at the same time, it’s demonstrated displaying only one. Let’s check-out a video of it in action: This 5-channel realtime USB analog sensor, coupled with custom cross-platform open-source software, will serve as the foundation for a slew of electrophysiological experiments, but can also be easily expanded to serve as an inexpensive multichannel digital oscilloscope. While more advanced solutions exist, this has the advantage of being minimally complex (consisting of a single microchip), inexpensive, and easy to build. To the right is my working environment during the development of this project. You can see electronics, my computer, microchips, and coffee, but an intriguingly odd array of immunological posters in the background. I spent a couple weeks camping-out in a molecular biology laboratory here at UF and got a lot of work done, part of which involved diving into electronics again. At the time this photo was taken, I hadn’t worked much at my home workstation. It’s a cool picture, so I’m holding onto it. Below is a simplified description of the circuit schematic that is employed in this project. 
Note that there are 6 ADC (analog to digital converter) inputs on the ATMega48 IC, but for whatever reason I ended-up only hard-coding 5 into the software. Eventually I’ll go back and re-declare this project a 6-channel sensor, but since I don’t have six things to measure at the moment I’m fine keeping it the way it is. RST, SCK, MISO, and MOSI are used to program the microcontroller and do not need to be connected to anything for operation. The max232 was initially used as a level converter to allow the micro-controller to communicate with a PC via the serial port. However, shortly after this project was devised an upgrade was used to allow it to connect via USB. Continue reading for details… Below you can see the circuit breadboarded. The potentiometer (small blue box) simulated an analog input signal. The lower board is my AVR programmer, and is connected to RST, SCK, MISO, MOSI, and GND to allow me to write code on my laptop and program the board. It’s a Fun4DIY.com AVR programmer which can be yours for $11 shipped! I’m not affiliated with their company, but I love that little board. It’s a clone of the AVR ISP MK-II. As you can see, the USB AVR programmer I’m using is supported in Linux. I did all of my development in Ubuntu Linux, writing AVR-GCC (C) code in my favorite Linux code editor Geany, then loaded the code onto the chip with AVRDude. I found a simple way to add USB functionality in a standard, reproducible way that works without requiring the soldering of a SMT FTDI chip, and avoids custom libraries like V-USB which don’t easily have drivers that are supported by major operating systems (Windows) without special software. I understand that the simplest long-term and commercially-logical solution would be to use that SMT chip, but I didn’t feel like dealing with it. Instead, I added header pins which allow me to snap-on a pre-made FTDI USB cable. They’re a bit expensive ($12 on ebay) but all I need is 1 and I can use it in all my projects since it’s a sinch to connect and disconnect. Beside, it supplies power to the target board! It’s supported in Linux and in Windows with established drivers that are shipped with the operating system. It’s a bit of a shortcut, but I like this solution. It also eliminates the need for the max232 chip, since it can sense the voltages outputted by the microcontroller directly. The system works by individually reading the 10-bit ADC pins on the microcontroller (providing values from 0-1024 to represent voltage from 0-5V or 0-1.1V depending on how the code is written), converting these values to text, and sending them as a string via the serial protocol. The FTDI cable reads these values and transmits them to the PC through a USB connection, which looks like “COM5” on my Windows computer. Values can be seen in any serial terminal program (i.e., hyperterminal), or accessed through Python with the PySerial module. As you can see, I’m getting quite good at home-brewn PCBs. While it would be fantastic to design a board and have it made professionally, this is expensive and takes some time. In my case, I only have a few hours here or there to work on projects. If I have time to design a board, I want it made immediately! I can make this start to finish in about an hour. I use a classic toner transfer method with ferric chloride, and a dremel drill press to create the holes. I haven’t attacked single-layer SMT designs yet, but I can see its convenience, and look forward to giving it a shot before too long. 
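To make the PC side of the pipeline described above concrete (the microcontroller streams comma-separated ADC values over the FTDI cable, and Python reads them with PySerial), here is a minimal sketch of grabbing and parsing a single line. The port name is an assumption and has to match your own cable; 38400 baud matches the USART setup in the firmware shown later in this post:

import serial  # PySerial

# 'COM5' on Windows or '/dev/ttyUSB0' on Linux; adjust for your hardware.
ser = serial.Serial('/dev/ttyUSB0', 38400, timeout=1)
raw = ser.readline().decode('ascii', 'ignore').strip()  # e.g. "512,0,1023,300,7,"
values = [int(v) for v in raw.split(',') if v]          # one 10-bit reading per channel
print(values)
ser.close()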
Here’s the final board ready for digitally reporting analog voltages. You can see 3 small headers on the far left and 2 at the top of the chip. These are for RST, SCK, MISO, MOSI, and GND for programming the chip. Once it’s programmed, it doesn’t need to be programmed again. Although I wrote the code for an ATMega48, it works fine on a pin-compatible ATMega8 which is pictured here. The connector at the top is that FTDI USB cable, and it supplies power and USB serial connectivity to the board. If you look closely, you can see that modified code has been loaded on this board with a Linux laptop. This thing is an exciting little board, because it has so many possibilities. It could read voltages of a single channel in extremely high speed and send that data continuously, or it could read from many channels and send it at any rate, or even cooler would be to add some bidirectional serial communication capabilities to allow the computer to tell the microcontroller which channels to read and how often to report the values back. There is a lot of potential for this little design, and I’m glad I have it working. Unfortunately I lost the schematics to this device because I formatted the computer that had the Eagle files on it. It should be simple and intuitive enough to be able to design again. The code for the microcontroller and code for the real-time visualization software will be posted below shortly. Below are some videos of this board in use in one form or another: Here is the code that is loaded onto the microcontroller: #define F_CPU 8000000UL #include <avr/io.h> #include <util/delay.h> void readADC(char adcn){ //ADMUX = 0b0100000+adcn; // AVCC ref on ADCn ADMUX = 0b1100000+adcn; // AVCC ref on ADCn ADCSRA |= (1<<ADSC); // reset value while (ADCSRA & (1<<ADSC)) {}; // wait for measurement } int main (void){ DDRD=255; init_usart(); ADCSRA = 0b10000111; //ADC Enable, Manual Trigger, Prescaler ADCSRB = 0; int adcs[8]={0,0,0,0,0,0,0,0}; char i=0; for(;;){ for (i=0;i<8;i++){readADC(i);adcs[i]=ADC>>6;} for (i=0;i<5;i++){sendNum(adcs[i]);send(44);} readADC(0); send(10);// LINE BREAK send(13); //return _delay_ms(3);_delay_ms(5); } } void sendNum(unsigned int num){ char theIntAsString[7]; int i; sprintf(theIntAsString, "%u", num); for (i=0; i < strlen(theIntAsString); i++){ send(theIntAsString[i]); } } void send (unsigned char c){ while((UCSR0A & (1<<UDRE0)) == 0) {} UDR0 = c; } void init_usart () { // ATMEGA48 SETTINGS int BAUD_PRESCALE = 12; UBRR0L = BAUD_PRESCALE; // Load lower 8-bits UBRR0H = (BAUD_PRESCALE >> 8); // Load upper 8-bits UCSR0A = 0; UCSR0B = (1<<RXEN0)|(1<<TXEN0); //rx and tx UCSR0C = (1<<UCSZ01) | (1<<UCSZ00); //We want 8 data bits } Here is the code that runs on the computer, allowing reading and real-time graphing of the serial data. It’s written in Python and has been tested in both Linux and Windows. It requires *NO* non-standard python libraries, making it very easy to distribute. Graphs are drawn (somewhat inefficiently) using lines in TK. Subsequent development went into improving the visualization, and drastic improvements have been made since this code was written, and updated code will be shared shortly. This is functional, so it’s worth sharing. 
import Tkinter, random, time import socket, sys, serial class App: def white(self): self.lines=[] self.lastpos=0 self.c.create_rectangle(0, 0, 800, 512, fill="black") for y in range(0,512,50): self.c.create_line(0, y, 800, y, fill="#333333",dash=(4, 4)) self.c.create_text(5, y-10, fill="#999999", text=str(y*2), anchor="w") for x in range(100,800,100): self.c.create_line(x, 0, x, 512, fill="#333333",dash=(4, 4)) self.c.create_text(x+3, 500-10, fill="#999999", text=str(x/100)+"s", anchor="w") self.lineRedraw=self.c.create_line(0, 800, 0, 0, fill="red") self.lines1text=self.c.create_text(800-3, 10, fill="#00FF00", text=str("TEST"), anchor="e") for x in range(800): self.lines.append(self.c.create_line(x, 0, x, 0, fill="#00FF00")) def addPoint(self,val): self.data[self.xpos]=val self.line1avg+=val if self.xpos%10==0: self.c.itemconfig(self.lines1text,text=str(self.line1avg/10.0)) self.line1avg=0 if self.xpos>0:self.c.coords(self.lines[self.xpos],(self.xpos-1,self.lastpos,self.xpos,val)) if self.xpos<800:self.c.coords(self.lineRedraw,(self.xpos+1,0,self.xpos+1,800)) self.lastpos=val self.xpos+=1 if self.xpos==800: self.xpos=0 self.totalPoints+=800 print "FPS:",self.totalPoints/(time.time()-self.timeStart) t.update() def __init__(self, t): self.xpos=0 self.line1avg=0 self.data=[0]*800 self.c = Tkinter.Canvas(t, width=800, height=512) self.c.pack() self.totalPoints=0 self.white() self.timeStart=time.time() t = Tkinter.Tk() a = App(t) #ser = serial.Serial('COM1', 19200, timeout=1) ser = serial.Serial('/dev/ttyUSB0', 38400, timeout=1) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1) while True: while True: #try to get a reading #print "LISTENING" raw=str(ser.readline()) #print raw raw=raw.replace("n","").replace("r","") raw=raw.split(",") #print raw try: point=(int(raw[0])-200)*2 break except: print "FAIL" pass point=point/2 a.addPoint(point) If you re-create this device of a portion of it, let me know! I’d love to share it on my website. Good luck! 21 thoughts on “Multichannel USB Analog Sensor with ATMega48” This looks like a great platform for sensor experimentation. And the Python realtime display program is beautifully done. Nice work and thanks for posting. I was wondering if you were planning on publishing the schematics for the EKG device you demonstrated in the DIY ecg 2 – prototype 1 video that was based on the AD620? I am working on a very similar prototype that would use the AD620 as well and am just looking for some guidance. Thanks in advance. -Ed Hi, Scott! Great project, I’d love to recreate this. Do you think you could post the full schematics sometime? Also, I noticed that the microcontroller code is incomplete, could you post the full code? 🙂 Thanks! –Rob This looks awesome! Cant wait for you to share the updated code 🙂 Ill try playing with this for now 🙂 Hi! Seems so cool! but… there’s a problem in getting the whole code… it shows up uncomplete… Thanks for noticing that! I added the rest of the code. I’m not sure how it got clipped off 🙂 Fascinating stuff! What sample rate is this setup able to achieve? Nice. But why not just use a PIC18F2550 (or 2455 or 4550)? It has way more code space, more SRAM, and it does USB so you don’t need the FTDI cable or the MAX232. Doesn’t get any more minimalist than that. Keep on with the projects. Cannot get the AVR code to compile. Here is the error I’m getting from the make command in OSX 10.7. Any help? 
Thanks in advance avr-gcc -Wall -Os -DF_CPU=8000000 -mmcu=atmega8 -c main.c -o main.o main.c: In function ‘main’: main.c:13:9: warning: implicit declaration of function ‘init_usart’ main.c:15:5: error: ‘ADCSRB’ undeclared (first use in this function) main.c:15:5: note: each undeclared identifier is reported only once for each function it appears in main.c:21:17: warning: array subscript has type ‘char’ main.c:22:17: warning: implicit declaration of function ‘sendNum’ main.c:22:17: warning: array subscript has type ‘char’ main.c:22:17: warning: implicit declaration of function ‘send’ main.c: At top level: main.c:30:6: warning: conflicting types for ‘sendNum’ main.c:22:35: note: previous implicit declaration of ‘sendNum’ was here main.c: In function ‘sendNum’: main.c:33:9: warning: implicit declaration of function ‘sprintf’ main.c:33:9: warning: incompatible implicit declaration of built-in function ‘sprintf’ main.c:34:9: warning: implicit declaration of function ‘strlen’ main.c:34:23: warning: incompatible implicit declaration of built-in function ‘strlen’ main.c: At top level: main.c:40:6: warning: conflicting types for ‘send’ main.c:22:52: note: previous implicit declaration of ‘send’ was here main.c: In function ‘send’: main.c:41:16: error: ‘UCSR0A’ undeclared (first use in this function) main.c:41:29: error: ‘UDRE0’ undeclared (first use in this function) main.c:42:9: error: ‘UDR0’ undeclared (first use in this function) main.c: At top level: main.c:45:6: warning: conflicting types for ‘init_usart’ main.c:13:9: note: previous implicit declaration of ‘init_usart’ was here main.c: In function ‘init_usart’: main.c:48:9: error: ‘UBRR0L’ undeclared (first use in this function) main.c:49:9: error: ‘UBRR0H’ undeclared (first use in this function) main.c:50:9: error: ‘UCSR0A’ undeclared (first use in this function) main.c:51:9: error: ‘UCSR0B’ undeclared (first use in this function) main.c:51:22: error: ‘RXEN0’ undeclared (first use in this function) main.c:51:33: error: ‘TXEN0’ undeclared (first use in this function) main.c:52:9: error: ‘UCSR0C’ undeclared (first use in this function) main.c:52:22: error: ‘UCSZ01’ undeclared (first use in this function) main.c:52:36: error: ‘UCSZ00’ undeclared (first use in this function) make: *** [main.o] Error 1 Sounds like a library problem. Can you blink a led with a 3 line LED_blink.c program? I think its a compiler configuration problem rather than a code problem. Yes I have gotten the blink program from this tutorial running I am still attempting to base the ECG circuit off of the Arduino board but am having some troubles so I thought Id give the true AVR a try. Also I am using the AD627 which I am assuming is similar enough to the 620 to yield the same results. Hi Scott, I found this post very helpful. I recreated the device, but with an ATmega32 on a small AVR development board that I have. I used the serial port connected via a MAX-232 level converter. This I connected to the USB port using a serial-USB adapter. I modified the code on the micro-controller according to the ATmega32. In the Python code I just uncommented the “print ser” and “print raw”, which runs on the PC. Now the problem — When I run the device, the python code shows the “FAIL” message. The ‘print’ lines revealed that the line being read from the serial port is blank. So I guess this means that data are not reaching the serial port. I have checked that the serial-USB adapter is working and also checked that my TXD pin is transmitting data. 
Therefore, I think the problem is because of a baud-rate mismatch between the device and the PC. I noticed that in your code you have declared F_CPU as 8 MHz, but on your device there is no external crystal, nor is there any code to set the internal RC oscillator (which must be 1 MHz) to 8 MHz. So my question is, what clock source is your device using and how did you get an accurate baud rate ? Thanks in advance. Nishchay The clock source actually _is_ the internal oscillator. The internal oscillator runs at 8MHz. Often, by default, the DIV/8 fuse is enabled, causing the clock to be divided by 8 (1MHz). I imagine if you disable the DIV/8 fuse, your device will work. You can then use an AVR baud rate calculator reference page like to help you with the rest. Good luck! I am trying to send out (x,y) coordinates of object in image to microcontroller via USB to Serial cable in Kubuntu version.I am using OpenCV C library for programming purpose.How to interface cable to laptop and make changes in my existing C program. Great project.. I am working on designing ecg system using arm processor. i need to write assembly c language to read and display ecg in the pc. can u suggest me how to do this? Thank You very much , that helped me . I followed your code and successfully implemented on ATMEGA16/32 . I didn’t known about sprintf function and by this post i came to know about that useful function and also a similar function “itoa” . I dont know python programming hence i am not able to compile it successfully ,although i tried to compile by following some tutorials but always stuck at “import serial” , hence i used an alternate named “live-graph” written in java. The IA-2142-U is a Dual 8bit Analog Output module, with up to 8 Digital Inputs and 3 Digital Outputs. The IA-2142-U structure and software control commands are Series-3000 compatible, including the watchdog protection circuit, and user interface extra Led and Jumper. USB Analog The post is very interesting. I think the hardware possibilities have widened now, too. I haven’t had the opportunity to do much beyond demo programs yet with one device I’ve gotten, but I think it might be something you would find useful. This is the Teensy++ 2.0 board from PJRC electronics. It is programmed via USB and offers the capability to use USB in user programs in any of several different modes, including HID. The HID operation is especially interesting since that is supported cross-platform without needing device driver installation. Teensy++ 2.0 website It can be programmed with C or in an Arduino mode. I located this device as something to pair with a Raspberry Pi single-board Linux computer. The Teensy board supplies the few IO and AD capabilities that the Raspberry Pi lacks, and communication can be via USB, simplifying hardware interactions. In terms of your project, I would think this pair of devices would handle the complete set of acquisition, processing, and display tasks, and could be made into a small, portable, battery-driven package. I’ve gotten a UBEC to pair with a 2500MAH lead-acid 12V battery to handle power needs for my work. Scott, looks like the source code is truncated, the Python programs ends at a.addPoint(point), with the two while clause unfinished… No, that’s accurate. One of the while clauses has already exited by the time that addPoint() is called. The other one loops forever. Dear Scott; Thank you for posting your multichannel usb device. You certainly did your homework, and what a beautiful job. 
I am working on somethig quasi-similiar, and need a little advice. My ThunderVolt device will soon have a voltage output, and I want to display that voltage on a PC. I am looking for a programmer for hire to build a program to display my data. I wonder if you would like to make some cash doing this for me. Basically I want to input +5 volts and a ground from a PC or Mac into my device. There the +5 will be dropped across two resistances, one of which will be a precision 47 K ohm resistor in series with the user’s body resistance, which is usually 50 K to several million ohms. A third wire from the USB will tap off the voltage across the 47 K ohm and send it back to the user’s computer. There, the computer will read this voltage, compute the amperage by using the 47 K ohm resistance, and calculate the power level in watts. The computer will display this first reading of the resistor on the screen, with print capability. One of my questions is: Does a PC have sufficient onboard capability so that no additional hardware is needed to determine the voltage from the third usb line? I do not want to have to make an additional hardware attachment onto my device, since there isn’t room in my case box to do it, and the cost would be prohibitive. I just want to plug my device’s usb female into a usb-male-to-male cable and connect that to a usb port on a user’s computer. Then, the user will install my software, grab the two handholds and his/her computer will display a number on the screen. The computer will measure the 47 K ohm resistor’s voltage, calculate the amperage across the resistor, and compute the power across the user’s resistance in micro watts. The computer will use the first initial reading from the 47 K resistor, because that is the most accurate. I am a poor Maine resident trying to make a go of a cottage industry, but I believe I have found a very interesting method of quantifying the vitality of a human being in watts. My pre-zap and post-zap tests show a gain in power of about 6%, depending on how healthy or sick the tester is. I previously used a DVM to do the testing, which is really the same thing but it measures in resistance. Since the body is essentially an electrical charge, I wanted a reading in watts, not ohms. Dr. Albert Szent Gygorgyi, MD Ph.D, and Nobel prize winner wrote a thesis on this subject. It beacame the backstop for my ThunderVolt device, along with Tesla’s radiant energy, Royal Raymond Rife work, William Reich, and Hulda Clark, who make the first practical zapper. My device is built on these five outstanding people. I invite your kind reply. Sincerely yours, Steve Coffman antioxc@gmail.com I know that there are software programs out there that read cpu voltages on a computer, and they don’t need hardware accessories or other installs to work. Hi Scott, This is a great Python code to monitor my arduino uno attach with a geophone~ And it works awesome !!!! Thanks a lot~~ I am trying to make a wireless geophone, it just getting started Jimmy You must log in to post a comment.
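A note for anyone hitting the compiler output quoted a few comments up: the "implicit declaration" warnings come from missing headers and missing forward declarations, and the undeclared ADCSRB/UCSR0A-style registers come from building for a part that does not have them (a plain ATmega8 uses UCSRA/UBRRH/UDR-style names; the ATmega48/88/168 family has the 0-suffixed ones). A hedged patch for the top of main.c, assuming you build with -mmcu=atmega48 or similar, is:

#include <stdio.h>    /* sprintf */
#include <string.h>   /* strlen */
#include <avr/io.h>
#include <util/delay.h>

/* forward declarations so main() can call these before their definitions */
void init_usart(void);
void send(unsigned char c);
void sendNum(unsigned int num);
void readADC(char adcn);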
https://www.swharden.com/wp/2012-06-14-multichannel-usb-analog-sensor-with-atmega48/
CC-MAIN-2017-43
en
refinedweb
In this tutorial we are going to learn how to programmatically change the Theme Accent color in Windows Phone device. In our previous article we have seen how programmatically detect the theme running in the background (). This article is an extension of the previous article where we are going to programmatically detect the accent color and apply it to a control. Accent colors are the background colors that can be selected from a list of available resources that best suits the device as per the convenience. Accent color with themes just change the Font color and it will not affect the Font size or control sizes. There are different Accent colors available at present for Windows Phone and it Keeps increasing on each and every updates which Microsoft provides as shown in the list below. Windows Phone is provided with 10 different colors selection and by default 1 for the manufacturer company can select on his own. So totally 11 Accent colors will be available to select. Let us see how we can implement the Accent color changes in Windows Phone programmatically, Open Visual Studio 2010 and create a new Silverlight for Windows Phone 7 project with a valid project name as shown in the screen below. Now drag and drop few controls that are used to show the output on the accent color changes, since we cant impose this on a control directly we will use the text box to show the output. So drag and drop 3 text blocks with some dummy values as shown in the screen below. XAML Code: <phone:PhoneApplicationPage x: </StackPanel> <!–ContentPanel – place additional content here–> <Grid x: <TextBlock Height="40" Style="{StaticResource PhoneTextAccentStyle}" HorizontalAlignment="Left" Margin="37,44,0,0" Name="textBlock1" Text="" VerticalAlignment="Top" Width="382" /> <TextBlock Height="40" HorizontalAlignment="Left" Margin="37,110,0,0" Name="textBlock2" Text="" VerticalAlignment="Top" Width="382" /> <TextBlock Height="40" Style="{StaticResource PhoneTextAccentStyle}" HorizontalAlignment="Left" Margin="37,176,0,0" Name="textBlock3" Text="" VerticalAlignment="Top" Width="382" /> </Grid> </Grid> </phone:PhoneApplicationPage> In the above XAML code we can see the textblock1 and textblock3 is provided with the style pointing to the PhonetextAccentStyle which indicates it gets the style based on the accent color selected from the settings. Now in the code behind add the below code which gets some data based on the selection, basically the hexcolor of the selected accent as shown below. 
Code Behind: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using Microsoft.Phone.Controls; namespace F5debugHowto41 { public partial class MainPage : PhoneApplicationPage { // Constructor public MainPage() { InitializeComponent(); Color currentAccentColorHex = (Color)Application.Current.Resources["PhoneAccentColor"]; string strHexa = currentAccentColorHex.ToString(); string strRGB = currentAccentColorHex.R.ToString() + ";" + currentAccentColorHex.G.ToString() +";" + currentAccentColorHex.B.ToString(); textBlock1.Text = "Accent Color Hex – " + strHexa; textBlock2.Text = "Accent Apha Channel" + currentAccentColorHex.A.ToString(); textBlock3.Text = "Accent Color RGB – " + strRGB; } } } Screen: We can see the textbox1 and textbox3 is affected as per the accent color, but textbox2 is not affected as this has not been provided with the theme style, now change the theme accent color by going to settings and we can see the changed affected to the textbox1 and textbox3 alone as shown in the screen below. That’s it from this tutorial, so here we have seen how to play around with the Accent colors and how to style the project which can use the device theme effectively to provide a unique design for the Windows phone application development. See you all in the next tutorial until then bye bye and Happy Programming!! {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
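One small, optional addition (not part of the original tutorial): if you wanted the middle TextBlock, which has no accent style in XAML, to follow the theme as well, you could set its Foreground from the built-in PhoneAccentBrush resource in the same constructor:

// after InitializeComponent() in MainPage's constructor
Brush accentBrush = (Brush)Application.Current.Resources["PhoneAccentBrush"];
textBlock2.Foreground = accentBrush;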
https://dzone.com/articles/how-play-theme-accent-color
CC-MAIN-2017-43
en
refinedweb
Declared Overloaded Record Fields (DORF) Thumbnail Sketch This proposal is addressing the narrow issue of namespacing for record field names by allowing more than one record in the same module to share a field name. Specifically the record is under usual H98 namespace and module/qualification control, so that for the record type in an importing module: - Some fields are both getable and setable; - has been prototyped, and methods get and set Record declarations generate a Has instance for each record type/field combination. As well as type arguments for the record and field, there is a third argument for the field's type, which is set at the instance level using equality constraints in a functional-dependencies style. Here is the Has class ( r is the record, fld is the proxy type for the field, t is the fields type), an ... myCust.customer_id ... -- dot notation is sugar for reverse func apply Note that the Has mechanism uses a Proxy as the type 'peg' for a field (this is the wildcard argument to get and set): - The Proxy must be declared once, and is then under regular name control. - The field selector function also must be declared once, using the Proxy. --and set. - Parametric polymorphic fields can be applied in polymorphic contexts, and can be setincluding changing the type of the record. - Higher-ranked polymorphic fields can be applied in polymorphic contexts, but cannot be set. Uses equality constraints on the instance to 'improve' types. Hasuses type family functions to manage type-changing update, which adds complexity -- see Implementer's view. - Multiple fields can be updated in a single expression (using familiar H98 syntax), but this desugars to nested updates, which is inefficient. - Pattern matching and record creation using data constructor prefix to { ... } work as per H98 (using DisambiguateRecordFields and friends). . DORF Full motivation and examples Explained in 5 wiki pages (these proposals are linked but somewhat orthogonal): - No Mono Record Fields (precursor to DORF) - DORF -- Application Programmer's view (this page) - DORF -- Implement. Option Four: Type Punning on the `fieldLabel` q.v. .]
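The page's code blocks (the Has class itself, the fieldLabel declarations, and the worked example) are missing from this copy. As a simplified sketch of the mechanism being described, not the proposal's actual code, the following compiles the core idea: a proxy type per field name, a three-parameter Has class, and an instance whose field-type argument is pinned down with an equality constraint in functional-dependencies style. Type-changing update and the type-family machinery mentioned above are deliberately left out, and every name here is illustrative.

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances,
             FlexibleContexts, TypeFamilies #-}

-- One proxy type per field name, declared once and then under ordinary
-- module/namespace control like any other name.
data Proxy_customer_id = Proxy_customer_id

-- r is the record, fld is the field's proxy type, t is the field's type.
class Has r fld t where
  get :: r -> fld -> t
  set :: fld -> t -> r -> r

data Customer = Customer { customer_id_ :: Int, custName_ :: String }

-- The (t ~ Int) equality constraint does the type improvement: given the
-- record and the field, the field's type is forced to Int.
instance (t ~ Int) => Has Customer Proxy_customer_id t where
  get r _   = customer_id_ r
  set _ v r = r { customer_id_ = v }

-- The overloaded field selector, declared once per field name.
customer_id :: Has r Proxy_customer_id t => r -> t
customer_id r = get r Proxy_customer_id

main :: IO ()
main = do
  let myCust = Customer 42 "Acme"
  print (customer_id myCust)                          -- what myCust.customer_id sugars to
  print (customer_id (set Proxy_customer_id 7 myCust))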
https://ghc.haskell.org/trac/ghc/wiki/Records/DeclaredOverloadedRecordFields?version=27
CC-MAIN-2017-43
en
refinedweb
Most of my career has been in .Net development and I’m pretty comfortable applying design patterns in C#, but as I’m learning Ruby, I was finding it difficult to figure out how to implement them without creating awkward, hard-to-read code. Recently a local Ruby guru, Nate Klaiber, recommended that I pick up the book Design Patterns in Ruby. Well, he more than recommended it, he actually gave a copy autographed by the author (totally unexpected but very, very appreciated!). He has a thorough review of the book (and many others) on his book review site. As I read through the book, I’m going to do a series of posts showing examples of how each pattern would be applied in both C# and Ruby. (The book has around a dozen patterns so this will probably take several weeks to get through them all). The first one we’re going to work through is the Template pattern, which is arguably one of the easiest patterns to learn. The template pattern involves creating a base class with methods that can be overridden by subclasses. The base class itself has a central method that will call the other methods in the object. In the examples below, we’re going to create two vehicle factory classes, one that creates a vehicle specifically for winter travelling, and another that creates vehicles for summertime cruising. *Note: I’m certainly no Ruby expert (or C# expert for that matter), but am learning here as I go. If you have a better way to do something, please leave a comment! C# Example The template method is all over the .Net framework. A perfect example is ASP.Net’s page lifecycle. You’re given hooks to perform actions during page load, init, etc. Let’s start by creating our Vehicle class. using System.Collections.Generic; namespace DesignPatterns.TemplatePattern { public class Vehicle { public string SteeringWheel { get; set; } public string Tires { get; set; } public string Seats { get; set; } public string Engine { get; set; } public string MakeModel { get; set; } public IList<string> Amenities { get; set; } public Vehicle() { Amenities = new List<string>(); } } } Next we’ll create our base class. Note that CreateVehicle() is the only public method. The rest are only accessible to objects that derive from this class. And some of the “hook” methods have a default implementation while others are declared abstract and must be overridden by the subclass. namespace DesignPatterns.TemplatePattern { public abstract class VehicleFactoryBase { protected Vehicle VehicleUnderConstruction { get; set; } protected virtual void AddSteeringWheel() { VehicleUnderConstruction.SteeringWheel = "Standard Steering Wheel"; } protected virtual void AddTires() { VehicleUnderConstruction.Tires = "All Season Tires"; } protected virtual void AddSeats() { VehicleUnderConstruction.Seats = "Cloth Bucket Seats"; } protected virtual void AddAmenities() { } protected abstract void AddEngine(); protected abstract void AddMakeModel(); public Vehicle CreateVehicle() { VehicleUnderConstruction = new Vehicle(); AddSteeringWheel(); AddTires(); AddSeats(); AddAmenities(); AddEngine(); AddMakeModel(); return VehicleUnderConstruction; } } } Now we’re ready to create our first implementation, a vehicle factory that creates vehicles designed for living in the snow belt. 
namespace DesignPatterns.TemplatePattern { public class WinterVehicleFactory : VehicleFactoryBase { protected override void AddTires() { VehicleUnderConstruction.Tires = "Snow Tires"; } protected override void AddAmenities() { VehicleUnderConstruction.Amenities.Add("4 Wheel Drive"); VehicleUnderConstruction.Amenities.Add("Snow Plow"); } protected override void AddEngine() { VehicleUnderConstruction.Engine = "4.7L V-8"; } protected override void AddMakeModel() { VehicleUnderConstruction.MakeModel = "Jeep Grand Cherokee"; } } } Our second implementation will be a vehicle factory that creates our summer cruiser. namespace DesignPatterns.TemplatePattern { public class SummerVehicleFactory : VehicleFactoryBase { protected override void AddSeats() { VehicleUnderConstruction.Seats = "Leather Bucket Seats"; } protected override void AddAmenities() { VehicleUnderConstruction.Amenities.Add("Premium Sound System"); } protected override void AddEngine() { VehicleUnderConstruction.Engine = "3.7L V6"; } protected override void AddMakeModel() { VehicleUnderConstruction.MakeModel = "Nissan 370Z Coupe"; } } } And finally, below are the unit tests needed to verify both factories creates their vehicles as expected. using NUnit.Framework; namespace DesignPatterns.TemplatePattern { [TestFixture] public class When_creating_a_winter_vehicle { private Vehicle winterVehicle; [SetUp] public void EstablishContext() { winterVehicle = new WinterVehicleFactory().CreateVehicle(); } [Test] public void Should_create_a_Jeep_Grand_Cherokee() { Assert.That(winterVehicle.MakeModel, Is.EqualTo("Jeep Grand Cherokee")); } [Test] public void Should_have_a_standard_steering_wheel() { Assert.That(winterVehicle.SteeringWheel, Is.EqualTo("Standard Steering Wheel")); } [Test] public void Should_have_snow_tires() { Assert.That(winterVehicle.Tires, Is.EqualTo("Snow Tires")); } [Test] public void Should_have_a_V8_engine() { Assert.That(winterVehicle.Engine, Is.EqualTo("4.7L V-8")); } [Test] public void Should_have_standard_cloth_seats() { Assert.That(winterVehicle.Seats, Is.EqualTo("Cloth Bucket Seats")); } [Test] public void Should_have_4_wheel_drive() { Assert.That(winterVehicle.Amenities.Contains("4 Wheel Drive")); } [Test] public void Should_have_a_snow_plow() { Assert.That(winterVehicle.Amenities.Contains("Snow Plow")); } [Test] public void Should_not_have_any_other_amenities() { Assert.That(winterVehicle.Amenities.Count, Is.EqualTo(2)); } } } using NUnit.Framework; namespace DesignPatterns.TemplatePattern { [TestFixture] public class When_creating_a_summer_vehicle { private Vehicle summerVehicle; [SetUp] public void EstablishContext() { summerVehicle = new SummerVehicleFactory().CreateVehicle(); } [Test] public void Should_create_a_Nissan_370Z() { Assert.That(summerVehicle.MakeModel, Is.EqualTo("Nissan 370Z Coupe")); } [Test] public void Should_have_a_standard_steering_wheel() { Assert.That(summerVehicle.SteeringWheel, Is.EqualTo("Standard Steering Wheel")); } [Test] public void Should_have_all_season_tires() { Assert.That(summerVehicle.Tires, Is.EqualTo("All Season Tires")); } [Test] public void Should_have_a_V6_engine() { Assert.That(summerVehicle.Engine, Is.EqualTo("3.7L V6")); } [Test] public void Should_have_leather_seats() { Assert.That(summerVehicle.Seats, Is.EqualTo("Leather Bucket Seats")); } [Test] public void Should_have_a_premium_sound_system() { Assert.That(summerVehicle.Amenities.Contains("Premium Sound System")); } [Test] public void Should_not_have_any_other_amenities() { Assert.That(summerVehicle.Amenities.Count, 
Is.EqualTo(1)); } } } Ruby Example Now let’s create the same vehicle class in Ruby. Really impressed with how much smaller this class is! class Vehicle attr_accessor :steering_wheel, :tires, :seats attr_accessor :amenities, :engine, :make_model def initialize @amenities = [] end end Next, we’ll create our base vehicle factory class. It’s worth noting that Ruby doesn’t have an “abstract” keyword. To get the same effect, we’ll raise an exception if an expected method was not overridden in a subclass. require 'lib/Vehicle' class VehicleFactoryBase def create_vehicle @vehicle_under_construction = Vehicle.new add_steering_wheel add_tires add_seats add_amenities add_engine add_make_model @vehicle_under_construction end protected def add_steering_wheel @vehicle_under_construction.steering_wheel = "Standard Steering Wheel" end def add_tires @vehicle_under_construction.tires = "All Season Tires" end def add_seats @vehicle_under_construction.seats = "Cloth Bucket Seats" end def add_amenities end def add_engine raise "subclass must include the logic for setting the engine" end def add_make_model raise "subclass must include the logic for setting the make and model" end end And here’s our first subclass, the winter vehicle factory. require 'lib/Vehicle' require 'lib/VehicleFactoryBase' class WinterVehicleFactory < VehicleFactoryBase def add_tires @vehicle_under_construction.tires = "Snow Tires" end def add_amenities @vehicle_under_construction.amenities.push "4 Wheel Drive" @vehicle_under_construction.amenities.push "Snow Plow" end def add_engine @vehicle_under_construction.engine = "4.7L V-8" end def add_make_model @vehicle_under_construction.make_model = "Jeep Grand Cherokee" end end [/source] <p>Next up is the summer vehicle factory.</p> require 'rubygems' require "spec" require 'lib/SummerVehicleFactory' describe "When creating a summer cruiser" do before(:each) do @summer_cruiser = SummerVehicleFactory.new.create_vehicle end it "should be a Nissan 370Z" do @summer_cruiser.make_model.should == "Nissan 370Z Coupe" end it "should have a standard steering wheel" do @summer_cruiser.steering_wheel.should == "Standard Steering Wheel" end it "should have all season tires" do @summer_cruiser.tires.should == "All Season Tires" end it "should have a V6 engine" do @summer_cruiser.engine.should == "3.7L V6" end it "should have leather seats" do @summer_cruiser.seats.should == "Leather Bucket Seats" end it "should have a premium sound system" do @summer_cruiser.amenities.should include("Premium Sound System") end it "should not have any other amenities" do @summer_cruiser.amenities.length.should == 1 end end In the next post we’ll look at the strategy pattern. Stay tuned! Ruby is ridiculously pretty… I’m assuming there is no “abstract” concept in Ruby, since there is no compiler to complain that you’re missing a method. Instead you would just rely on your tests to execute that method, correct? re: @Kevin It is impressive how much easier the code is on the eyes – especially the unit tests! And yep, you’re absolutely right. Ruby doesn’t really have an “abstract” concept – gotta have those unit tests 🙂 […] of posts comparing design pattern implementations in Ruby and C#, but only finished the first one (Template pattern). Will be continuing this series very soon. I […] […] the previous post of this series, we looked at how the Template pattern is implemented in both Ruby and C#. In this […]
http://www.gembalabs.com/2009/04/24/comparing-design-patterns-in-ruby-and-c-the-template-pattern/
CC-MAIN-2017-22
en
refinedweb
Question: Refer to Problem 12 11 If OhYes was a load fund Refer to Problem 12.11. If OhYes was a load fund with a 2% front-end load, what would be the HPR? Answer to relevant QuestionsYou are considering the purchase of shares in a closed-end mutual fund. The NAV is equal to $22.50 and the latest close is $20.00. Is this fund trading at a premium or a discount? How big is the premium or discount? Briefly discuss holding period return (HPR) and yield as measures of investment return. Are they equivalent? Explain. Briefly describe each of the following return measures for assessing portfolio performance and explain how they are used. a. Sharpe's measure b. Treynor's measure c. Jensen's measure (Jensen's alpha) Describe the 2 items an investor should consider before reaching a decision to sell an investment. Niki Malone’s portfolio earned a return of 11.8% during the year just ended. The portfolio’s standard deviation of return was 14.1%. The risk-free rate is currently 6.2%. During the year, the return on the market ... Post your question
http://www.solutioninn.com/refer-to-problem-1211-if-ohyes-was-a-load-fund
CC-MAIN-2017-22
en
refinedweb
Patent application title: Fixed content storage within a partitioned content platform, with replication Inventors: David B. Pickney (Andover, MA, US) Matthew M. Mcdonald (Quincy, MA, US) Benjamin J. Isherwood (Tewsbury, MA, US) IPC8 Class: AG06F1730FI USPC Class: 707617 Class name: Publication date: 2011-05-05 Patent application number: 20110106757 other tenants. Claims: 1. A storage method operating across a set of distributed locations, wherein at each location a redundant array of independent nodes are networked together to provide a cluster, comprising: logically partitioning a first cluster at a first location into a set of one or more tenants, wherein at least one tenant has associated therewith one or more namespaces, wherein a namespace comprises a collection of data objects; configuring a link between the first cluster and a second cluster; and replicating over the link information associated with a first tenant. 2. The storage method as described in claim 1 wherein the information associated with the first tenant includes one or more associated namespaces and object data associated with those namespaces. 3. The storage method as described in claim 2 wherein the information associated with the first tenant also includes tenant administrative and configuration information. 4. The storage method as described in claim 1 further including the step of: upon a given occurrence associated with the first cluster, redirecting clients of the first cluster to the second cluster. 5. The storage method as described in claim 1 further including providing access to the information in the second cluster on a read-only basis. 6. The storage method as described in claim 1 further including adding a second tenant to the link. 7. The storage method as described in claim 6 further including replicating over the link information associated with the second tenant without impairing a replication metric associated with replication of information associated with the first tenant. 8. A storage method operating across a set of distributed locations, wherein at each location a redundant array of independent nodes are networked together to provide a cluster, comprising: logically partitioning a first cluster at a first location into a set of one or more tenants, wherein a first tenant has associated therewith one or more namespaces, wherein a namespace comprises a collection of data objects; configuring a link between the first cluster and each of second and third clusters; and replicating over each link information associated with a first tenant. 9. The storage method as described in claim 8 wherein a link between the first cluster and the second cluster has a given replication link characteristic that differs from the characteristic of the link between the first cluster and the third cluster. 10. The storage method as described in claim 9 wherein the given replication link characteristic is a quality-of-service (QoS) or a function implemented across the link. 11. The storage method as described in claim 8 wherein the information associated with the first tenant includes one or more associated namespaces and object data associated with those namespaces. 12. The storage method as described in claim 11 wherein the information associated with the first tenant also includes tenant administrative and configuration information. 13. 
The storage method as described in claim 8 further including the step of: upon a given occurrence associated with the first cluster, redirecting clients of the first cluster to the second cluster or the third cluster. 14. The storage method as described in claim 8 further including providing access to the information in the second cluster or the third cluster on a read-only basis. 15. The storage method as described in claim 8 further including adding a second tenant to the link. 16. The storage method as described in claim 15 further including replicating over the link information associated with the second tenant without impairing a replication metric associated with replication of information associated with the first tenant.." BACKGROUND OF THE INVENTION [0005] 1. Technical Field [0006] The present invention relates generally to techniques for highly available, reliable, and persistent data storage in a distributed computer network. [0007] 2. Description of the Related Art [000 [0009] replicating data for other tenants. [0010]11] FIG. 1 is a simplified block diagram of a fixed content storage archive in which the present invention may be implemented; [0012] FIG. 2 is a simplified representation of a redundant array of independent nodes each of which is symmetric and supports an archive cluster application according to the present invention; [0013] FIG. 3 is a high level representation of the various components of the archive cluster application executing on a given node; [0014] FIG. 4 illustrates how a cluster is partitioned according to the techniques described herein; [0015] FIG. 5 illustrates an Overview page of a tenant administrator console; [0016] FIG. 6 illustrates a Namespace page of the tenant administrator console; [0017] FIG. 7 illustrates a Create Namespace container page of the tenant administrator console; [0018] FIG. 8 illustrates a Namespace Overview container page for a given namespace; [0019] FIG. 9 illustrates a Policies container page for the given namespace by which the administrator can configure a given policy; [0020] FIG. 10 illustrates how an administrator enables versioning for the namespace; [0021] FIG. 11 illustrates how an administrator enables a disposition service for the namespace; [0022] FIG. 12 illustrates how an administrator enables a privileged delete option for the namespace; [0023] FIG. 13 illustrates how an administrator enables retention classes for the namespace; [0024] FIG. 14 illustrates a Replication tab for the tenant; [0025] FIG. 15 illustrates one of the Namespaces in the Replication tab showing the graphs and statistics for the replication of that namespace; [0026] FIG. 16 illustrates how content is replicated to one or more remote archive sites to facilitate archival-based business continuity and/or disaster recovery; [0027] FIG. 17 represents how an administrator can create links between clusters to facilitate object level replication; and [0028] FIG. 18 illustrates how tenant data is replicated according to the subject matter of this disclosure. DETAILED DESCRIPTION OF THE INVENTION [0029]43] The content platform also may implement a replication scheme such as described in Ser. No. 11/936,317, filed Nov. 7, 2007, the disclosure of which is incorporated by reference. According to that disclosure, a cluster recovery process is implemented across a set of distributed archives, where each individual archive is a storage cluster of preferably symmetric nodes. 
Each node of a cluster typically executes an instance of an application (as described above) that provides object-based storage of fixed content data and associated metadata. According to the storage method, an association or "link" between a first cluster and a second cluster is first established to facilitate replication. The first cluster is sometimes referred to as a "primary" whereas the "second" cluster is sometimes referred to as a "replica." Once the link is made, the first cluster's fixed content data and metadata are then replicated from the first cluster to the second cluster, preferably in a continuous manner. Upon a failure of the first cluster, however, a failover operation occurs, and clients of the first cluster are redirected to the second cluster. Upon repair or replacement of the first cluster (a "restore"), the repaired or replaced first cluster resumes authority for servicing the clients of the first cluster. [0044] FIG. 16 illustrates a Primary cluster 1600 together with a three Replica clusters 1602 (one located in Wabasha, USA, one located in Luton, UK, and one located in Kyoto, JP). Typically, clusters are located in different geographic locations, although this is not a limitation or requirement. Content is replicated (through replication process) from the primary cluster (PC) 1600 to each replica cluster (RC) 1602 enabling business continuity and disaster recovery. Thus, in the event of an outage, client applications can failover to the replica cluster minimizing system downtime. Restore functionality (a recovery process) provides rapid re-population of a new primary cluster, which may be either the re-built/re-stored original primary cluster, or an entirely new primary cluster. [0045] FIG. 17 illustrates an administration console graphical user interface (GUI) 1700 that allows creation of replication Links between clusters. As noted above, a link enables a source namespace to be replicated to a specified target cluster. The link may be configured with one or more options. Thus, a digital signature option may be selected to guarantee authenticity of the link. A compression option may be selected to enable data compression across the link to minimize WAN bandwidth requirements. An encryption option may be selected to enable encryption (e.g., SSL) to be used if the link needs to be secured, which may be the case if the link encompasses a public network (such as the Internet). In addition, a scheduling option enables selection of when replication should take place, and how aggressive the replication should be. These options preferably are each configured by the Administrator. Preferably, archive objects are replicated from a primary cluster to one or more replica clusters in a secure manner. [0046] Preferably, replication is tracked at the object level, which includes fixed content data, metadata, and policy information (e.g., shredding attributes, and the like). The GUI may also expose metrics that include, for example, replication progress in terms of number of objects and capacity. Any archive may include a machine through which an Administrator can configure an archive for replication, recovery and fail-back. [0047] A content platform such as described above may implement a data protection level (DPL) scheme such as described in Ser. No. 11/675,224, filed Feb. 15, 2007, the disclosure of which is incorporated by reference. 
[0048]49] The following terminology applies to the subject matter that is now described: [0050] Data Account (DA): an authenticated account that provides access to one or more namespaces. The account has a separate set of CRUD (create, read, update, and delete) privileges for each namespace that it can access. [0051]52] Authenticated Namespace (ANS): a namespace (preferably HTTP-only) that requires authenticated data access. [0053]54] Tenant: a grouping of namespace(s) and possibly other subtenants. [0055] Top-Level Tenant (TLT): a tenant which has no parent tenant, e.g. an enterprise. [0056] Subtenant: a tenant whose parent is another tenant; e.g. the enterprise's financing department. [0057] Default Tenant: the top-level tenant that contains only the default namespace. [0058] Cluster: a physical archive instance, such as described above. [0059]60]61]62]63]64]65]66]67] <namespace-name>.<tenant-name>.<cluster-domain-suffix> These names comply with conventional domain name system (DNS) standards. As noted above, tenant names on a cluster must not collide. [0068]69]70]71]72]73]74]75] on a namespace-by-namespace basis, which is highly desirable and provides increased management flexibility. [0076]77]78] If replication is enabled for the tenant, the administrator console will provide a display of graphs and statistics, preferably on a per-namespace level. FIG. 14 illustrates a representative UI page 1400 when Replication is selected from the main Services tab. The page lists each Namespace that has been configured for the tenant, such as Namespace 1402. Data in a column 1404 indicates the up-to-date status of the replica (meaning all objects are replicated as of this date and time), and data in a "Day's Behind" graphic 1406 indicates how far behind the replica is relative to the primary. When the user selects any one of the identified Namespaces, such as 1402, a Namespace Overview container 1500 expands as shown in FIG. 15, exposing more detailed graphs and statistics about the selected namespace. The graph 1504 illustrates the data transfer rate and chart 1506 illustrates summary historical data. [0079] Data at a tenant level is replicated for tenants configured for replication. In the event of a disaster or other issue at the primary cluster, all of the archive configuration settings can be restored for the tenant from the replica. [0080] FIG. 18 illustrates the basic concept of replication as applied to tenants and namespaces. As illustrated, there is a primary cluster 1800 and at least one replica 1802. The link 1804 has been established between the clusters. Primary cluster 1800 has two configured tenants T1 and T2, and tenant T1 has two namespaces NS1 and NS2, which have been configured in the manner previously described and illustrated. Tenant T2 has one namespace, NS3. Cluster 1802 includes configured tenant T3, having two namespaces NS4 and NS5, as shown. According to this disclosure, the tenant T1 and its associated namespaces has been replicated over link 1804 that has been established between cluster 1800 and cluster 1802. The replica is indicated in dashed form in cluster 1802. The tenant T1 is read/write on cluster 1800, but read-only on the replica cluster 1802. Preferably, replication over the link 1804 is uni-directional, and information is replicated in one direction at a time (i.e. the entire link either is replicating from source to replica, or restoring in the opposite direction). 
Generally, the replication link replicates at least some tenant information from the source cluster to the replica cluster. [0081] The information to be replicated includes accounts for tenants and their associated namespaces. In a typical implementation, the following items may be selected for replication over a particular replication link: any number of top level directories from the default namespace, and any number of top level tenants. If a tenant is selected, preferably all associated information will be replicated, namely: tenant admin accounts, tenant data accounts, tenant admin logs, tenant configuration information (including their retention class definitions), all associated namespaces and their data, and all associated namespace logs (including compliance logs), and so forth. Preferably, replication metrics are provided (in the UI) for both the replication link and individual namespaces within it. One or more of these metrics may be available for the replication link: total bytes replicated (ever), total bytes restored (ever), number of operations replicated (ever), number of operations restored (ever), current bytes per second transfer rate, current operations per second rate, current errors per second rate, greatest replication backlog time (for whichever namespace is farthest behind), average replication backlog time (averaged across all replicated namespaces), and smallest replication backlog time. Preferably, one or more of the following metrics are viewable for each replicated namespace: total bytes replicated (ever), total bytes restored (ever), current bytes per second transfer rate, current operations per second rate, current errors per second rate, and replication backlog time. [0082] There may be one or more replication links per tenant or namespace. A particular cluster may have one or more outbound links (to one or more replica clusters), and one or more inbound links (from one or more replica clusters). If multiple links are supported, one tenant can be replicated to different clusters, and replication links with different characteristics or quality-of-service (QoS) may be implemented. In such a multi-link implementation, a tenant can replicate its namespaces over different replication links based on the link characteristics (static or dynamic). [0083] In a typical use case, such as shown in FIG. 18, the replica tenant T1 in cluster 1802 is read-only (although this is not a limitation). As a consequence, no configuration change may be made by any tenant administrator logged into the replica tenant, and no deletes or purges may be made by any data access account to any namespace associated with the replica tenant Likewise, while a data account (a user) with a search privilege may log in to the search UI on the replica and issue a query, however, the user may not do any update action, such as delete, hold, purge, privilege delete, or save the results of the query. [0084] Preferably, there are one or more exceptions to this read-only restriction for system events on the replica, and administrative and data accounts on the replica. System events specific to the replicated tenant and associated namespaces cause writes to administrative logs. Thus, actions such as logging into the replica tenant admin UI are logged on the replica as a system event, and certain events (e.g., irreparable file found) logged in a replica's namespace are also logged in the replica's system events. 
Moreover, if a tenant admin fails a consecutive login check on the replica, his or her account may be disabled on the replica. Similarly, if a data account fails a consecutive authentication check on the replica, the account is disabled on the replica. Because all configuration (including admin and data accounts) preferably is read-only on the replica, the procedure for re-enabling these accounts on the replica preferably is as follows: on the source cluster, disable the account; re-enable the account; and wait for the account to be replicated. The update on the source is then propagated to the replica, and the account on the replica becomes enabled. [0085] To allow for a new tenant to be added to the replication link without stalling the progress of other tenants, the following replication algorithm is implemented. By way of background, a replication algorithm can be visualized as a series of loops (over time, and region) culminating in a query for all objects that changed in a certain time range on a specific region. This approach is modified as follows: loop over each region, performing a certain amount of work, until all objects are replicated within a requested time interval. There is no requirement that each region (or namespace within a region) be synchronized with respect to any lower bound of its change time. A "content collector" (such as a Tenant content collector, an AdminLog content collector) work entity is then defined. The content collector for namespaces preferably is region-based in that it must further subdivide its work among multiple regions of the cluster. The flow control then works as follows:
[0086] While there are intervals left:
[0087] 1. Define the current time interval.
[0088] 2. For each content collector:
[0089] 1. Replicate eligible content for the given time interval.
[0090] 3. If the content collector is region-based:
[0091] 1. For each region:
[0092] 1. For each namespace to be replicated:
[0093] 1. Replicate eligible objects within the time interval.
[0094] The last step (replicating all eligible objects within the time interval) is done up to a "unit of work." Because the number of objects to replicate in a time window may be large, the amount of work done may be limited if desired, e.g., by capping the "amount of work" done for each namespace (e.g., the number of objects replicated, the quantity of bytes replicated, or some other measure). [0095] The approach described above is beneficial because it enables a tenant or namespace to be added to an existing replication link without interfering with the progress of other objects on the link. Tenants and namespaces can be replicated with varying QoS levels, as described, and replication of an individual namespace can be paused without affecting other tenants. Further, by structuring the database appropriately, the replication algorithm also can do fine-grained work among different object types. The technique enables rotation among the namespaces at a fairly granular level so that replication of objects across namespaces appears uniform. [0096] The replication techniques described herein can be expanded to include other types of data (beyond tenant and namespace data), such as administrative and account data, administrative logs, retention classes, database configurations and objects, and the like. [0097] The replication logic preferably processes changes on a region-specific basis within the cluster.
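For purposes of illustration only, the foregoing flow control may be sketched in code; the sketch below is not part of the claimed subject matter, and all type and member names in it are hypothetical.

using System;
using System.Collections.Generic;

// Hypothetical types used only to illustrate the loop structure of paragraphs [0086]-[0094].
class TimeInterval { public DateTime From; public DateTime To; }

abstract class ContentCollector
{
    public virtual bool IsRegionBased { get { return false; } }
    public abstract void ReplicateEligibleContent(TimeInterval interval);
}

class ReplicationDriver
{
    // Cap on the "unit of work" done per namespace per pass (object count, byte count, or similar).
    const int MaxObjectsPerPass = 1000;

    public void Run(Queue<TimeInterval> intervals,
                    IList<ContentCollector> collectors,
                    IList<IList<string>> regions)   // each region holds the namespaces to be replicated
    {
        while (intervals.Count > 0)                        // while there are intervals left
        {
            TimeInterval interval = intervals.Dequeue();   // 1. define the current time interval
            foreach (ContentCollector collector in collectors)
            {
                collector.ReplicateEligibleContent(interval);   // 2. replicate eligible content
                if (!collector.IsRegionBased) continue;
                foreach (IList<string> region in regions)       // 3. region-based collectors subdivide work
                    foreach (string ns in region)
                        ReplicateEligibleObjects(ns, interval, MaxObjectsPerPass);
            }
        }
    }

    void ReplicateEligibleObjects(string ns, TimeInterval interval, int cap)
    {
        // Query the objects changed in this namespace during the interval and replicate
        // at most 'cap' of them, so a newly added tenant cannot stall the other tenants on the link.
    }
}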
The replication logic thus may query and generate changes for the following types of information: tenant and namespace-specific admin logs, tenant admin accounts, tenant data accounts, tenant configuration, and namespace configuration. A query gathers matching records for each of these types of data. [0098]99]. [0100]. [0101]. [0102]. [0103] The described subject matter enables different administrators to perform cluster and tenant management, and to minimize the information that passes between these two areas. [0104] The subject matter is not limited to any particular type of use case. A typical use environment is an enterprise, although the techniques may be implemented by a storage service provider or an entity that operates a storage cloud. [0105]. [0106]07]08] While given components of the system have been described separately, one of ordinary skill will appreciate that some of the functions may be combined or shared in given instructions, program sequences, code portions, and the like. [0109]. [0110] Although the present invention has been described in the context of an archive for "fixed content," this is not a limitation either. The techniques described herein may be applied equally to storage systems that allow append and replace type modifications to the content. [0111] The replication techniques described herein may be extended. Thus, for example, the techniques may be used in the context of network attached storage (NAS) for the purpose of replicating key information from NAS virtual servers or application containers to the content platform. This would allow one NAS system to read the information from another NAS system to enable reconstitution of NAS system configuration data. This approach can also be extended to the objects and files sent to the content platform from the NAS system components. Patent applications by Matthew M. Mcdonald, Quincy, MA US
http://www.faqs.org/patents/app/20110106757
CC-MAIN-2014-41
en
refinedweb
Symbol Handling
Wolfram Language symbols are the ultimate atoms of symbolic data. Every symbol has a unique name, exists in a certain Wolfram Language context or namespace, and can have a variety of types of values and attributes.
Reference
Symbol — the head of a symbol; create a symbol from a name
SymbolName — give the name of a symbol as a string
Context — give the name of the context of a symbol
Names — find a list of symbols with names matching a pattern
NameQ — test whether a string corresponds to the name of any symbol
Remove — completely remove a symbol so its name is no longer recognized
$NewSymbol — a function to be applied to the name of any new symbol
ValueQ — test whether a symbol has any type of value
OwnValues ▪ DownValues ▪ UpValues ▪ Options ▪ Attributes
Information — give information on the values of a symbol
Clear — clear all values associated with a symbol
Save — save values associated with a symbol
Unique — generate a symbol with a unique name
Converting between Expressions and Strings » ToString ▪ ToExpression ▪ ...
http://reference.wolfram.com/language/guide/SymbolHandling.html
CC-MAIN-2014-41
en
refinedweb
07 January 2010 03:38 [Source: ICIS news] SINGAPORE (ICIS news)--Taiwan’s Chinese Petroleum Corp has bought a 30,000-tonne cargo of heavy naphtha at a premium of $20-$25/tonne (€14-17/tonne) to Japan spot quotes for February delivery, traders said on Thursday. Prices were much higher compared with what CPC paid for its term naphtha supply. The refiner secured 675,000 tonnes of full-range naphtha for delivery between April and December at a discount of $4.00-$5.00/tonne. Asian spot naphtha prices are on a rally amid razor-thin supply and robust petrochemical demand. By midday Thursday trade, second half February naphtha prices scaled up to $768.50-$771.50/tonne because of strong crude, versus $758-$759/tonne on Wednesday.
http://www.icis.com/Articles/2010/01/07/9323082/taiwan-cpc-buys-february-naphtha-at-20-25tonne-premium.html
CC-MAIN-2014-41
en
refinedweb
This module provides convenient access to the database classes (Db) of wxWindows. These classes have been donated to the wxWindows library by Remstar International. (Note: these classes are not supported on MacOS X at the time of writing (november 2003)). These database objects support ODBC connections and have been tested with wxWindows on the following databases: Oracle (v7, v8, v8i), Sybase (ASA and ASE), MS SQL Server (v7 - minimal testing), MS Access (97 and 2000), MySQL, DBase (IV, V) (using ODBC emulation), PostgreSQL, INFORMIX, VIRTUOSO, DB2, Interbase, Pervasive SQL . The database functions also work with console applications and do not need to initialize the WXCore libraries. The examples in this document are all based on the pubs database that is available in MS Access 97 and 'comma separated text' format from. We assume that your system is configured in such a way that pubs is the datasource name of this database. (On Windows XP for example, this is done using the /start - settings - control panel - administrative tools - data sources (ODBC)/ menu.) The available data sources on your system can be retrieved using dbGetDataSources. Here is an example from my system: *Main> dbGetDataSources >>= print [("pubs","Microsoft Access Driver (*.mdb)")] Connections are established with the dbWithConnection call. It takes a datasource name, a user name, a password, and a function that is applied to the resulting database connection: dbWithConnection "pubs" "" "" (\db -> ...) (Note that most database operations automatically raise a database exception (DbError) on failure. These exceptions can be caught using catchDbError.) The resulting database (Db) can be queried using dbQuery. The dbQuery call applies a function to each row (DbRow) in the result set. Using calls like dbRowGetValue and dbRowGetString, you can retrieve the values from the result rows. printAuthorNames = do names <- dbWithConnection "pubs" "" "" (\db -> dbQuery db "SELECT au_fname, au_lname FROM authors" (\row -> do fname <- dbRowGetString row "au_fname" lname <- dbRowGetString row "au_lname" return (fname ++ " " ++ lname) )) putStrLn (unlines names) The overloaded function dbRowGetValue can retrieve any kind of database value (DbValue) (except for strings since standard Haskell98 does not support overlapping instances). For most datatypes, there is also a non-overloaded version, like dbRowGetInteger and dbRowGetString. The dbRowGet... functions are also available as dbRowGet...Mb, which returns Nothing when a NULL value is encountered (instead of raising an exception), for example, dbRowGetIntegerMb and dbRowGetStringMb. If necessary, more data types can be supported by defining your own DbValue instances and using dbRowGetValue to retrieve those values. You can use dbRowGetColumnInfo to retrieve column information (ColumnInfo) about a particular column, for example, to retieve the number of decimal digits in a currency value. Complete meta information about a particular data source can be retrieved using dbGetDataSourceInfo, that takes a data source name, user name, and password as arguments, and returns a DbInfo structure: *Main> dbGetDataSourceInfo "pubs" "" "" >>= print catalog: C:\daan\temp\db\pubs2 schema : tables : ... 8: name : authors type : TABLE remarks: columns: 1: name : au_id index : 1 type : VARCHAR size : 12 sqltp : SqlVarChar type id: DbVarChar digits : 0 prec : 0 remarks: Author Key pkey : 0 ptables: [] fkey : 0 ftable : 2: name : au_fname index : 2 type : VARCHAR ... 
Changes to the database can be made using dbExecute. All these actions are done in transaction mode and are only committed when wrapped with a dbTransaction. Execute a SQL query against a database. Takes a function as argument that is applied to every database row (DbRow). The results of these applications are returned as a list. Raises a DbError on failure.
do names <- dbQuery db "SELECT au_fname FROM authors" (\row -> dbRowGetString row "au_fname")
   putStr (unlines names)
Execute a SQL query against a database. Takes a function as argument that is applied to every row in the database. Raises a DbError on failure.
dbQuery_ db "SELECT au_fname FROM authors" (\row ->
  do fname <- dbRowGetString row "au_fname"
     putStrLn fname)
Execute an IO action as a transaction on a particular database. When no exception is raised, the changes to a database are committed. Always use this when using dbExecute statements that update the database.
do dbWithConnection "pubs" "" "" $ \db ->
     dbTransaction db $
       dbExecute db "CREATE TABLE TestTable ( TestField LONG)"
Get the complete meta information of a data source. Takes the data source name, a user id, and password as arguments.
dbGetDataSourceInfo dsn userId password = dbWithConnection dsn userId password dbGetInfo
Automatically raise a database exception when False is returned. You can use this method around basic database methods to conveniently throw Haskell exceptions.
dbHandleExn db $ dbExecSql db "SELECT au_fname FROM authors"
http://hackage.haskell.org/package/wxcore-0.10.1/docs/Graphics-UI-WXCore-Db.html
CC-MAIN-2014-41
en
refinedweb
When coordinating code between a master page, a child page and several controls, it can be very useful to have a listing of when each event is fired. There are a lot of lists on the web but they are rarely complete, and I have yet to find the code on how the lists were created. In the end, I wrote my own testing framework, which is the topic of this article. It turns out to be very simple: override methods to make a note of what is happening, then continue. I did this by creating custom classes to replace the standard Page and MasterPage objects, and by writing "wrapper" web controls. If you are familiar with these techniques or are just looking for a reference, feel free to go to the Results sections at the bottom. This article is targeted towards beginner-ish web programmers who already know the basics of web programming and have a test site where they can experiment. It was written and tested using ASP.NET 2.0, but there's nothing here that should break with any later versions of .NET. I used VB because that is the language I use every day; the code is simple and should translate into C# very easily. The first thing I did was create the TestChild and TestMaster classes, which will replace the regular Page and MasterPage classes. The advantage to putting the code in separate classes is that we can have several different test pages without having to copy the code. They start off looking like this:
Public Class TestChild
    Inherits System.Web.UI.Page
End Class

Public Class TestMaster
    Inherits System.Web.UI.MasterPage
End Class
Next is to override the methods we want to document. All of these overrides look the same: they write some text into the response stream, then call the same method in the base class. Here is a typical example:
Protected Overrides Sub OnLoad(ByVal e As System.EventArgs)
    Response.Write("Child Load<br/>" + vbCrLf)
    MyBase.OnLoad(e)
End Sub
The break tag will put the text on its own line when the page is delivered to the browser, and the carriage return/line feed will put a break in the page's source. These just make the text easier to read. The OnLoad override in TestMaster is identical, except that it writes "Master Load". In TestChild, I overrode these methods: LoadControlState, LoadViewState, OnDataBinding, OnInit, OnInitComplete, OnLoadComplete, OnPreInit, OnPreLoad, OnPreRender, OnPreRenderComplete, OnSaveStateComplete, Render, RenderChildren, RenderControl, SaveControlState and SaveViewState. Master pages do not have as many interesting methods. Here is what I overrode in TestMaster: OnInit, OnLoad, OnPreRender, SaveViewState, Render, RenderChildren and RenderControl (these are the overrides that report as "Master ..." in the results below). You may have noticed that I did not override the OnUnload method. This is because the Unload event is fired after the page has been rendered; Response no longer exists so an error gets thrown. To get information on control events, we need to create custom controls that will report back. I chose to write a button and a grid, because the ordering of Click and DataBinding is usually where I go wrong (I'm not the only one to change a grid's data before the data is reloaded, am I?)
Click DataBinding Namespace TestControls Public Class TestButton Inherits System.Web.UI.WebControls.Button Protected Function Response() As HttpResponse Return HttpContext.Current.Response End Function End Class Public Class TestGrid Inherits System.Web.UI.WebControls.GridView Protected Function Response() As HttpResponse Return HttpContext.Current.Response End Function End Class End Namespace Web controls do not have a Response method so I added one; this is just to make the code a bit cleaner and easier to maintain. By putting the controls in a namespace, we can configure the website to attach a prefix for IntelliSense. I'm not sure if this is actually necessary, but it is useful enough that I always do it. In the web.config file, find the <pages> block inside <system.web>, and add this: web.config <pages> <system.web> <controls> <add tagPrefix="test" namespace="TestControls"/> </controls> As with TestChild and TestMaster, we override the methods we want to document so they write to the response stream and call the underlying method. The only difference is that we use the ID property of the control rather than static text; this way, we can have several controls on a page and know which message belongs to which control. ID Protected Overrides Sub OnPreRender(ByVal e As System.EventArgs) Response.Write(Me.ID + " PreRender<br/>" + vbCrLf) MyBase.OnPreRender(e) End Sub Most of the interesting methods I wanted to document were in both controls: RenderContents In TestButton, I also overrode OnClick and OnCommand. In TestGrid, I overrode OnDataBound. TestButton OnClick OnCommand TestGrid OnDataBound For the grid, let's create a very simple XML data file that we can link to. <?xml version="1.0" encoding="utf-8" ?> <People> <Person firstName="Gregory" lastName="Gadow"/> </People> Now we are ready to actually create some pages. For this demo, I am using a single master page and a single child page. <%@ Master Language="VB" Inherits="TestMaster"%> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" ""> <html xmlns="" > <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <asp:ContentPlaceHolder </form> </body> </html> <%@ Page Language="VB" Inherits="TestChild" MasterPageFile="~/Test.master" Title="Untitled Page" %> <asp:Content </asp:Content> The key here is the Inherits attribute in the <%@ Master %> and <%@ Page %> directives. If you are using the single-page code model (which is what I'm doing here), Inherits tells the compiler to use a class other than the standard one as the page's base; you will need to add the tag yourself. If you are using separate code-behind files, the tag is inserted automatically and will point to your code-behind; you do NOT want to change it there. Instead, go to the code-behind file and change it to be based on your custom classes. Inherits <%@ Master %> <%@ Page %> We are ready for our first test. Open a browser and point it to the child page you just created. Child PreInit Master Init Child Init Child InitComplete Child PreLoad Child Load Master Load Child LoadComplete Child PreRender Master PreRender Child PreRenderComplete Child SaveViewState Master SaveViewState Child SaveStateComplete Child RenderControl Child Render Child RenderChildren Master RenderControl Master Render Master RenderChildren There you are: the events of a master and child page, documented in the order they occurred. If you look at the browser source, you will notice that the text occurs before even the DOCTYPE declaration. 
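Incidentally, if you would rather work in C#, the wrapper pattern translates directly. As a rough, untested sketch (my own translation, not code from the project download), the TestChild class and its OnLoad override would look something like this:
using System;
using System.Web.UI;

public class TestChildCs : Page
{
    // Same idea as the VB version: note the event in the response stream, then call the base method.
    protected override void OnLoad(EventArgs e)
    {
        Response.Write("Child Load<br/>\r\n");
        base.OnLoad(e);
    }
}
The master page, the controls and the remaining overrides follow the same mechanical translation.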
This makes sense, because the messages were sent to the response stream before the page was rendered. DOCTYPE Now we add some controls to the child page: <%@ Page Language="VB" Inherits="TestChild" MasterPageFile="~/Test.master" Title="Untitled Page" %> <asp:Content <hr /> <h3>Child page controls</h3> <div style="margin-bottom:1em;"> <test:TestGrid <asp:XmlDataSource </div> <test:TestButton </asp:Content> The horizontal rule is just a visual separator from the text and our controls, and the div tag groups the grid and its data source while providing a bit of spacing. The documentation now shows: div Child PreInit Child_Grid Init Child_Button Init Master Init Child Init Child InitComplete Child PreLoad Child Load Master Load Child_Grid Load Child_Button Load Child LoadComplete Child PreRender Master PreRender Child_Grid DataBinding Child_Grid DataBound events for Child_Grid occur before Child_Button because the grid is above the button. If you reverse these controls, you will see that the order of their events also gets reversed. Child_Grid Child_Button There are a few interesting things to note. The grid's DataBinding and DataBound events occur after the master page's PreRender but before the child page's PreRender. The grid saves its control state, but the button does not. And the controls save their view state after the master page does, which saves its view state after the child page. DataBound PreRender Below the line, we see how the controls got rendered. Notice that the grid gets put on the page during RenderChildren while the button is written during Render. Click on the button to create a postback, and look at the events. Child PreInit Child_Grid Init Child_Button Init Master Init Child Init Child InitComplete Child_Grid LoadControlState Child_Grid LoadViewState Child_Button LoadViewState Child PreLoad Child Load Master Load Child_Grid Load Child_Button Load Child_Button Click Child_Button Command Child LoadComplete Child PreRender Master PreRender grid does not bind to any data this time; instead, it reloads the data from the control state. Because this is a postback, there is a view state that can be loaded so the controls do just that. After the button gets loaded, its Click event is processed, then its Command event. The controls are rendered just as they were before. Command There are many more tests that can be done: put controls on the master page above and below the content placeholder, try nested master pages, track different controls. Please experiment, and if you get any interesting results, please post them as comments.
http://www.codeproject.com/Articles/106781/Documenting-the-Life-Cycle-of-an-ASP-Page?fid=1586043&df=90&mpp=10&sort=Position&spc=None&tid=3626606&PageFlow=FixedWidth
CC-MAIN-2014-41
en
refinedweb
Syntax Highlighting You can establish a custom font and color scheme, for more information, see Default syntax highlighting: Code Highlighting When ReSharper detects an error and highlights it with a red curly line or displays an unresolved symbol in red. Hover the pointer over the error to display its description as a tooltip. For more information about the ways to find out why the code is highlighted, see Since ReSharper has its own code inspections, you can specify whether to display them as Solution-Wide Analysis ReSharper does not only analyze errors in the current Visual Basic .NET file, but also inspects the whole solution taking the dependencies between files into account and shows the results of analysis in the Errors in Solution window. For more information, see Inspect This Inspect This is a shortcut to several analysis features. Those are rather powerful and allow you to see how values and method calls flow through you code. The list of available features depends in current context. For more information, see Code Completion Code Completion features help you write code faster by providing a set of items to complete based on surrounding context. For more information, see Three Code Completion features are available in Visual Basic: Symbol Completion ReSharper suggests namespaces, types, methods, fields, properties, etc. Quick-Fixes Remove redundant 'imports' If none of the symbols from a particular namespace are used, the corresponding Imports directive is considered as redundant. ReSharper provides a quick-fix to remove all such directives from a file. Import type If you use a symbol from a namespace that is not imported, ReSharper suggests to import the corresponding namespace and provides the necessary quick-fix. Add 'Async' modifier As you know, asynchronous operation support is a new feature still being developed. It has some advantages over synchronous programming. As ReSharper keeps pace with times, this feature is supported since ReSharper 6.1. The. Examples of Context Actions Add new format item If you need to add some runtime data (= dynamic data) to a string literal, use this context action. It wraps the string literal with the. Create overload without parameter For each parameter of a function there's a context action that will create a function without that parameter which calls the original function. Implement member After a base class is defined, the next logical step is to implement its members in all classes derived from the base class. You can write code manually, but a better decision is to apply the appropriate context action. ReSharper automatically detects all derived classes and prompts you to decide where a base class member should be implemented and generates code. Rearrange Code.
http://www.jetbrains.com/resharper/webhelp60/ReSharper_by_Language__Visual_Basic__Code_Analysis_and_Coding_Assistance.html
CC-MAIN-2014-41
en
refinedweb
java.lang.Object javax.swing.tree.DefaultMutableTreeNodejavax.swing.tree.DefaultMutableTreeNode oracle.javatools.ui.tree.lazy.LazyParentNodeoracle.javatools.ui.tree.lazy.LazyParentNode public class LazyParentNode A LazyParentNode is a type of DefaultMutableTreeNode that doesn't compute its children until the user attempts to expand the node. Thus avoiding expensive creation of child nodes that might never be seen by the users. To use: Create an implementation of LazyParent which fetches the children, create a LazyParentNode and add it to a tree's DefaultTreeModel The LazyParent instance given at construction time is set as the 'UserObject' of this DefaultMutableTreeNode. If the process to determine the children takes any significant time then a temporary LazyProgressNode is displayed in the tree to give the user feedback on what's happening. public LazyParentNode(javax.swing.tree.DefaultTreeModel treeModel, LazyParent lazyParent) public LazyParentNode(javax.swing.tree.DefaultTreeModel treeModel, LazyParent lazyParent, LazyProgressController controller) The controller will inform listeners whether any progress nodes in this tree are currently fetching their children and has the ability to cancel fetching. treeModel- The DefaultTreeModel of the tree containing the nodes lazyParent- A LazyParent implementation that will fetch the children of this node controller- a controller to track all of nodes in the tree
http://docs.oracle.com/cd/E16162_01/apirefs.1112/e17493/oracle/javatools/ui/tree/lazy/LazyParentNode.html
CC-MAIN-2014-41
en
refinedweb
Introduction. Using these two posts, you should now be able to get up and running and create a working external list that not only leverages SQL Azure as your line-of-business (LOB) data, but also gives you a little jQuery veneer. However, the question I’ve been wrangling with of late is how to leverage BCS in SP-O using a WCF service? Using a WCF service can be much more powerful than using the SQL Azure proxy because you can literally model many different types of data that are beyond SQL Azure—ranging from REST endpoints, in-memory data objects, entity-data model driven services, service bus-mediated connections to on-premises data, and so on. I’m still working through a couple of more complex scenarios, but I wanted to get something out there as I’ve had a number of requests on this. Thus, in this post, I’ll show you a simple way to use WCF to create an external list with the two core operations: read list and read item. The reason you will want to use WCF is that you will at some point want to connect your own LOB back-end to the SP-O external list, and you’re more than likely going to want to mediate the loading of that data with a WCF service of some sort. Now, if you’ve used the BDC Metadata Model templates in Visual Studio 2010, then you’ll be familiar with how you model the LOB data you’re trying to load into the external list (else, you’d just stick with a direct connection to SQL Azure) by using the web methods that map to the CRUD operations. You will do it in this post, though, using a generic WCF service project template (or more precisely, a Windows Azure Cloud project with a WCF Service role). In a set of future posts, I’ll walk through more complex scenarios where you have full CRUD methods that are secured through certificates. The high-level procedure to create an external list in SP-O that uses a cloud-based WCF service is as follows: 1. Create a back-end LOB data source that you’ll run your service against; 2. Create a WCF service and deploy to Windows Azure; 3. Assess the permissions in the Metadata Store; and 4. Create an external content type (ECT) in SharePoint Designer that digests your WCF service and creates the list for you. LOB Data In this example, the LOB data is going to be SQL Azure—yes, I know, but the back-end data source is less important than the connection to that data. That said, if you navigate to your Windows Azure portal () and sign in using your LiveID, you’ll see the options similar to the below in your developer portal. (You need to have a developer account to use Windows Azure, and if you don’t you can get a free trial here:.) Here are the general steps to create a new SQL Azure db: 1. Click Database to display the database management capabilities in the Windows Azure portal. 2. Click Create to create a new database. Provide a name for the database and select Web edition and 1GB as the maximum size. 3. Click the Firewall Rules accordion control to manage your firewall rules . Note that you’ll need to ensure you have the firewall of your machine registered here so you can access the SQL Azure database. 4.After you create your SQL Azure database, you can now navigate away open SQL Server 2008 R2 Management Studio. 5.When prompted, provide the name of your server and enter the log-in information. Also, click the Options button to expose the Connections Properties tab and select Customers (or whatever you named your SQL Azure database). Click Connect. SQL Server will connect to your new SQL Azure database. 6. 
When SQL Server connects to your SQL Azure instance, click the New Query button as illustrated in the following image. 7. You now have a query window with an active connection to our account. Now you have the Customer database, you need to create a table called CustomerData. To do this, type something similar to the following SQL script and click the Execute Query button: CREATE TABLE [CustomerData]( [CustomerID] [int] IDENTITY(1,1)NOT NULL PRIMARY KEY CLUSTERED, [Title] [nvarchar](8)NULL, [FirstName] [nvarchar](50)NOT NULL, [LastName] [nvarchar](50)NOT NULL, [EmailAddress] [nvarchar](50)NULL, [Phone] [nvarchar](30)NULL, [Timestamp] [timestamp] NOT NULL ) 8. You’ll now want to create a set of records for your new database table. To do this, type something similar to the following SQL script (adding different data in new records as many times as you’d like). INSERT INTO [CustomerData] ([Title],[FirstName],[LastName],[EmailAddress],[Phone]) VALUES ('Dr', 'Ties', 'Arts', 'ties@fabrikam.com','555-994-7711'), ('Mr', 'Rob', 'Barker', 'robb@fabrikam.com','555-933-6514') … 9. Eventually, you will have a number of records. To view all of the records you entered, type the following script and click the Execute Query button (where in this script Customers is the database name and CustomerData is the table name). Select * from Customers.dbo.CustomerData 14. The picture below illustrates the type of results you would see upon entering this SQL script in the query window. 15. Close the SQL Server 2008 R2 Management Studio, as you are now done adding records. This gives you a pretty high-level view of how to create and populate the SQL Azure DB. You can get a ton more information from the Windows Azure Developer Training Kit, which can be found here:. Now that you’re done with the SQL Azure DB, let’s move onto the cloud-based WCF service. Creating the WCF Service As I mentioned earlier, the WCF service needs to ‘model’ the data you’re pulling back from your LOB and exposing in your external list. Modeling the data means at a minimum creating a Read List method and Read Item method. Optional methods, and ones you’d arguably want to include, would be Create Method, Update Method and Delete Method. The WCF service you create will be using the Cloud template and be deployed to your Windows Azure account. To create the WCF service: 1. Open Visual Studio 2010 and click New Project. Select the Cloud option and provide a name for the project. 2. Select the WCF Service Web Role and click the small right-arrow to add the service to the cloud project. Click OK. 3. The cloud solution will create a new service project and the Windows Azure cloud project, which you use to deploy the service directly to your Windows Azure account. 4. You’ll want to add the SQL Azure db to your project, so select Data on the file menu and select Add New Data Source. 5. You can also right-click the main service project and select Add, New Item. Select Data and then select ADO.NET Entity Data Model. 6. You’re then prompted to walk through a wizard to add a new database as an entity data model. In the first step, select Generate from Database. In the second, select New Connection and then connect to your SQL Azure instance. (You’ll need to either obfuscate the connection string or include it in your web.config before moving on. Given this is a sample, select Yes to include in your web.config and click Next.) Now, select the tables you want to include in your entity data model, and click Finish. 7. 
Make sure you set the Build Action to EntityDeploy and Copy always, else you’ll spend countless hours with delightfully vague errors to work through. (Note that you may also try setting Embedded Resource as the Build Action if you have deploy issues. I had an issue with this the other day and noticed that when analyzing my assemblies using Reflector that some EDM resource files were missing. Setting Embedded Resource solved this problem.) This ensures your entity data model gets deployed to Windows Azure. (You’ll also note that in the screenshot below, I renamed my services to be more intuitive than Service1. I also have a clientaccesspolicy.xml file, which is only necessary if you’re going to consume the service from, say, a Silverlight application.) 8. Right-click the service project and select Add and then select Class. Provide a name for the class, and then add the following variables to the class. using System; using System.Collections.Generic; using System.Linq; using System.Web; namespace SharePointCallingSvc { public class CustomerRecord { public int objCustomerID { get; set; } public string objTitle { get; set; } public string objFirstName { get; set; } public string objLastName { get; set; } public string objEmailAddress { get; set; } public string objHomePhone { get; set; } } } 9. In the service code, add a method that will read one item and a method that will retrieve all items from the SQL Azure (or our LOB) data store. using System; using System.Linq; using System.Runtime.Serialization; using System.ServiceModel; using System.ServiceModel.Activation; using System.Collections.Generic; using Microsoft.ServiceBus; namespace SharePointCallingSvc { [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class SharePointCallingService : ISharePointCallingService { public CustomerRecord[] GetCustomers() { using (CustomersEntities db = new CustomersEntities()) { var myItems = (from c in db.Customers select c); CustomerRecord[] myCustomerArray = new CustomerRecord[myItems.Count()]; int i = 0; foreach (Customer item in myItems) { myCustomerArray[i] = new CustomerRecord(); myCustomerArray[i].objCustomerID = item.CustomerID; myCustomerArray[i].objTitle = item.Title; myCustomerArray[i].objFirstName = item.FirstName; myCustomerArray[i].objLastName = item.LastName; myCustomerArray[i].objEmailAddress = item.EmailAddress; myCustomerArray[i].objHomePhone = item.Phone; i++; } return myCustomerArray; } } public CustomerRecord GetCustomer(int paramCustomerID) { using (CustomersEntities db = new CustomersEntities()) { var myItem = (from c in db.Customers where c.CustomerID == paramCustomerID select c).FirstOrDefault(); CustomerRecord returnCustomer = new CustomerRecord(); returnCustomer.objCustomerID = myItem.CustomerID; returnCustomer.objTitle = myItem.Title; returnCustomer.objFirstName = myItem.FirstName; returnCustomer.objLastName = myItem.LastName; returnCustomer.objEmailAddress = myItem.EmailAddress; returnCustomer.objHomePhone = myItem.Phone; return returnCustomer; } } } } 10. Add the interface contract. using System; using System.Collections.Generic; using System.Runtime.Serialization; using System.ServiceModel; namespace SharePointCallingSvc { [ServiceContract] public interface ISharePointCallingService { [OperationContract] CustomerRecord[] GetCustomers(); [OperationContract] CustomerRecord GetCustomer(int CustomerID); } } 11. Right-click the Cloud project and select either Publish or Package. 
Publish is a more automated way of publishing your code to Windows Azure, but you need to configure it. Package is more manual. If you’ve never published an application to Windows Azure, select Package. Your solution will build and then Windows Explorer will open with the two built files—a configuration file and package. 12. Leave the Windows Explorer open for the time being and jump back to your Windows Azure portal. 13. Above, you can see that I’ve got a new hosted service set up, but if you don’t click New Hosted Service in the ribbon, fill out the properties of the new hosted service (e.g. Name, URI prefix, deployment name, etc.). Click Deploy to production environment and then click Browse Locally to load the package and configuration files—which should still be open in Windows Explorer. 14. Click OK, and then go have a coffee; it’ll take a few minutes to fully deploy. 15. When it is deployed, you will be able to click on the service definition to retrieve the WSDL, which should reflect the two web methods you included earlier. 16. At this point, I would create a simple test app to make sure your service works as expected. If it does, your one method (GetCustomer) will take an ID and pass back a single record, and your other method (GetCustomers) will return all of the records in the LOB back-end. Assuming your service works fine, you’re now ready to move onto the next step, which is making sure you’ve set the permissions for your ECT. Setting the Permissions for the External Content Type To set the permissions on the Metadata Store, where the ECTs are stored, simply navigate to the Business Data Connectivity option in your SP-O portal, and select Set Metadata Store Permissions. Type in the person you want to have permissions for the ECT, and click Add, and then set the explicit permissions. Click OK when done. You’re now ready for the final step: creating the external content type using the simple service. Creating the External Content Type Creating the external content type is similar to how you did it using SharePoint Foundation and SharePoint Designer; you create a new ECT and SharePoint Designer saves it directly to the site for you. 1. Navigate to your SP-O site and then click Site Actions and then select Edit in SharePoint Designer. 2. Click External Content Types in the left-hand navigation pane. 3. Click the External Content Type in the ribbon. Add a Name and a Display Name and leave the other options defaulted. Click the ‘Click here to discover…’ link. 4. In the External Data Source Type Selection, select WCF Service from the drop-down and then click OK. 5. You now add metadata about the WCF service in the WCF Connection dialog. For example, add the service metadata URL (e.g.), select WSDL as the metadata connection mode, and then add the service URL to the Service Endpoint URL (). Add an optional name. Click OK, and your service should resolve, and you’ll now be able to add the Read Item and Read List operations. 6. Below, you can see that I now have the two web methods that I created exposed in my data connection—so I can now create the ECT and save it to the Metadata Store. 7. To do this, right-click on each of the methods in sequence. When right-clicking the GetCustomer method, make sure you select the Read Item operation and follow the wizard. When right-clicking the GetCustomers method, select Read List as the operation, as is shown below. 8. 
Both times you right-click and select an operation, a wizard will open to guide you through the process of creating that operation. For example, when you right-click and select New Read Item Operation, you'll be prompted with a first step where you simply click Next. In the next step, you'll then need to map the ID in your web method as the Identifier. You then click Next and then Finish. At this point, you've created both the Read Item and Read List operations and can click the Create Lists & Form button to create a new list using that ECT. The result is a new external list, which is reflective of the class name properties as the column headers. And voila, you're now done. You've created an external list using BCS for SP-O using a WCF service talking to a SQL Azure back-end. To reiterate, we used the WCF service because we wanted to model our own return data using the web methods in the service. On the other side of that service could literally be many different types of LOB data. While you have the Metadata Store permissions that you assessed and set, the actual service is in some way unsecured. To get a service working to test out the modeling, this would be fine; though, for production code you may want to provide a more secure channel that comes with a username/password authentication and is protected using a certificate/key. A Couple of Tips When deploying services to Windows Azure, test often. I always test locally before I move to the cloud. You can build and test in the local emulator environment, or you can deploy the service to IIS to test it out there. Also, I am not a bindings expert in WCF, so this is an area I'm looking into right now to try and understand the best binding method for the service. For example, the serviceModel elements from my web.config are below, and I'm in the process of trying out different bindings—to manage both secure and non-secure WCF-based web services.
<system.serviceModel>
  <client />
  <bindings>
    <customBinding>
      <binding name="WCFServiceWebRole1.CloudToOnPremForwarder.customBinding0">
        <binaryMessageEncoding />
        <httpTransport />
      </binding>
    </customBinding>
  </bindings>
  <services>
    <service behaviorConfiguration="WCFServiceWebRole1.CloudWCFServiceBehavior" name="SharePointCallingSvc.SharePointCallingService">
      <endpoint address="" binding="customBinding" bindingConfiguration="WCFServiceWebRole1.CloudToOnPremForwarder.customBinding0" contract="SharePointCallingSvc.ISharePointCallingService" />
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="WCFServiceWebRole1.CloudWCFServiceBehavior">
        <useRequestHeadersForMetadataAddress>
          <defaultPorts>
            <add scheme="http" port="81" />
            <add scheme="https" port="444" />
          </defaultPorts>
        </useRequestHeadersForMetadataAddress>
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
</system.serviceModel>
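One more tip along the same lines: before wiring the service into SharePoint Designer, it's worth hitting it with a throwaway client to confirm both operations behave. The following is only a rough sketch of such a client, not code from this post; the endpoint address is a placeholder, the binding is assumed to mirror the binary-over-HTTP custom binding shown above, and it assumes you reference the project that defines ISharePointCallingService and CustomerRecord so the contract types resolve:
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using SharePointCallingSvc;   // reuse ISharePointCallingService and CustomerRecord from the service project

class SmokeTestClient
{
    static void Main()
    {
        // Must mirror the service-side custom binding: binary message encoding over HTTP transport.
        Binding binding = new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpTransportBindingElement());

        // Placeholder address; substitute your own *.cloudapp.net (or local emulator) URL.
        EndpointAddress address =
            new EndpointAddress("http://yourservice.cloudapp.net/SharePointCallingService.svc");

        ChannelFactory<ISharePointCallingService> factory =
            new ChannelFactory<ISharePointCallingService>(binding, address);
        ISharePointCallingService proxy = factory.CreateChannel();

        // Read List operation: should return every row inserted into CustomerData earlier.
        foreach (CustomerRecord c in proxy.GetCustomers())
            Console.WriteLine("{0} {1} <{2}>", c.objFirstName, c.objLastName, c.objEmailAddress);

        // Read Item operation: should return the single matching record.
        CustomerRecord one = proxy.GetCustomer(1);
        Console.WriteLine("Single record: {0} {1}", one.objFirstName, one.objLastName);

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
If the two calls print what you inserted into the CustomerData table, the Read List and Read Item operations are ready for the external content type work described above.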
What's Next?
As I mentioned earlier in the blog, I'll next follow up with the other web methods/operations that you'd want to build into the cloud-based WCF service and will hopefully get a chance to discuss how to manage secure connections using certificates. It'd also be great to hear from other folks who are banging away at this. I know the MS writers are working hard to get some docs out on it, but it's always good to test out different scenarios to take the BCS/SP-O functionality for a good test-run given the functionality is there now. Happy coding. Steve @redmondhockey
http://blogs.msdn.com/b/steve_fox/archive/2011/11/12/leveraging-wcf-services-to-connect-bcs-with-sharepoint-online.aspx
CC-MAIN-2014-41
en
refinedweb
But you can also find currently supported latest release on CodePlex website. CodePlex website: Mad Library I'm not planning to put here sourcecode, because it's being actualized regularly. This article is about how to make some part of project programming done faster than before with usage of Mad Library. Mad Library is interested in more 'zones' of programming. Sometimes, this library can help you. Mad Library is a free Open Source .NET Framework library which provides many accelerations to normal programming with preprogrammed functions. … and there are also another loads of smaller not specified functions for help and effectivity in programming. Mad Library also contains unclassified functions which are also usable in more sections of programming. Some of them are in examples which are written here. .NET framework provides more automatically and unautomatically generated properties which are usable for programmer. Mad Library also offers you attribute tagging. ProgramPropertyAttributte serves for setting custom properties of program when you don’t want or you don’t have time for own attributes. Example is missing because program properties are attributes. ProgramPropertyAttributte is documented, so you can read comments written for it in source code. ProgramPropertyAttributte Collections such as List or Dictionary are often used in code. Mad Library offers you Collection<> class for storing and managing reference types with public implicit constructor. Example of usage: Collection<> Collection<MyType> col = new Collection<MyType>(); col.Add(new MyType("BCD")); col.Add(new MyType("ABC")); col.Add(new MyType("CDE")); col.Add(new MyType("1T")); IOrderedEnumerable<MyType> ocol = col.OrderBy(t => t.Name); col.InsertAt(0, new MyType("ZABC")); col.InsertAt(col.Count - 1, new MyType("XYZ")); foreach (MyType mt in col) Console.WriteLine(mt.Name); Console.WriteLine("==============="); foreach (MyType t in ocol) Console.WriteLine(t.Name); Console.Read(); Mad Library is systematical. Most of data and information holding types are deriving from this interface, so when you’re working with some HTML or CSS elements or properties, or when you’re working with requests and responses, you can use functions of these interfaces for parsing string into it or for building string from it. This small example shows you how can you these interfaces for managing communicating data types. HttpResponse resp = new HttpResponse(); IParsable ip = resp as IParsable; ip.Parse("HTTP/1.1 200 OK\r\nOPTIONS\r\n\r\nDATADATA\r\n\r\n"); Console.WriteLine((resp as IBuildable).Build()); Console.Read(); Managing is one just one small bit helpful part of Mad Library which is available for derivation and for next use. Managing of items can help you when you will try to make small application with should manage many kinds of items. Item first = new Item(1, "First test item."); //Creates new item Item second = new Item(2); //Creates new item second.Name = "A second one."; //Sets name of second to 'A second one.' Items i = new Items(first, second); //Creates new item collection. i.Sort(ItemsSort.Name); //Sorts created collection by item name. foreach (Item a in i) //Enumerates sorted collection. Console.WriteLine(a.Name); //Prints name of enumerated item. Our product management is primary targeted for derivation. 
Product first = new Product(99991, "First product.", 200.5); Product second = new Product(99992, "Another product.", 200.5000001); Products collection = new Products(first, second); collection.SetProductsCategory(new ProductCategory("Beer", "Our best beers.")); foreach (Product p in collection) Console.WriteLine(p.Category); User management is based on User class and it’s created for fast use. User first = new User("User2", "TestTest", "test@test.test"); User second = new User("User1", "Test", "test@test.test"); Users col = new Users(first, second); col.Sort(SortConfiguration.Username); Console.WriteLine(first.VerifyUser("User1", "TestTest")); Console.WriteLine(first.CompareUsers(second)); foreach (User u in col) Console.WriteLine(u.Email); Persons ready in last check in. You can see them (with documentation) on codeplex. Mathematics are in Mad Library targeted for now on fractions, proportionalities and percentage. Fractions are frequently missing in code when you need to use some mathematical or physical equations and you need to be precise. Mad Library offers you Fraction class which is documented and prepared for normal use. Operators are set for basic mathematical operations and and conversions. Fraction f = new Fraction(10, 20); f += new Fraction(1, 2); f /= new Fraction(1, 2); f *= new Fraction(15, 20); double res = f; Console.WriteLine(res); Console.Read(); Sometimes you need to use direct proportionality in your code. Direct proportionality is simpler than inverse and it seems like you don’t need to use some external codes for it in your project. But Mad Library is sure that you sometimes needed use some equations or functions that are really simple but later you found out that you spent lot of time when you were working on it. So down in this part of document is written example of that, what Mad Library offers you in this problematic. DirectProportionality proportionality = new DirectProportionality(3); double x = proportionality.GetX(36); double y = proportionality.GetY(x); double coefficient = MadLibrary.Math.DirectProportionality.GetCoefficient(x, y); Console.WriteLine(proportionality.Coefficient == coefficient); As it’s written higher in this document, in programming are sometimes occurring some easy problems about which you now how to easily solve them, but you must spent lot of time when you’re working on it. Inverse proportionality is bit more difficult than direct, but for computer it’s almost same. Mad Library offers you solution for this kind of proportionality. InverseProportionality prop = new InverseProportionality(10); double x = prop.GetX(10); double y = prop.GetY(x); double coefficient = MadLibrary.Math.InverseProportionality.GetCoefficient(x, y); Console.WriteLine(coefficient == prop.Coefficient); Calculation of percentage is also simple and it’s also missing in .NET 4.5 library. Percentage percentage = new Percentage(500, 50); Console.WriteLine(percentage.Value); double bas = percentage.GetBase(50); double val = percentage.GetValue(bas); Console.WriteLine(percentage == MadLibrary.Math.Percentage.GetPercentage(bas, val)); Mad Library offers you very usable part of itself which is named Networking. Networking is primary targeted on Client<->Server communication, but it also contains P2P communication functions and communication by more protocols, eg. HTTP (builders of HTTP request and HTTP response), NODA, … Client – Server communication is many times simple, but from experiences you certainly know that you must spend lot of time when you’re working on it. 
Sometimes, when you’re creating application which is primary based on Client - Server communication you must program your own code because you have some special functions, methods or properties which you must do. But sometimes you’re using this kind of communication only for small operations, or this kind of communications is not so important part of your application. Then you can use preprogrammed options of Mad Library. Code: Communicator cm = new Communicator(new Requester(new IPEndPoint(IPAddress.Loopback, 1025))); cm.RequestData = new MadLibrary.Net.Client.Request("GIVE ME RESPONSE"); ExampleServer srv = new ExampleServer(); srv.Start(); cm.Open(); cm.Request(); Console.WriteLine(cm.ResponseData); cm.Close(); srv.End(); Console.Read(); ExampleServer: public class ExampleServer : Server { public ExampleServer() : base(1025) { } public override void Process(Socket client) { byte[] data = new byte[5]; client.Receive(data); client.Send(data); } } HTTP protocol is mostly used protocol in the world. It’s our daily part. Daily part of professional programmers and also daily part of normal users. So these utterance is reason why was build HTTP manager in Mad Library. For now, we have just holders, builders and parsers in Mad Library, but these types are compatible with Request and Response classes in Client-Server communication by the operators (for now implicit). By these conversions you can send HTTP requests and get HTTP responses by the communicator and server. Down is written example about few functions of HTTP in Mad Libraries. HttpRequest request = new HttpRequest(); request.Method = "GET"; request.HttpVersion = "1.1"; request.Options.Add("User-Agent", "unknown"); request.RequestedFile = "index.html"; request.Data.Add("variable", "value"); Console.WriteLine(request.Build()); HttpResponse resp = new HttpResponse(); resp.HttpVersion = "1.1"; resp.Content = ""; resp.Options.Add("Server", "this one"); resp.StatusCode = 200; resp.StatusMessage = "OK"; Console.WriteLine(resp.Build()); Console.Read(); More about NDMP. Code: ServiceListener<ExampleService> listener = new ServiceListener<ExampleService>(1025); listener.Start(); Client client = new Client(new IPEndPoint(IPAddress.Loopback, 1025)); client.Connect(); MadLibrary.Net.Ndmp.Response resp = client.Select("wWwWwWw"); client.Disconnect(); Console.WriteLine(resp.ToString()); Console.Read(); ExampleService: public class ExampleService : Service { public ExampleService() : base() { } public ExampleService(IPEndPoint client) : base(client) { } public override Response ProcessClient(Request request) { return base.ProcessClient(request); } } Peer-to-Peer communication is also important part of network communication and programming. Mad Library offers you classes which can hold P2P communication information and which can manage/mediate that kind of communication. So here is example: Peerer p = new Peerer(new PeererInfo(1025, System.Net.IPAddress.Loopback)); new Thread(new ThreadStart(() => { Thread.Sleep(100); p.Connect(); new Thread(new ThreadStart(() => { Console.WriteLine(p.Receive(3)); })).Start(); p.Send("LOL"); Thread.Sleep(100); })).Start(); Console.ReadLine(); We’re communicating everyday. We’re using our mouth and tongue to create speech. We’re using our fingers and hands to write informations, contracts and create masterpieces or farces. But why we must strain ourselves? We can use computers! Mad Library offers you ways and means to make your existence as programmer easier by random text generator. Here is small example of usage. 
RandomText rt = new RandomText(SupportedCharacters.All, CharacterCase.Lower, 10); rt.Generate(); Console.WriteLine(rt.GeneratedText); string gtedText = MadLibrary.Text.RandomText.Generate(SupportedCharacters.Classic, CharacterCase.Alternative, 15, false); Console.WriteLine(gtedText); Console.Read(); Formatting is not ready yet. We're planning to make Wikipedia formatting parser and BBCodes. As it’s written somewhere here, web sites, web files, HTTP protocol and all about and above it is being used and developed daily. Web is for now large and important part of Mad Library. It contains managing of web files, web directories, web pages, web sites, HTML documents, CSS files, etc. and it works good. Out web “toolkit” () is simple and usable. It’s not based on .NET’s System.Xml, it’s our own creation. System.Xml HTML is markup language for web pages which are being transferred by HTTP protocol. It’s similar to XML and by the years it started to be more freely in writing and it changed in a lot things. There where added new tags which are now supported by most of modern web browsers. Mad Library is primary and mostly working with classic XHTML 1.0 Strict, so when you’re trying to parse some strings or files into HTML classes in library, be sure that your strings and files are complying basic important rules of that version. So here is small example of that what you can do with HTML in Mad Library. HtmlDocument doc = new HtmlDocument(); //creating new HTML document HtmlElement html = new HtmlElement("html", ElementType.Compound); //creating root element html.Elements.Add(new HtmlElement("head", ElementType.Compound)); HtmlElement body = new HtmlElement("body", ElementType.Compound); html.Elements.Add(body); HtmlElement div = new HtmlElement("div", ElementType.Compound); div.InnerText = "HELLO WORLD!"; div.Attributes.Add(new HtmlAttribute("style", "background-color: red;color: black;font-weight: bold;")); body.Elements.Add(div); doc.Elements.Add(html); doc.Save("C:\\Web\\index.html"); Output: <html> <head> </head> <body> <div style="background-color: red;color: black;font-weight: bold;"> HELLO WORLD! </div> </body> </html> CSS is style sheet language which is being daily used by web designers. It’s often used with markup languages and it has simple syntax with simple rules. For now, lastest version of CSS is CSS 3.0 with more delicate and harder functions for making web page more beautiful. Mad Library contains simple small parser and builder for this language, so you can use it and you don’t have to code all parsing processes. Down is small example of CSS management in Mad Library. 
CssDocument doc = new CssDocument();
CssStyle style = new CssStyle(new CssGroup("div", CssGroupType.Element));
style.Properties.Add(new CssProperty("background-color", "black"), new CssProperty("color", "white"));
CssStyle hover = new CssStyle(new CssGroup[] { new CssGroup("div", CssGroupType.Element) }, new CssGroup("hover", CssGroupType.PseudoClass));
hover.Properties.Add(new CssProperty("background-color", "white"), new CssProperty("color", "black"));
doc.Styles.Add(style, hover);
doc.Save("C:\\Web\\main.css");

Output:

div {
    background-color: black;
    color: white;
}
div:hover {
    background-color: white;
    color: black;
}

All web documents in Mad Library derive from the WebFile class, which contains the basic properties common to this kind of document. There is also the WebDirectory class, which loads the files of a web page by its HTTP call path together with the WebPage class, and a WebSite class which is usable for managing a portal or a portfolio - simply, for web site management. Here is a small example:

HtmlDocument doc = new HtmlDocument("index");
CssDocument css = new CssDocument("main");
WebDirectory directory = new WebDirectory("Web", doc, css);
WebPage page = new WebPage(doc, css);
WebSite site = new WebSite(directory, "C:\\Web");
CssDocument ncss = site.GetStyles("main.css");
Console.WriteLine(css.Name == ncss.Name);

Output:

True

These examples show only a small part of the functions that are in Mad Library. To be informed about new releases you can follow us on CodePlex. Thank you for reading. Just a small note: please don't be frightened when you see some grammatical mistakes.
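One small note on the Collection<MyType> example near the top of the article: MyType is not part of Mad Library, it is just a stand-in for any reference type with a public parameterless constructor and a Name property. A minimal class along these lines (my own sketch, not taken from the library) is enough to make that example compile:

public class MyType
{
    public string Name { get; set; }

    // Mad Library's Collection<> expects a public parameterless constructor...
    public MyType() { }

    // ...while the example above also builds instances directly from a name.
    public MyType(string name)
    {
        Name = name;
    }
}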
http://www.codeproject.com/Articles/405309/Making-some-parts-of-programming-easier-Mad-Librar
CC-MAIN-2014-41
en
refinedweb
This article is in the Product Showcase section for our sponsors at CodeProject. These reviews are intended to provide you with information on products and services that we consider useful and of value to developers. Spokes offers different ways to use its API and there are some interesting thing that can be done to get to know your device. Python is my language of choice when 'poking around' to see see what an API can generate based on different inputs. This helps me to come up to speed and start leveraging its full potential. With Spokes it is the same. You can also use a language that offers an interactive approach to play with its API. In this example below, I connect to Spokes via its REST API by simply using the httplib and json modules from Python 2.7.3: import httplib import json class PLTDevice: def __init__(self, spokes, uid): self.DeviceInfoURL = '/Spokes/DeviceServices/'+uid+'/Info' self.AttachURL = '/Spokes/DeviceServices/'+uid+'/Attach' self.ReleaseURL = '/Spokes/DeviceServices/'+uid+'/Release' self.attached = False self.session = None self.uid = uid self.spokes = spokes self.spokes.conn.request('GET', self.DeviceInfoURL); r = self.spokes.conn.getresponse() if r.status == 200: response = r.read() response = json.loads(response) self.device = response['Result'] def __getattr__(self, name): if name in self.device.keys(): return self.device[name] def attach(self): self.spokes.conn.request('GET', self.AttachURL); r = self.spokes.conn.getresponse() if r.status == 200: response = r.read() response = json.loads(response) if response['Err']==None: self.attached = True self.session = response['Result'] def release(self): self.spokes.conn.request('GET', self.ReleaseURL); r = self.spokes.conn.getresponse() if r.status == 200: response = r.read() response = json.loads(response) if response['Err']==None: self.attached = False self.session = None def get_events(self, queue=127): eventsURL = '/Spokes/DeviceServices/'+self.session+'/Events?queue='+str(queue) self.spokes.conn.request('GET', eventsURL); r = self.spokes.conn.getresponse() if r.status == 200: response = r.read() response = json.loads(response) print response class Spokes: def __init__(self): self.host = '127.0.0.1' self.port = '32001' self.conn = httplib.HTTPConnection(self.host+":"+self.port) self.DevicesListURL = '/Spokes/DeviceServices/DeviceList' self.devices=[] def get_device_list(self): self.conn.request('GET', self.DevicesListURL); r = self.conn.getresponse() if r.status == 200: response = r.read() response = json.loads(response) if response['Err']==None: for d in response['Result']: self.devices.append(PLTDevice(self, d['Uid'])) return self.devices This is by no means a full solution, it is simply a quick and dirty way to invoke the most basic APIs from the REST interface (I've added close to no error checks). It does show how one could interact with Spokes in an interactive way to learn the device events and how to use them. 
Now, after loading the code above and switching to the interactive mode: >>> s=Spokes() >>> print s <__main__.Spokes instance at 0x029FDFD0> >>> print s.devices [] >>> dl = s.get_device_list() >>> print len(dl) 1 >>> d = dl[0] >>> print d.attached False >>> d.attach() >>> print d.attached True >>> d.get_events() {u'Description': u'', u'Err': None, u'Type_Name': u'DeviceEventArray', u'Result': [{u'Event_Log_Type_Name': u'HeadsetStateChange', u'Event_Id': 14, u'Timestamp': 634939487913969843L, u'Age': 10853, u'Event_Name': u'Doff', u'Event_Log_Type_Id': 2}, {u'Event_Log_Type_Name': u'HeadsetStateChange', u'Event_Id': 13, u'Timestamp': 634939487969032861L, u'Age': 5347, u'Event_Name': u'Don', u'Event_Log_Type_Id': 2}], u'Type': 6, u'isError': False} Note that since I didn't continue implementing the code for the events and other APIs, the event list is displayed as it was returned. I simply parse the JSON result and print it in get_events(): get_events() I specially like Python for its flexibility on accessing the values on its data structures as demonstrated above in PLTDevice.__getattr__. ProductName is not defined as a member in the class, but still gets accessed as one. Its content comes from the response we received from Spokes. PLTDevice.__getattr__ ProductName >>> print d.ProductName Plantronics BT300 From here you could start building your app to actually monitor the events queue and act on them, or keep on looking around in interactive mode to see what else you can do with the device and APIs. This approach usually works well for me on troubleshooting REST APIs and I hope it helps you as well. Just another way of doing it.)
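To go one step further than the interactive session, here is a rough polling loop built only on the classes defined above. It is purely illustrative: it assumes Spokes is listening on 127.0.0.1:32001 and that at least one headset is plugged in.

import time

spokes = Spokes()
devices = spokes.get_device_list()

if devices:
    device = devices[0]
    device.attach()
    try:
        while True:
            # get_events() above simply prints the parsed JSON response;
            # a real handler would inspect the Event_Name values ('Don', 'Doff', ...)
            # seen in the sample output and react to them instead of printing.
            device.get_events()
            time.sleep(1)
    except KeyboardInterrupt:
        device.release()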
http://www.codeproject.com/Articles/530394/Connecting-to-Spokes-REST-API-in-Python
CC-MAIN-2014-41
en
refinedweb
31 August 2009 16:54 [Source: ICIS news] SINGAPORE (ICIS news)--Sinopec has proposed selling its purified terephthalic acid (PTA) and monoethylene glycol (MEG) for September delivery within the Chinese domestic market at lower and rollover prices, respectively, compared with the previous month, company officials said Monday. For PTA, the supplier proposed a September price of yuan (CNY) 8,000/tonne delivered, CNY200/tonne lower than the August final price. For MEG, Sinopec's proposed September price was a rollover from its August final number of CNY6,350/tonne delivered. Some of Sinopec's customers expressed surprise over its PTA price, as other leading producers, such as Xiang Lu Petrochemical and Yisheng Petrochemical, had last week proposed selling September volumes at CNY7,900/tonne delivered. "For the MEG price, it's within expectations, but you know, a proposed price is just tentative; we'd still have to see how the market develops at the end of the month," said a source from Yixing Ming Yang Polyester, one of Sinopec's customers for the fibre intermediates. Freya Tang and Clytze Yan of CBI China contributed to this article. ($1 = CNY 6.83) For more on Sinopec visit ICIS company intelligence. For more on PTA and MEG visit ICIS chemical intelligence.
http://www.icis.com/Articles/2009/08/31/9243839/sinopec-seeks-lower-pta-price-for-sept-rolls-over-meg.html
CC-MAIN-2014-41
en
refinedweb
in reply to Newbie, AD problem

lastLogon is stored as a large integer that represents the number of 100-nanosecond intervals since January 1, 1601 (UTC). A value of zero means that the last logon time is unknown. I use an incantation like the following to get date-type values out of AD:

sub msqtime2perl {
    # MicroSoft QuadTime to Perl epoch time
    my $foo = shift;
    my ($high, $low) = map { $foo->{ $_ } } qw(HighPart LowPart);
    return unless $high and $low;
    return ((unpack("L", pack("L", $low)) + unpack("L", pack("L", $high)) * (2 ** 32)) / 10000000) - 11644473600;
}

if ( $userObject->{lastLogon} and $userObject->{lastLogon}->{HighPart} + $userObject->{lastLogon}->{LowPart} ) {
    $epochtime = msqtime2perl( $userObject->{lastLogon} );
}
http://www.perlmonks.org/?node_id=906789
CC-MAIN-2014-41
en
refinedweb
Ticket #1145 (closed defect: fixed)

ran_migration receives the app name as string, but migration as migration object.

Description

I have a code snippet as such:

def check_need_to_run(app=None, migration=None, method=None, *args, **kwargs):
    if app == "foo" and migration == "0041_add_new_bar":
        process_post_migration()

ran_migration.connect(check_need_to_run)

But it errors out because south.migration.base's __eq__ method is being called. The documentation should be updated to reflect that the migration object is being passed, or else south.migration.migrator.Migrator should be updated to pass the migration name as a string object.

Change History

Docs corrected in [aa19d6a1a7b0].
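For illustration, here is the same handler rewritten to cope with receiving a Migration object rather than a plain string. This is a sketch, and it assumes the object's string form includes the migration name, which is worth verifying against your South version:

def check_need_to_run(app=None, migration=None, method=None, *args, **kwargs):
    # 'migration' is a south.migration.base Migration instance, not a string,
    # so compare against its string form instead of the bare name.
    if app == "foo" and "0041_add_new_bar" in str(migration):
        process_post_migration()

ran_migration.connect(check_need_to_run)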
http://south.aeracode.org/ticket/1145
CC-MAIN-2014-41
en
refinedweb
Choosing a Replication Method

Updated: March 28, 2003
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

If you plan to use multiple targets and you want to synchronize data in those targets, you need to choose a replication method. You have several methods for replicating data:

- A manual replication method (such as using the command-line tool Robocopy.exe, which is available in the Windows Server 2003 Deployment Kit)
- FRS
- A third-party replication tool

It is not mandatory to use FRS to keep targets synchronized. In fact, by default, FRS is not enabled for DFS targets in domain-based DFS namespaces. However, in general you do want to make sure that the underlying shared folders that correspond to DFS links and targets are synchronized to present the same data to users, regardless of the folder that they want to access. The following sections describe when to use manual replication or FRS.

For an Excel spreadsheet to assist you in documenting the replication method for each target, see "DFS Configuration Worksheet" (Sdcfsv_1.xls) on the Windows Server 2003 Deployment Kit companion CD (or see "DFS Configuration Worksheet" on the Web). For information about using third-party replication tools, consult the documentation provided by your software vendor.

When to Use Manual Replication

If the data in the shared folder is static, you can replicate the data by doing a one-time copy of the data to a target in the replica set. Even if the data in the shared folder is dynamic but changes infrequently, you might want to keep the targets synchronized by downloading the initial copies over the network and then manually updating them with changes. You must use manual replication if you plan to use a stand-alone DFS namespace or if one or more link targets for a particular link do not run Windows 2000 Server or Windows Server 2003.

When to Use FRS

FRS works by detecting changes to files and folders in a replica set and replicating those changes to other file servers in the replica set. When a change occurs, FRS replicates the entire file, not just the changed bytes. You can use FRS only if you are using a domain-based DFS namespace, and only servers running a Windows 2000 Server or Windows Server 2003 operating system can be part of a replica set. In addition, all replica sets must be created on NTFS volumes. FRS offers a number of advantages over manually copying files, including:

Continuous replication. FRS can provide continuous replication, subject to server and network loads. When a file or folder change occurs, and the file or folder is closed, FRS can begin replicating the changed file or folder to outbound partners (that is, the replica members that will receive the changed file or folder) within five seconds.

Replication scheduling. You can schedule replication to occur at specified times and durations as needed by your organization. Scheduling replication to occur during evening hours, for example, can reduce the cost of transmitting data over expensive WAN links. Replicating data during off-hours also frees up network bandwidth for other uses.

Compression. To save disk space, FRS compresses files in the staging directory by using NTFS compression. Files sent between replica members remain compressed when transmitted over the network.
Authenticated RPC with encryption. To provide secure communications, FRS uses the Kerberos authentication protocol for authenticated remote procedure call (RPC) to encrypt the data sent between members of a replica set.

Fault-tolerant replication path. FRS does not rely on broadcast technology, and it can provide fault-tolerant distribution via multiple connection paths between members. If a given replica member is unavailable, the data will flow via a different route. FRS uses logic that prevents a file from being sent more than once to any given member.

Conflict resolution. FRS can resolve file and folder conflicts to make data consistent among the replica members. If two identically named files on different servers are added to the replica set, FRS uses a "last writer wins" algorithm, which means that the most recent update to a file in a replica set becomes the version of the file or folder that replicates to the other members of the replica set. If two identically named folders on different servers are added to the replica tree, FRS identifies the conflict during replication and renames the folder that was most recently created. Both folders are replicated to all servers in the replica set, and administrators can later merge the contents of the two folders or take some other measure to reestablish a single folder.

Replication integrity. FRS relies on the update sequence number (USN) journal to log records of files that have changed on a replica member. Files are replicated only after they have been modified and closed. As a result, FRS does not lose track of a changed file even if a replica member shuts down abruptly. After the replica member comes back online, FRS replicates changes that originated from other replica members, as well as replicating local changes that occurred before the shutdown. This replication takes place according to the replication schedule.

When you use FRS, the link targets might not always be completely synchronized. As a result, one client's view of a link in a DFS namespace can be different from another client's view of the same link. This inconsistency can happen when clients have been referred to different link targets in the namespace. Link targets do become consistent with time, but you might experience temporary inconsistencies due to replication latency when updates are occurring. For more information about using FRS, see "Choosing an Availability Strategy for Business-Critical Data" later in this chapter.

FRS is typically used to keep link targets synchronized. It is also possible to put files and folders directly in a domain-based DFS root target and then enable replication on the root so that the files and folders are replicated to all root targets. However, avoid enabling replication on domain-based DFS roots for the following reasons:

- Any files or folders placed directly in the root are replicated to every root target, and if identically named folders are created on more than one root target, FRS renames the conflicting folders during replication, which can leave so-called morphed folders in the root. Note: If morphed folders do occur, you must use the /RemoveReparse:<DirectoryName> parameter in Dfsutil.exe to delete each morphed folder. For more information about morphed folders, see "Choosing an Availability Strategy for Business-Critical Data" later in this chapter.
- When adding a new root target to an FRS replicated root, you cannot replicate the contents of individual folders in the root based on business priority. Instead, the entire contents of the root are replicated to the new root target.
On the other hand, if you enable replication on individual links, you can take a new link target offline while the initial replication takes place or whenever you want to restrict access to a particular link target.
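For reference, cleaning up a morphed folder with the Dfsutil.exe parameter mentioned in the note above looks roughly like the following. The path is a placeholder; FRS typically appends an _NTFRS_xxxxxxxx suffix to the renamed duplicate folder.

dfsutil /RemoveReparse:D:\DfsRoot\Public\Reports_NTFRS_001a2b3c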
http://technet.microsoft.com/en-us/library/cc759493(v=ws.10).aspx
CC-MAIN-2014-41
en
refinedweb
it is to add or change functionality without impacting existing core system functionality. Let's take a simple example. Suppose your company has a core product to track all the users in a sports club. Within your product architecture, you have a domain model represented by JPA POJOs. The domain model contains many POJOs, including, of course, a User POJO.

package com.alex.staveley.persistence;

/**
 * User entity. Represents Users in the Sports Club.
 *
 * Note: The SQL to generate a table for this in MySQL is:
 *
 * CREATE TABLE USER (ID INT NOT NULL auto_increment, NAME varchar(255) NOT NULL,
 * PRIMARY KEY (ID)) ENGINE=InnoDB;
 */
@Entity
public class User {

    /* Surrogate Key - automatically generated by DB. */
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Id
    private int id;

    private String name;

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

Now, some customers like your product but need some customisations done before they buy it. For example, one customer wants a birthplace attribute added to the User and wants it persisted. The logical place for this attribute is, of course, the User POJO, but no other customer wants this attribute. So what do you do? Do you make a specific User class just for this customer and then swap it in just for them? What happens when you change your product User class then? What happens if another customer wants another customisation? Or changes their mind? Are you sensing things are going to get messy?

Thankfully, one implementation of JPA, EclipseLink, helps out here. The 2.3 release (available since June 2011, the latest release being the 2.3.2 maintenance release of 9th December 2011) includes some very nice features which work a treat for this type of scenario. Let's elaborate.

By simply adding the @VirtualAccessMethods EclipseLink annotation to a POJO, we signal to EclipseLink that the POJO may have some extra (also known as virtual) attributes. You don't have to specify any of these extra attributes in code, otherwise they wouldn't be very virtual! You just have to specify a generic getter and setter to cater for their getting and setting. You also have to have somewhere to store them in memory, something like a good old HashMap, which of course should be transient because we don't persist the HashMap itself. Note: they don't have to be stored in a HashMap, it's just a popular choice! Let's take a look at our revamped User, which is now extensible:

import java.util.HashMap;
import java.util.Map;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Transient;

import org.eclipse.persistence.annotations.VirtualAccessMethods;

@Entity
@VirtualAccessMethods
public class User {

    /* Surrogate Key - automatically generated by DB. */
    @GeneratedValue(strategy=GenerationType.IDENTITY)
    @Id
    private int id;

    private String name;

    @Transient
    private Map<String, Object> extensions = new HashMap<String, Object>();

    public int getId() {
        return id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public <T> T get(String name) {
        return (T) extensions.get(name);
    }

    public Object set(String name, Object value) {
        return extensions.put(name, value);
    }
}
<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/orm" version="2.3">
    <entity class="com.alex.staveley.persistence.User">
        <attributes>
            <basic name="thebirthplace" attribute-type="String" access="VIRTUAL">
                <column name="birthplace"/>
                <access-methods get-method="get" set-method="set"/>
            </basic>
        </attributes>
    </entity>
</entity-mappings>

Now this configuration simply states that the User entity has an additional attribute, which in Java is "thebirthplace", and that it is virtual. This means it is not explicitly defined in the POJO, but if we were to debug things we'd see the attribute with the name 'thebirthplace' in memory. The configuration also states that the corresponding database column for the attribute is birthplace, and that EclipseLink can get and set it by using the generic get/set methods.

You wanna test it? Well, add the column to your database table. In MySQL this would be:

alter table user add column birthplace varchar(64)

Then run this simple test:

@Test
public void testCreateUser() {
    User user = new User();
    user.setName("User1Name");
    user.set("thebirthplace", "donabate");

    entitymanager.getTransaction().begin();
    entitymanager.persist(user);
    entitymanager.getTransaction().commit();
    entitymanager.close();
}

So now we can have one User POJO in our product code which is extensible. Each customer can have their own attributes added to the User, as they wish. And of course, each customer is separated from all other customers very easily, by just ensuring that each customer's extensions reside in a customer-specific eclipselink-orm.xml. Remember, you are free to name these files as you want, and if you don't use the default names you just update the persistence.xml file to state what names you are using.

This approach means that when we want to update User in our product, we only have to update one and only one User POJO (because we have ensured there is only one). But when specific attributes have to be added for specific customer(s), we don't touch the User POJO code. We simply make the changes to the XML and do not have to recompile anything from the core product. And at any time it is easy to see what the customisations are for any customer by just looking at the appropriate eclipselink-orm.xml file. Ye Ha. Happy Extending!

References: Extending your JPA POJOs from our JCG partner Alex Staveley at Dublin's Tech Blog.
http://www.javacodegeeks.com/2012/01/extending-your-jpa-pojos.html
CC-MAIN-2014-41
en
refinedweb
Since this is ultimately covered by the [MS-CFB] Open Specification document, I will instead present a few examples of using Python to alter Windows-specific file-system security. Python provides a means for native file manipulation using the win32security module. This module allows the application in question to call the relevant Win32 Security API functions, and the function names correlate exactly to the security API function names found in the Windows API, which are documented on MSDN. However, the module only implements a subset of these functions; the list of supported functions ships with the module's own documentation.

Displaying the current file-system controls of a file

import win32security

# Obtain the security descriptor, requesting the owner information of the file.
secdesc = win32security.GetFileSecurity(r"C:\Temp\VerifySchTask.txt",
                                        win32security.OWNER_SECURITY_INFORMATION)

# The secdesc object now contains everything we need in regards to file permissions.
# We can now call any of the various Win32 security API functions.
# For example, let's get the owner's SID:
osid = secdesc.GetSecurityDescriptorOwner()

# Display the owner SID.
print osid

# Now let's just dump the security descriptor object to view it.
print secdesc

Altering the ACL of a file

import win32security

# Apply the security descriptor back to the aforementioned file.
win32security.SetFileSecurity(r"C:\Temp\VerifySchTask.txt",
                              win32security.DACL_SECURITY_INFORMATION,
                              secdesc)

Encryption

The contents of the file can be encrypted and stored in user-defined streams. In fact, if you review [MS-OFFCRYPTO], you will see this is how the latest version of Office 2010 works: it takes the XML content and stores it in a special stream that contains the encrypted form of the working document. However, it is much easier to present/store the file on a file system that is encrypted (e.g. on Windows you have BitLocker, EFS, etc.). If you wish to implement your own security mechanism, you can review the \EncryptedPackage stream structure as defined in Section 2.3.4.4 of [MS-OFFCRYPTO]. This stream contains the entire Office document in compressed and encrypted form.
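The snippet above only reapplies an existing descriptor; to actually change the DACL you would typically fetch it, append an access control entry, and write it back. A minimal sketch along those lines follows (the account name and path are placeholders, and the ACE simply grants read access):

import win32security
import ntsecuritycon

path = r"C:\Temp\VerifySchTask.txt"

# Look up the SID of the account we want to grant access to.
user_sid, domain, acct_type = win32security.LookupAccountName(None, "SomeUser")

# Read the existing DACL from the file.
sd = win32security.GetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION)
dacl = sd.GetSecurityDescriptorDacl()
if dacl is None:
    dacl = win32security.ACL()

# Append an access-allowed ACE granting generic read access.
dacl.AddAccessAllowedAce(win32security.ACL_REVISION,
                         ntsecuritycon.FILE_GENERIC_READ,
                         user_sid)

# Put the modified DACL into the descriptor and write it back to the file.
sd.SetSecurityDescriptorDacl(1, dacl, 0)
win32security.SetFileSecurity(path, win32security.DACL_SECURITY_INFORMATION, sd)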
http://blogs.msdn.com/b/openspecification/archive/2011/06/10/exploring-the-cfb-file-format-9.aspx
CC-MAIN-2014-41
en
refinedweb
Lately I’ve been involved on starting up my team blog: if you read Italian and want to listen about stories from Support Engineers working on various technologies, you can’t avoid signing up to the feed: Now guess what’s been the topic of the posts I’ve been writing there… yes! Memory Management on Windows CE\Mobile and NETCF!! (so far the intro and then part 0, 1, 2 – don’t ask me why I made it 0-based, even after an intro… I really don’t remember) And since I’ve been adopting an approach that is not usually the one used to explain how things work, I honestly think I can reuse the same one and translate it here… probably I’m going to repeat some concepts I’ve already blogged about, however it may be worth in order to have one single document, this post, as hopefully a possible quite-ultimate way to describe how memory is handled… is the bar too high?? INTRODUCTION ABOUT WINDOWS CE\MOBILE So, let’s start: Windows CE and Windows Mobile are not the same thing. After working on this for a while it can be obviuos, but if you’re at your first experience with these so-called “Smart Devices” then it may not be so. We must be clear about the terminology, specifically about terms like “platform”, “operating system”, “Platform Builder”, “Adaptation Kit”, “OEM”, “ODM”, etc.. In reality the true name of “Windows CE” would nowadays be “Windows Embedded CE”, however I don’t want to mess with a product that was once known as “Windows XP Embedded” and which nowadays is differentiated in the following products: So, when I’ll write “Windows CE” here I’ll mean the historical name of “Windows Embedded CE”: let’s forget the other “Windows Embedded X” products (out of my scope) and let’s concentrate on Windows CE\Mobile. Windows CE is a platform for OEMs (Original Equipment Manufacturer). This means that we provide to the manufacturer an Integrated Development Environment very similar to Visual Studio (indeed, Windows CE 6.0 is integrated with Visual Studio), but with the aim of developing Operating Systems (instead of applications), based on the platform provided by Microsoft. The tool is called "Platform Builder for Windows CE”, and up to version 5.0 was a separate tool from Visual Studio. Windows CE is a modular platform. This means that the OEM is totally free to include only the modules, drivers and applications of his interest. Microsoft provides about 90% of the source code of the Windows CE platform, as well as code examples for drivers and various recommendations (which the OEM might or might not follow). For example, if the device is not equipped with an audio output, then the OEM won’t add a sound driver. If it doesn’t have a display, then the OEM will not develop nor insert a video driver. And so on for the network connectivity, or a barcode-scanner, a camera, and so on. On a Windows CE-based device, the OEM can include whatever he wants. That's why, from the point of view of technical support to application-developers, sometimes we can’t help a programmer who is targeting a specific device whose operating system is based on Windows CE: furthermore, the OEM can decide whether to offer application-developers the opportunity to programmatically interact with special functions of the device through a so-called "Private SDK” (which may also contain a emulator image, for example). 
An important detail: differently from Windows Embedded OSs (Standard \ Enterprise \ POSReady \ NavReady \ Server), for Operating Systems based on Windows CE the OEMs actually *COMPILE* the source code of the platform (apart from about 10% provided by Microsoft and corresponding to the core kernel and other features). Now: Windows Mobile is a particular customization of Windows CE, but in this case the OEM needs to create an Operating System that meets a set of requirements, called "Windows Mobile Logo Test Kit". The tool used by Windows Mobile-OEM is called "Adaptation Kit for Windows Mobile", a special edition of "Platform Builder" and allows to adapt the "Windows Mobile”-Platform to the exact hardware that the OEM has built or that he requested to an ODM (“Original Device Manufacturer”). In the Windows Mobile’s scenario we can’t forget Mobile Operators also, which often "brand" a device requiring the OEM to include specific applications and usually configure the connectivity of the mobile network (GPRS, UMTS, WAP, etc.).. WARNING: nothing prohibits a WinMo-OEM to include special features such as a barcode-scanner or an RFID chip or anything else... the important thing is that the minimal set is the same. Moreover, also WinMo-OEM can provide a “Private SDK” to programmatically expose specific functionality related to their operating system (see for example Samsung SDK containing private APIs for the accelerometer and other features, which are documented and supported by Samsung itself). Finally, one last thing before starting talking about memory: Windows Mobile 5.0, 6, 6.1 and 6.5 are all platforms based on Windows CE 5.0. So, they all share the same Virtual Memory management mechanisms, except in some details for the latter (mainly with some benefits for application-developers). VIRTUAL MEMORY ON WINDOWS CE\MOBILE So now we can start talking about how memory is managed on Operating Systems based on Windows CE 5.0. And I’m being specific on Windows Embedded CE *5.0* because on 6.0 memory management is (finally!) totally changed and there are no more the limitations we’re going to discuss in the remainder. Incidentally, this is part of the same limitations described for Windows CE 4.2 by Doug Boling’s Windows CE .NET Advanced Memory Management, although the article is dated 2002! Fortunately, some improvements have been introduced from Windows Mobile 6 and especially in 6.1 (and therefore also in 6.5), whereby not only the applications have more virtual memory available, but also the entire operating system is more stable as a whole. I don’t want to repeat here what can be found in documentation and on various blogs: in contrast, I’d like to actually show the theory, because only by looking at data like the following you can realize what a good programmer a Developer for Windows Mobile has to be! The following is the output of a tool developed by Symbol (later acquired by Motorola), which allowed the manufacturer to understand how the device.exe process was responsible (or not) for various problems related to memory. Why? Because The result was something like the following (I obfuscated possibly sensitive data of the software-house I worked with during the Service Request): So, above all, what did Symbol\Motorola mean by “Code” (blue) and “Data” (green)? • "Code": these are the *RAM* DLLs loaded or mapped into a process-slot. They start from the top of the 32MB slot and goes down. 
If several processes use the same DLL, the second one maps it to the same address where the first one had loaded it. • "Data" is the executable’s compiled code + Heap(s) + Stack. It starts from the bottom and grows. Finally, the red vertical line represents the "DLL Load Point", i.e. the address where a DLL is loaded in case it hadn’t yet by any other process. That is the situation of only the process slots, not the whole virtual memory - in particular the contents of the Large Memory Area is not shown: Why did I specify *RAM* DLLs? Because those placed by the OEM in the ROM (= firmware) are executed directly there, without the need of the process loading their "compiled" code into its Address Space (they’re XIP DLL, i.e. "Executed in Place" – in Slot 1). That picture also shows that the green part (code + heap + stack) may exceed the DLL Load Point. Indeed, the problems related to lack of available virtual memory is usually of 2 types: That's also because in general one of the advices to avoid memory problems had always been to load all DLLs used by the application itself at application startup, through an explicit call to LoadLibrary(). Another visual example is the following: We’ll later discuss in detail the particularities of NETCF, but it's worth at this point noting a detail: apart from the actual CLR DLLs (mscoree*.dll, netcfagl*.dll), every other assembly doesn’t waste address space in the Process slot, but is loaded into the Large Memory Area. Even more, if you are using the version of the runtime included in the ROM by the OEM, also the runtime DLLs do not affect the process’ virtual memory space. Obviously it is different when the application P/Invoke native DLLs: these will be loaded in the process slot. Moreover, if you look at the picture showing all the alive processes, you’ll notice that in the upper bound of all the slots there’s a portion of "blue" virtual memory, which is the same for all the processes. This is the memory blocked by the Operating System whose size is equal to the sum of the binaries (the simple .EXE files) active at any given moment. So large monolithic EXEs (large for example because containing “many” resources) are not recommended at all on Windows CE 5.0! And in general supporting NETCF developers I can say I’ve seen many “big” applications... that is not a good practice for this reason! Through those pictures it is also easy to understand why the whole system stability is a function of all active processes, and in particular it is easy to see that very often DEVICE.EXE can be a source of headaches! Think of those Windows Mobile-based devices that have the radio stack (i.e. the phone), Bluetooth, WiFi, Camera, barcode-scanner, etc. .. each of these drivers is a DLL that device.exe has to load (blue line), and each can also create its own stack and heap (green line). Some OEMs allowed developers to programmatically disable some drivers (to reduce pressure done by device.exe), but obviously we can not take for granted for example that a user manually restarts that feature (or this is done by another application...). So, what has been done to fight device.exe’s power? In many cases, the driver-DLLs were loaded by services.exe, which is the host process for Service-DLLs on Windows CE. But very often it was not enough... What Windows Mobile 6.1 introduced is that native DLLs with size > 64KB are typically loaded into the so-called slots 60 and 61, which are part of the Large Memory Area. 
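Before looking at the other 6.1 changes, a quick practical aside of mine (not part of the original material): to get a feel for how crowded your own slot is from managed code, you can P/Invoke GlobalMemoryStatus from coredll.dll. This is a rough sketch; note that dwAvailVirtual reports the free virtual address space of the calling process, not of device.exe.

using System;
using System.Runtime.InteropServices;

public static class SlotWatcher
{
    [StructLayout(LayoutKind.Sequential)]
    private struct MEMORYSTATUS
    {
        public uint dwLength;
        public uint dwMemoryLoad;
        public uint dwTotalPhys;
        public uint dwAvailPhys;
        public uint dwTotalPageFile;
        public uint dwAvailPageFile;
        public uint dwTotalVirtual;
        public uint dwAvailVirtual;
    }

    [DllImport("coredll.dll")]
    private static extern void GlobalMemoryStatus(ref MEMORYSTATUS status);

    public static void Dump()
    {
        MEMORYSTATUS status = new MEMORYSTATUS();
        status.dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUS));
        GlobalMemoryStatus(ref status);

        // Free virtual address space left in this process slot and free RAM.
        Console.WriteLine("Available virtual bytes in this slot: " + status.dwAvailVirtual);
        Console.WriteLine("Available physical RAM: " + status.dwAvailPhys);
    }
}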
Another improvement in Windows Mobile 6.1 was to dedicate another slot (slot 59) to the driver stack (part of the Green Line to device.exe). Of course, this means that the memory-mapped files have now less space available (and I have recently handled a request for exactly this purpose, coming by a software company that was developing a GPS navigation software that could not load some map files in WinMo6.1), but in general the whole operating system has gained a stability that hadn’t before... To conclude, the tool I mentioned was developed by Symbol and I don’t think it’s publicly available. But a similar tool has recently been published on CodePlex (source code included!) through the article Visualizing the Windows Mobile Virtual Memory Monster. The term "Virtual Memory Monster" was invented years ago by Reed Robison… (part 1 e part 2). I've already been using it in a couple of requests and highly recommend it! TROUBLESHOOTING MEMORY LEAKS FOR NETCF APPLICATIONS Instead of explaining how things work in theory, which is a task I leave to more authoritative sources like the documentation itself and various blogs – one for all that of Abhinaba Basu, who is precisely the GC Guru inside the NETCF Dev Team (Back to basic: Series on dynamic memory management), I’d like to follow the troubleshooting flow I run through when a new Service Request arrives, about for example the following issues: Firstly, we must determine whether the problem is specific to an OEM. The best approach, when possible, is to verify if the error occurs even on emulators contained in the various Windows Mobile SDK. If not, the help that Microsoft Technical Support can provide is limited, as it is possible that the error is due to a customization of the Windows Mobile platform by the OEM. In this case, it may be helpful to know about what I wrote about device.exe above. Another initial step, in the case of applications NETCF v2 SP2, is to check if just running the application on NETCF v3.5 gives any improvement. There is no need to recompile the application with the Visual Studio 2008 - just like for .NET Desktop applications, add a configuration XML file in the same folder that contains the file TheApplication.exe, named TheApplication.exe.config and whose content is simply (I mentioned here): <configuration> <startup> <supportedRuntime version="v3.5.*"/> </startup> </configuration> <configuration> <startup> <supportedRuntime version="v3.5.*"/> </startup> </configuration> So, after having considered possible “trivial” causes, you can proceed to the analysis... Historically NETCF developers haven’t had an easy time in troubleshooting due to lack of appropriate tools – unlike Desktop cousins! – but over the years Microsoft has released tools that have gradually evolved over the current Power Toys for .NET Compact Framework 3.5. Apart from these you must know the(freeware!) EQATEC’s ones (Tracer and Profiler) and recently a tool on CodeProject that I mentioned earlier, that displays the status of virtual memory (VirtualMemory, with source code). Regarding power-toys, when you are dealing with a problem around memory, you have 2 of them that are of great help: the "CLR Profiler" and "Remote Performance Monitor (RPM)”. The first one is useful in visually making problems with objects’ allocation almost immediate and allows you to notice the problem in a visual way. Info on how using it are available through The CLR Profiler for the .Net Compact Framework Series Index. 
The second one provides, both in real time and through an analysis a posteriori, counters about the usage of MANAGED memory; also, through the "GC Heap Viewer” it allows not only to study the exact content of the managed heap, but also allows you to compare contents of the heap in different moments, in order to bring out a possible unexpected growth of a certain type of objects. Some images are available on Finding Managed Memory leaks using the .Net CF Remote Performance Monitor, which is useful also to get an idea about which counters are available, while a list and related explanations are provided on Monitoring Application Performance on the .NET Compact Framework - Table of Contents and Index. What I'd like to do here is not to repeat the same explanations, already detailed in the links above, but share some practical experience... For example, in the vast majority of cases I have handled about memory leaks, the problem was due to Forms (or Controls) that unexpectedly were NOT removed by the Garbage Collector. The instances of the Form class of the application are therefore the first thing to check through the Remote Performance Monitor and GC Heap Viewer. For this reason, where appropriate (e.g. if the total form are "not so many"), to avoid memory problems with NETCF applications it may be useful to adopt the so-called "Singleton Pattern": this way a single managed instance of a given form will exist throughout the application life cycle. So, supposing to be in the following situation: I used the Remote Performance Monitor and saved different .GCLOG files during normal use of the application, and thanks to the GC Heap Viewer I noticed that an unexpected number of forms stays in memory, and also that this increases during the life of the application, although there have been a certain number of Garbage Collections. Why the memory of a Form is not cleaned up by the garbage collector? Thanks to the GC Heap Viewer you can know exactly who maintains a reference to what, in the "Root View" on the right pane. Obviously knowing application’s architecture will help in identifying unexpected roots. A special consideration must be done for MODAL Forms in .NET (the dialogs, those that on Windows Mobile have the close button "Ok" instead of "X", and which permits a developer to prevent the user to return to the previous form). In many cases I have handled, the problem was simply due to the fact that the code was not invoking Close() (or. Dispose ()) after .ShowDialog(): Form2 f2 = new Form2(); f2.ShowDialog(); f2.Close(); Form2 f2 = new Form2(); f2.ShowDialog(); f2.Close(); Why should it matter? Because often (not always, for example, not when you expect a DialogResult) on Windows Mobile the user clicks on 'Ok' in the top right "close" the dialog. Also on Desktop, when a dialog is "closed" in this way the window is not closed, but "hidden"! And it could happen that the code creates a new instance of the form, without removing the old one in memory. It’s documented in “Form..::.ShowDialog Method” (the doc talks about “X” but of course for Windows Mobile refers to the 'Ok' referred to above): […]. Anyway, we assumed so far that the memory leak is MANAGED, but in reality it may be that leak is with the NATIVE resources that are used by a .NET instance, which have not been successfully released by implementing the so-called "IDisposable Pattern. 
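A practical way to make that Close()/Dispose() call hard to forget is to wrap the modal form in a C# using block. A minimal sketch, reusing the hypothetical Form2 from the example above:

using (Form2 f2 = new Form2())
{
    DialogResult result = f2.ShowDialog();
    // react to result here
}   // Dispose() runs here even if the user dismissed the dialog with 'Ok'

This guarantees the form's native resources are released as soon as the dialog is finished with, whatever path the code takes out of the block — it is simply the IDisposable pattern made explicit at the call site.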
And around this there are some peculiarities in NETCF, that Desktop-developers don’t need to worry about, particularly with respect to SQL Compact objects and "graphical" objects, i.e. classes of the System.Drawing namespace. In NETCF the Font, Image, Bitmap, Pen, Brush objects are simple wrappers around their native resources, which in the Windows CE-based operating systems are handled by the GWES (Graphics, Windowing and Event Subsystem). What does this mean? It means that in their own .Dispose() they effectively release their native resources, and therefore one *must invoke .Dispose() for Drawing objects* (or invoke methods that indirectly call it, for example .Clear() in ImageList.ImageCollection – which has not the .Dispose() itself). Note that among the counters provided by the Remote Performance Monitor, the category "Windows.Forms" contains indeed: Note that I’m not talking about only objects directly “born” as Brush, Pen, etc.. I’m talking also about those objects whose properties contain graphic objects, such as a PictureBox or ImageList (or indirectly, an ImageList of a ToolBar). So, when you close a form, remember to: this.ImageList1.Images.Clear(); this.ToolBar1.ImageList.Images.Clear(); this.PictureBox1.Image.Dispose(); //etc... this.ImageList1.Images.Clear(); this.ToolBar1.ImageList.Images.Clear(); this.PictureBox1.Image.Dispose(); //etc... Finally, still about Forms, a simple technique I often used to identify possible problems with items not properly released at the closing of a form has been to emulate user interaction by “automatic” opening and closing of the form. I’m purely talking about a test code like this:."); After running the loop N times, the Remote Performance Monitor will be of considerable help to see what is going wrong... A final note before concluding this paragraph. It may be that an application is so complex to require "a lot of" virtual memory. This would not be a problem, as long as there is room for the lines "green" in my previous post. But requiring "a lot of" memory means that the Garbage Collector will get kicked more frequently, thus impacting general application performance (because the GC must first “lock” the threads in a safe state). The point is that if the application is so complex to require too frequent use of garbage collection (and therefore performance may not be acceptable by end-users), then it might be worthwhile to split the application into 2 parts, such as one for memory dog-guard and another for the user interface. This process at the cost of an additional process slot, but often it is something that can be paid. Or, since the managed DLLs are loaded in the Large Memory Area without wasting precious process’ address space, an idea would be to place all classes, even those of the form, not in the EXE but in the DLLs! A simple yet very effective idea, which Rob Tiffany has discussed about in his post MemMaker for the .NET Compact Framework. Enjoy! ~raffaele
http://blogs.msdn.com/b/raffael/archive/2009/11.aspx
CC-MAIN-2014-41
en
refinedweb
celNopActivationFunc< T > Class Template Reference An identity activation function for the neural network property class. #include <propclass/neuralnet.h> Detailed Description template<typename T> class celNopActivationFunc< T > An identity activation function for the neural network property class. It is selected by the name "TYPE.none", where TYPE is either int8, int16, int32, uint8, uint16, uint32 or float. Definition at line 327 of file neuralnet.h. Member Function Documentation The callback method. Performs the activation function on the data. Implements celNNActivationFunc. Definition at line 330 of file neuralnet.h. Returns the type of data upon which this function operates. Implements celNNActivationFunc. Definition at line 331 of file neuralnet.h. The documentation for this class was generated from the following file: - propclass/neuralnet.h Generated for CEL: Crystal Entity Layer 2.1 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api/classcelNopActivationFunc.html
CC-MAIN-2014-41
en
refinedweb
Paul Russell wrote: > > * Berin Loritsch (bloritsch@apache.org) wrote : > > I discovered something incredible. The XSP system is our major > > performance sink-hole. This I find to be amazing. > > > I have my suspiscions as to where the problems may lie: Class > > validation (is it current?) and sending too many namespace events. I > > am going to try running reading a normal file through the > > LogTransformer, and then an XSP file through the same LogTransformer. > > I have a feeling that those two areas are are major performance > > bottlenecks. > > Interestingly, we discovered something similar a long time ago in > Luminas. Probably because we are *very* heavy on namespaces (a lot of > our pages have 10-15 namespaces floating around in them). The current > XSP implementation does an awful lot of prefix mapping changes. In fact, > we discovered that in a number of instances, _over half_ of the > generated code was concerned with adding and removing prefix mappings. > This is clearly not sensible. I'm not yet sure how to avoid this - I > think we may have to use extension functions to keep track of which > namespaces we've already defined. I just noticed that the ServerPagesGenerator caches the SAX results with a Stack. Is this really necessary? If an exception occurs, we should just throw a SAXException or ProcessingException like the rest of the system.
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200102.mbox/%3C3A8B0375.8273E5A2@apache.org%3E
CC-MAIN-2014-41
en
refinedweb
16 November 2009 17:20 [Source: ICIS news] LONDON (ICIS news)--A study which found that young boys exposed to high doses of certain phthalates, common chemicals present in PVC, would show less male-typical behaviour should be treated with extreme caution, an industry body said on Monday. The research was carried out at the University of Rochester Medical Center (URMC). "I don't think anyone should jump to conclusions without some much more sophisticated research being carried out," said Tim Edgar, an advisor at the European Council for Plasticisers and Intermediates (EPCI). The URMC research team, led by Dr Shanna Swan, had previously shown that phthalate exposure during pregnancy could affect the development of genitals in baby boys. The new research found that certain phthalates could impact the developing brain by knocking out the action of the male hormone testosterone. "Dr Swan has chosen a very simplistic approach and has used the same relatively small sample of children she used in previous non-replicated and non-verified studies purporting to show different effects," Edgar added. Phthalates have been banned in toys and childcare items in the EU for some years but are still widely used in many different household items. Swan and her team tested urine samples from mothers in the 28th week of pregnancy for traces of phthalates and then followed up with the same women, who gave birth to 74 boys and 71 girls, when their children were aged 3.5 to 6.5, asking about the toys that their youngsters had played with. The findings were published in the International Journal of Andrology. "This study shows once more that Dr Swan uses unproven methods to compile questionable data to reach conclusions that are consistent with her well-publicised opinion, which is not based on the weight of the scientific evidence surrounding the safety of phthalates. Dr Swan's recognition that the study results are 'not straightforward' is an understatement," said Steve Risotto, senior director, phthalate esters at the American Chemistry Council. "It appears that the researchers selectively excluded data, eliminating certain subjects from the analysis, in order to strengthen their conclusion. Even the phraseology of the paper is more sensationalistic than scientific." Chemicals campaign group CHEM Trust said that it was concerned by the results. "We now know that phthalates, to which we are all constantly exposed, are extremely worrying from a health perspective, leading to disruption of male reproductive health and, it appears, male behaviour too," said Elizabeth Salter-Green, director of the CHEM Trust. "This feminising capacity of phthalates makes them true 'gender benders,"
http://www.icis.com/Articles/2009/11/16/9264298/new-phthalates-health-study-to-be-treated-with-caution-epci.html
CC-MAIN-2014-41
en
refinedweb
My name is Kathy Kam and I am the newest addition to the Common Language Runtime (CLR) Program Management (PM) team. Like another PM on my team, JoelPob, I also grew up in the "Land Down Under". I left Sydney to pursue a degree in Computer Engineering and Mathematics at the University of Michigan - Ann Arbor. Upon graduation, I joined Microsoft as a developer for Microsoft Office Outlook. Four years later, after shipping Microsoft Office System 2003, a handful of Service Packs and working on Office 12 for two years, I decided to become a PM on the CLR team and here I am, writing my first post! This blog will be a record of my insights to the .NET world. In the computing community, the first thing any developer writes is a "Hello World" program. Since this is a blog for all the computer geeks in us. Here it is, "Hello World" Reflection style: using System; using System.Reflection; namespace HelloWorld { class Program { static void Main(string[] args) { foreach (FieldInfo fi in typeof(HelloObj).GetFields()) Console.Write(fi.Name + " "); } } class HelloObj { // My output public string Hello; public int World; public bool from; public int[] Kathy; public float Kam; } } The output will be: > Hello World from Kathy Kam lol. That is one of the cleverest ways I’ve seen to do "Hello World". Hi! Just to be the wiseguy that I am, I have to point out that your code is broken. It depends on Type.GetFields() to return the fields in a particular order, but the order is not guaranteed. Say hi to Joel for me 🙂 Hi Kathy, The geek world welcomes you. BTW, do you really need the public fields ? And the lower case in "from".. hum.. And "HelloObj" ? It is an object already ? 🙂 The FxCop team will ban this hello, no ? Just kidding 🙂 Nice way to do a hello world Kathy,All the best at the CLR team Hello! =) Happy to see that you’ve started a blog, another one for my list and more information to gather 🙂 Marlun, Sweden Hi Hugo, Yeah, I was not following the Design Guidelines. My bad. I made "from" lower case so that it spells it out like a sentence correctly. Regarding "HelloObj"… old habits die hard. Cheers, Kathy Hi Kathy, you’re welcome. The first question that came to my mind when I was looking at your program is why ‘Kathy’ field is an array? Is there any special meaning in this? Sorry if I’m missing something obvious 🙂 Regards, Dmitry Kathy, aren’t you heared about new Lambda-style programming? Here is more modern way to do things 😉 static void Main(string[] args) { List<FieldInfo> fields = new List<FieldInfo>(typeof(HelloObj).GetFields()); List<string> fieldNames = fields.ConvertAll<string>( delegate(FieldInfo f) { return f.Name+" "; }); fieldNames.ForEach(Console.Write); } Or even shorter: static void Main(string[] args) { new List<FieldInfo>(typeof(HelloObj).GetFields()) .ConvertAll<string>(delegate(FieldInfo f) { return f.Name+" "; }) .ForEach(Console.Write); } Hi Kathy, Congrats on your new position. That was certainly a creative way to Hello the world. By the way, since you just joined the team, the interview experience must be fresh in your mind. Can you blog about your PM interview experience in the CLR team? Thanks SN Hello, sorry, but how can one really get the order in which FieldInfoS are returned by Type.GetFields(), as <a href="">Jeroen Frijters</a> pointed? This is an interesting question and msdn keeps silent … Thank you Cool ! 
Could we force the field layout by: [StructLayout(LayoutKind.Sequential)] class HelloObj { … } Or does StructLayout only have an effect when the object is actually marshaled to unmanaged code?
https://blogs.msdn.microsoft.com/kathykam/2005/09/21/hello-world-reflection-style/
CC-MAIN-2017-43
en
refinedweb
I wouldn't be surprised to find that the "void" has been inserted by some well-meaning IDE and has been overlooked by some lenient compiler, and Jikes is about the least lenient compiler I've seen (although that's not a bad thing). My advice is to remove the keyword "void" and see if the problem goes away. The other problem is one I've noticed before too. Quite simply, the class "Attributes" exists in both of the imported packages and the compiler doesn't know which one to use. One reason for this could be that you are using a different version of the APIs involved that happens to have a name clash. Another reason is that Sun's Javac tends to pick out one of the candidates and run with it, whereas more careful compilers such as IBM's Jikes spot the problem and leave it to you to sort out. The solution to either of these is to import the class explicitly with one of the following lines near the bottom of your import statements. import javax.naming.directory.Attributes; import org.xml.sax.Attributes; Which one to use involves a bit of guesswork, although since there are only two options trial and error should do the trick; a fuller sketch follows below. Good luck Rob
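For completeness, the other way out of the clash is to keep both wildcard imports and fully qualify the ambiguous type where it is used. A small illustrative class (the handler name is made up for this sketch):

import javax.naming.directory.*;              // brings in javax.naming.directory.Attributes
import org.xml.sax.*;                         // brings in org.xml.sax.Attributes
import org.xml.sax.helpers.DefaultHandler;

public class MyHandler extends DefaultHandler {
    // Fully qualifying the parameter type resolves the ambiguity without
    // having to drop either wildcard import.
    public void startElement(String uri, String localName, String qName,
                             org.xml.sax.Attributes attrs) {
        // handle the element here
    }
}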
http://mail-archives.apache.org/mod_mbox/tomcat-users/200009.mbox/%3C20000904095709.26667.qmail@web2106.mail.yahoo.com%3E
CC-MAIN-2015-18
en
refinedweb
NAME vga_screenoff, vga_screenon - turn generation of the video signal on or off SYNOPSIS #include <vga.h> int vga_screenoff(void); int vga_screenon(void); DESCRIPTION The functions turn the generation of the video signal on or off. On some video boards, SVGA memory access is faster when no video signal is generated. On the other hand, not all boards allow to disable it (at least not in all modes). The functions always return 0 (on which you should probably not really rely). SEE ALSO svgalib(7), vgagl(7), libvga.config(5), vga_init(3), vga_setmode(3), vga_clear.
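A minimal usage sketch (the mode G320x200x256 is an arbitrary choice; the program must be linked against libvga and run with the usual svgalib privileges):

#include <vga.h>

int main(void)
{
    vga_init();                 /* initialize svgalib first, see vga_init(3) */
    vga_setmode(G320x200x256);

    vga_screenoff();            /* blank the video signal while filling SVGA memory */
    /* ... draw into video memory here ... */
    vga_screenon();             /* re-enable the signal to show the finished frame */

    vga_setmode(TEXT);          /* restore text mode before exiting */
    return 0;
}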
http://manpages.ubuntu.com/manpages/karmic/man3/vga_screenon.3.html
CC-MAIN-2015-18
en
refinedweb
On Tue, Dec 02, 2008 at 01:17:37PM +0000, Justin B Rye wrote: > Christian Perrier wrote: > >. > > (This seems like namespace-hogging to me - thanks to cron-apt I've > been happily doing periodic unattended upgrades since before Ubuntu > even existed...) Fair enough, it would be easy enough to have a apt::periodic::unattended-upgrade-command option in apt that can be used to set the command if that is a concern. Unattended-upgrades was designed as a simple and secure way to do unattended upgrades (with the main use-case being security upgrades), its not by far as flexible as cron-apt. It has some nice features (like detection of conffile prompts and ensuring that only stuff from pre-configured origins gets upgraded etc). But I guess this is not the right place to discuss the merits of the two implementations .) Cheers, Michael
https://lists.debian.org/debian-l10n-english/2008/12/msg00009.html
CC-MAIN-2015-18
en
refinedweb
Hello, :: TERMINOLOGY :: watch : data that describes a file or directory that should be audited watchlist : a linked list of watchlist entries residing on a directory watchlist entry (wentry): an entry to a watchlist that contains a watch :: INTRODUCTION :: In an effort to make the mainline kernel's audit subsystem Controlled Access Protection Profile (CAPP)/Evaluation Assurance Level (EAL) 4 compliant, this patch adds file system auditing support to the audit subsystem. Such support is essential in meeting certification requirements because it allows the evaluator to confirm all claims made about the Target of Evaluation (TOE) regarding the behavior of file system objects (which are outlined in the Security Target for the given evaluation) by consulting the audit log. To achieve such results, it's necessary for the audit subsystem to identify and keep track of such objects. Due to the abstract nature of "identity" with regards to file system objects and how that "identity" translates between the user's perspective and the kernel's perspective and visa-versa, a fairly strict definition is devised. This implementation uses a scheme by which parent directories have a "watchlist" that qualifying children may point into at the "watchlist entry" that holds their "watch". Pointing at a "watchlist entry" translates into "being watched". It is also important to keep in mind that in a CAPP environment, we assume the administrator to be benign and that we are preventing subversion of the audit subsystem for the purpose of evaluation and not user/process malice. This component is not designed for filesystem notifications, process/user snooping, intrusion detection, etc. :: DESCRIPTION :: Below is a basic description of this patches capabilities. These capabilities are enabled by the user space program, auditctl, which is available in the audit package (found at:). 1. Insertions When the administrator targets a file system object for audit, they do so by <path> name. This is an absolute target -- meaning, the administrator targets the file system object by name, on a given device, in a given namespace. Provided the parent directory of the targeted object exists, we add to it's "watchlist" the "watch" for our targeted object. Thus, all information about the watched object is stored on inodes, in memory, and not on disk. When adding a "watch" at <path>, the terminating file or directory at <path> need not exist. (ie: if we wish to watch /tmp/foo, /tmp must exist, but 'foo' does not have too). This is reasonable in a CAPP environment. 2. Removal Likewise to inserting watches, we may remove a watch in the same fashion. If the terminating file or directory name was found in its parent's watchlist, the corresponding "watchlist entry" is unhashed. Once this "watchlist entry" is unhashed, it becomes invalid (ie: it may be overwritten and will no longer generate audit records). 3. Listings It'd be helpful for the administrator to be able to determine what watches already exist directly under a directory, on a given device, in a given namespace. To do so, the admistrator must target a specific directory via a path (using the given device, in the given namespace) and a list of any watches in that directory's watchlist will be returned. 4. Hooks To make this all work, there are three sets of hooks the audit subsystem uses. 1. The first set of hooks manage the inode's audit_data memory. Two hooks: one to allocate memory and one to free memory. 2. 
The second set of hooks is used in the dcache to attach watches to a dentry's inode->i_audit->wentry field (ie: these hooks are responsible for *watching* a file system object). Creation: We use hooks in d_instantiate() and d_splice_alias() to immediately attach watches, if they exist, to newly created / spliced dentries. Watch/removal: We use the __d_lookup() hook for two reasons: to assign a new "watch", if one exists at this location (ie: a hardlink that's just become "unwatched" exists in a location that has a "watch") and to detach unhashed (invalid) watchlist entries (wentries) on inodes. Deletion: The d_delete() hook is used to drain watchlists and detach from a "watch". We've effectively left the "watch". Movement: The d_move() hook is used to remove the "watch" and drain the "watchlist" from a dentry prior to "moving" it (leaving the "watch"), and then attach to it, a new "watch", if the location it's now at is being "watched". 3. The third set of hooks are all used to notify the audit subsystem about access to a "watched" object. These hooks tell the audit subsystem to generate a record. Permissions: This is a good junction to place a hook that generates audit records. These functions are consulted before we commit to action, thus, whether we fail or not, we get records. Not always can we map a permissions check one-to-one with a watched file system object (ie: unlink), thus other hooks are required. An added benefit of hooking permission functions, is the ability to "watch" the parent directory of a "watched" file, to see how it was consulted when attempting access of the "watched" file. We have hooks at permission() and exec_permission_lite(). Creation: For creation, we have hooks in vfs_link()/symlink()/create()/mkdir()/mknod(). Once we have the inode (post creation), and we're attached (post audit_attach_watch), we want to generate a record. Deletion: For deletion, we hook may_delete(). We do so because vfs_unlink()/rmdir() both make use of this function; it is a good junction. Rename: For renaming, we hook vfs_rename_other()/rename_dir() to genereate audit records describing the rename in to a "watched" location, and rely on the may_delete() hook to give us an audit record describing the rename out of a "watched" location. Open: I think these hooks can be dropped. Will do before we send out to linux-fsdevel. 5. Notable Behavior This system allows for only one type of implicit watch; hardlinks. One may create a hardlink to a "watched" file and it too, will be watched. They can "move" this hardlink around, and it will remain watched. This is because both the watched object and the hardlink share the same inode. However, should the "watched" object (ie: the dentry belonging to this inode that meets the aforementioned criteria) no longer meet this criteria, the hardlink will no longer be attached to this "watch" -- In fact, the next time the inode is accessed, should a hardlink exist in another "watched" location, the inode would attach to this "watch" (See, 4. Hooks). This makes sense, but in a subtle way. If we create a dentry, such that we become watched again, even though there are now at least two files on the system that could contain the same content, our one time hardlink, has effectively become a separate object to us. Thus it is important to realize that we are not auditing access to specific content. 
This being said, if we decide to "move" in any way, out of a "watched" location, we lose the "watch" -- Thus, if we: mv, cp, rm (or use their underlying syscalls), we'll lose the "watch" and thus, we will no longer be audited. It's important, however, to keep in mind that we will get final records based on these actions (ie: if we do mv /tmp/foo to /tmp/bar and /tmp/foo is being watched, we will see a record for the rename out of /tmp/foo. And, if we do mv /tmp/bar /tmp/foo, we will see a record for the rename into /tmp/foo). -tim
https://www.redhat.com/archives/linux-audit/2005-March/msg00237.html
CC-MAIN-2015-18
en
refinedweb
Neat :-) I was wanting to apply this to a field, which sorts on INT.?? Is there a case for... public class Sorting { public static SortField getSortField(String fieldName, int type, boolean reverse, boolean nullLast, boolean nullFirst) { // ... } } ....and handling all feasible SortField types? -----Original Message----- From: Yonik Seeley [mailto:yseeley@gmail.com] Sent: 14 July 2006 18:30 To: java-user@lucene.apache.org Subject: Re: MissingStringLastComparatorSource and MultiSearcher On 7/14/06, Rob Staveley (Tom) <rstaveley@seseit.com> wrote: > Chris Hostetter and Yonik's MissingStringLastComparator looks like a > neat way to specify where to put null values when you want them to > appear at the end of reverse sorts rather than at the beginning, but I spotted the note... > > // Note: basing lastStringValue on the StringIndex won't work > // with a multisearcher. > > Is that a show-stopper for MultiSearchers, or does it just mean that > it is a bit less efficient? Short answer: it should work for 99.99999% of indicies :-) That comment just related to the original code that's now commented out that based the sort-value for missing values on the largest item in the index. To fix that, missingValueProxy was added and defaulted to bigString. That's what will be used to collate results in a multisearcher when the field value is missing. So this scheme will only fail if you have field values that compare bigger than bigString (or whatever you pass in as missingValueProxy). See the code below: public static final String bigString="\uffff\uffff\uffff\uffff\uffff\uffff\uffff\uffffNULL_VAL"; private final String missingValueProxy; public MissingStringLastComparatorSource() { this(bigString); } /** * Returns the value used to sort the given document. The * object returned must implement the java.io.Serializable * interface. This is used by multisearchers to determine how to collate results from their searchers. * @see FieldDoc * @param i Document * @return Serializable object */ /** Creates a {@link SortComparatorSource} that uses <tt>missingValueProxy</tt> as the value to return from ScoreDocComparator.sortValue() * which is only used my multisearchers to determine how to collate results from their searchers. * * @param missingValueProxy The value returned when sortValue() is called for a document missing the sort field. * This value is *not* normally used for sorting, but used to create */ public MissingStringLastComparatorSource(String missingValueProxy) { this.missingValueProxy=missingValueProxy; } -Yonik Solr, the open-source Lucene search server --------------------------------------------------------------------- To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org For additional commands, e-mail: java-user-help@lucene.apache.org
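To make the discussion concrete, a hypothetical usage sketch of the comparator source under discussion (the "title" field is a placeholder, and the imports come from org.apache.lucene.search plus the Solr class itself; 1.9/2.0-era API assumed):

Hits searchMissingLast(Searcher searcher, Query query) throws IOException {
    // Sort on "title" in reverse order, collating documents that have no
    // value for the field after the ones that do.
    SortField titleField =
        new SortField("title", new MissingStringLastComparatorSource(), true);
    return searcher.search(query, new Sort(titleField));
}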
http://mail-archives.apache.org/mod_mbox/lucene-java-user/200607.mbox/%3C20060714175035.43DC0187BE@mail.seseit.com%3E
CC-MAIN-2015-18
en
refinedweb
Opened 6 years ago Last modified 18 months ago #10244 new Bug FileFields can't be set to NULL in the db Description Saving FileFields with a none value sets the field to a empty string in the db and not NULL as it should. Attachments (4) Change History (19) comment:1 Changed 6 years ago by oyvind - Needs documentation unset - Needs tests unset - Patch needs improvement unset comment:2 Changed 6 years ago by oyvind Example to test this issue: class File(models.Model): file = models.FileField(null=True, upload_to='files') from models import File f = File() f.file = None f.save() from django.db import connection print connection.queries Changed 6 years ago by oyvind Patch that solves this issue, FieldFile's name can now be None also "is" and "==" is not the same Changed 6 years ago by Alex comment:3 Changed 6 years ago by Alex - Triage Stage changed from Unreviewed to Design decision needed needs to be decided whether the CharField behavior should be used throughout, or NULL where explicitly requested Changed 6 years ago by oyvind Meade FieldFile set name to u on delete, made test show filefields and charfields act the same comment:4 Changed 6 years ago by kmtracey Dev list discussion: Also, #10749 reported surprise at searching for None not working when looking for empty ImageFields. I still think changing this now breaks backwards-compatibility, though. comment:5 Changed 6 years ago by thejaswi_puthraya - Component changed from Uncategorized to Database layer (models, ORM) comment:6 Changed 4 years ago by SmileyChris - Severity set to Normal - Type set to Bug comment:7 Changed 3 years ago by aaugustin - Easy pickings unset - Triage Stage changed from Design decision needed to Accepted - UI/UX unset I see two possible solutions: - 1. document that FileField ignores the null keyword argument; this is the most backwards compatible solution; - 2. make FileField behave exactly like CharField wrt. null; this is the most consistent solution. Option 2. will be backwards incompatible only for people who set null=True on a FileField. The docs discourage that heavily, and it doesn't behave as expected, actually it doesn't even do anything, so there isn't much reason to have such code. That's why I think the change would be acceptable. I have a slight preference for option 2, which is better in the long term. comment:8 Changed 18 months ago by siroky@… Hi! What is the status of this ticket? I see it is 5 years old ticket and the problem is still present at least in 1.5.1. comment:9 Changed 18 months ago by aaugustin It's waiting for someone to write a patch. Changed 18 months ago by siroky@… None/NULL for FileField (v1.5.1) comment:10 Changed 18 months ago by siroky@… How about the filefield_null.patch? It follows solution 2. comment:11 Changed 18 months ago by claudep - Has patch set - Needs documentation set - Needs tests set Thanks, it's a first step :-) However, tests are crucial with such a change. That will be the next step. See FileFieldTestsin tests/model_fields/tests.py. comment:12 Changed 18 months ago by anonymous How can I run the FileFieldTests without the whole suite? comment:13 Changed 18 months ago by timo comment:14 Changed 18 months ago by siroky@… I bumped into a problem. If instance.null_file_field is None then methods like save() will not work (AttributeError: 'NoneType' object has no attribute 'save'). I'm thinking about 2 solutions: 1) Make the descriptor return value comparable to None (via __eq__). 
This is not very clean because the best practice is to use the operator is (is_none = x is None). 2) Keep the empty string ("") on the Python side as a representation of the file's database NULL. Not very consistent but it has better backward compatibility if someone already uses the string comparison (has_no_file = filefield.name == ""). Any suggestions? comment:15 Changed 18 months ago by anonymous So? :-) Patch does not seem to work quite like I expected. So the problem persists.
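For anyone hitting this today, a short sketch of the empty-string behaviour the ticket describes (the model and field names are invented; 1.5-era ORM assumed):

from django.db import models

class Document(models.Model):
    file = models.FileField(upload_to='files', blank=True)   # null=True is discouraged and ignored

doc = Document.objects.create()              # stored with file = '' in the database, not NULL
print(bool(doc.file))                        # False -- the reliable "no file attached" test
missing = Document.objects.filter(file='')   # how to query for rows without a file today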
https://code.djangoproject.com/ticket/10244
CC-MAIN-2015-18
en
refinedweb
). The accessor and mutator methods read from and write to the NSUserDefaults object directly, with setPreferencesTwo using an assertion to verify that the supplied value is legal. ). Build and run the NewPreferencesExample application in Xcode. If everything goes to plan, a window will appear in screen. Hit Command- Q to exit the application. There's nothing more to see here--please move along. The first step is to add the SS_PrefsController source files, SS_PrefsController.h, SS_PrefsController.m, and SS_PreferencePaneProtocol.h, to our XCode project. I prefer to put these in the Other Sources group. Now open the SS_PreferencePaneProtocol.h file. It contains seven methods, with the following signatures: + (NSArray *)preferencePanes; - (NSView *)paneView; - (NSString *)paneName; - (NSImage *)paneIcon; - (NSString *)paneToolTip; - (BOOL)allowsHorizontalResizing; - (BOOL)allowsVerticalResizing; SS_PrefsController requires each preference pane to implement the SS_PreferencePaneProtocol protocol. In the example that comes with SS_PrefsController, each preference pane implementation (controller) class implements this protocol anew, with the values for the pane name, icon, tooltip, and other properties hardcoded in the pane controller implementation. Inspecting the source for the SS_PrefsController class, we see that it enforces the rule that preference pane controllers must implement the SS_PreferencePaneProtocol only in the designated initializer ( - (id)initWithPanesSearchPath:(NSString*)path bundleExtension:(NSString *)ext). Instead of implementing the SS_PreferencePaneProtocol protocol in our PreferencePaneControllers, we will provide a parent class that replicates the functionality of the SS_PreferencePaneProtocol, from which our PreferencePaneControllers will inherit. Here is the header for our NPEPreferencePaneController class: #import <Cocoa/Cocoa.h> #define PANE_NAME_KEY @"paneName" #define ICON_PATH_KEY @"iconPath" #define TOOL_TIP_KEY @"toolTip" #define HELP_ANCHOR_KEY @"helpAnchor" #define ALLOWS_HORIZONTAL_RESIZING_KEY @"allowsHorizontalResizing" #define ALLOWS_VERTICAL_RESIZING_KEY @"allowsVerticalResizing" @interface NPEPreferencePaneController : NSObject{ id _controller; NSString *_nibName; NSString *_paneName; NSString *_iconPath; NSString *_toolTip; NSImage *_paneIcon; BOOL _allowsHorizontalResizing; BOOL _allowsVerticalResizing; NSString *_helpAnchor; IBOutlet NSView *_prefsView; } - (id) initWithNib:(NSString *)nibName dictionary:(NSDictionary *)dictionary controller:(id)controller; - (void) dealloc; - (id) controller; - (void) showWarningAlert:(NSError *) error; - (IBAction) showHelp:(id) sender; // SS_PreferencePaneProtocol // we don't need this. // + (NSArray *)preferencePanes; - (NSView *)paneView; - (NSString *)paneName; - (NSImage *)paneIcon; - (NSString *)paneToolTip; - (BOOL)allowsHorizontalResizing; - (BOOL)allowsVerticalResizing; @end As you can see, we have class variables for each of the properties that a pane controller requires. We also have a pointer to a controller instance, and to an NSView object--the NSView that will contain the controls for our preference pane. The methods for our NPEPreferencePaneController closely mimic those of the SS_PreferencePaneProtocol, with the following exceptions: we have eliminated the preferencePanes method, as we will not need it, and we have added showWarningAlert and showHelp methods. We will describe these later on. 
We have also added an initializer that takes as its arguments the name of the nib for the preference pane, a dictionary (from which we will read the other parameters), and a pointer to a controller--in our case, this will be our NPEController instance. Our implementation for NPEPreferencePaneController is shown below: #import "NPEPreferencePaneController.h" @implementation NPEPreferencePaneController - (id) initWithNib:(NSString *)nibName dictionary:(NSDictionary *)dictionary controller:(id)controller{ if(! (self = [super init])) return nil; _nibName = nibName; _paneName = [dictionary objectForKey:PANE_NAME_KEY]; _iconPath = [dictionary objectForKey:ICON_PATH_KEY]; _toolTip = [dictionary objectForKey:TOOL_TIP_KEY]; NSAssert(_nibName && _paneName && _iconPath && _toolTip, @"Dictionary does not contain a nibName, paneName, iconPath, or toolTip entry."); [_nibName retain]; [_paneName retain]; [_iconPath retain]; [_toolTip retain]; _helpAnchor = [[dictionary objectForKey:HELP_ANCHOR_KEY] retain]; _allowsHorizontalResizing = [@"YES" isEqualToString:[dictionary objectForKey:ALLOWS_HORIZONTAL_RESIZING_KEY]]; _allowsVerticalResizing = [@"YES" isEqualToString:[dictionary objectForKey:ALLOWS_VERTICAL_RESIZING_KEY]]; _controller = [controller retain]; return self; } - (void) dealloc{ [_nibName release]; [_paneName release]; [_iconPath release]; [_toolTip release]; [_paneIcon release]; [_helpAnchor release]; [_controller release]; [super dealloc]; } - (id) controller{ return _controller; } - (void) showWarningAlert:(NSError *) error{ NSAssert(_prefsView != nil, @"prefsView was nil"); NSAlert *alert = [NSAlert alertWithError:error]; if(_helpAnchor != nil){ [alert setShowsHelp:YES]; [alert setHelpAnchor:_helpAnchor]; } [alert setAlertStyle:NSWarningAlertStyle]; [alert beginSheetModalForWindow:[_prefsView window] modalDelegate:self didEndSelector:nil contextInfo:nil]; } - (IBAction) showHelp:(id) sender{ NSAssert(_helpAnchor, @"Help anchor was not set"); [[NSHelpManager sharedHelpManager] openHelpAnchor:_helpAnchor inBook:nil]; } - (NSView *)paneView{ BOOL loaded = YES; if(! _prefsView) loaded = [NSBundle loadNibNamed:_nibName owner:self]; if(loaded) return _prefsView; return nil; } - (NSString *)paneName{ return _paneName; } - (NSImage *)paneIcon{ if(_paneIcon == nil){ _paneIcon = [[NSImage alloc] initWithContentsOfFile:[[NSBundle bundleForClass:[self class]] pathForImageResource:_iconPath]]; } return _paneIcon; } - (NSString *)paneToolTip{ return _toolTip; } - (BOOL)allowsHorizontalResizing{ return _allowsHorizontalResizing; } - (BOOL)allowsVerticalResizing{ return _allowsVerticalResizing; } @end With the exception of the initializer, and the showHelp and showWarningAlert methods, this is lifted almost wholesale from the example SS_PreferencePaneProtocol implementations that come with the SS_PrefsController examples. Our NPEPreferencePaneController class will not work with SS_PrefsController. To get it to work, we will subclass SS_PrefsController, and add the necessary code in our subclass. We do need to make one tiny hack to the SS_PrefsController code directly, though. The init method for SS_PrefsController calls the designated initializer: - (id)initWithPanesSearchPath:(NSString*)path bundleExtension:(NSString *)ext. We want to avoid this, so we will comment out the init method in SS_PrefsController. 
Our NPEPrefsController class header looks like this: #import <Cocoa/Cocoa.h> #import "SS_PrefsController.h" #import "NPEPreferencePaneController.h" @interface NPEPrefsController : SS_PrefsController { } - (id)initWithPanesSearchPath:(NSString*)path bundleExtension:(NSString *)ext controller:(id)controller; - (void)activatePane:(NSString*)path controller:(id)controller; - (NSArray *)toolbarSelectableItemIdentifiers:(NSToolbar *)toolbar; - (void)showPreferencesWindow; - (void)showPreferencePane:(NSString *) paneName; @end The underlying implementation looks like this: #import "NPEPrefsController.h" @implementation NPEPrefsController // Designated initializer - (id)initWithPanesSearchPath:(NSString*)path bundleExtension:(NSString *)ext controller:(id)controller; { if (self = [super init]) { [self setDebug:NO]; preferencePanes = [[NSMutableDictionary alloc] init]; panesOrder = [[NSMutableArray alloc] init]; [self setToolbarDisplayMode:NSToolbarDisplayModeIconAndLabel]; #if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_2 [self setToolbarSizeMode:NSToolbarSizeModeDefault]; #endif [self setUsesTexturedWindow:NO]; [self setAlwaysShowsToolbar:NO]; [self setAlwaysOpensCentered:YES]; if (!ext || [ext isEqualToString:@""]) { bundleExtension = [[NSString alloc] initWithString:@"preferencePane"]; } else { bundleExtension = [ext retain]; } if (!path || [path isEqualToString:@""]) { searchPath = [[NSString alloc] initWithString:[[NSBundle mainBundle] resourcePath]]; } else { searchPath = [path retain]; } // Read PreferencePanes - this is where we differ from SS_PrefsController if (searchPath) { NSEnumerator* enumerator = [[NSBundle pathsForResourcesOfType:bundleExtension inDirectory:searchPath] objectEnumerator]; NSString* panePath; while ((panePath = [enumerator nextObject])) { [self activatePane:panePath controller:controller]; } } return self; } return nil; } - (void)activatePane:(NSString*)path controller:(id)controller{ NSBundle* paneBundle = [NSBundle bundleWithPath:path]; NSAssert1(paneBundle != nil, @"Could not initialize bundle: %@", paneBundle); NSDictionary* paneDict = [paneBundle infoDictionary]; NSString* paneClassName = [paneDict objectForKey:@"NSPrincipalClass"]; NSAssert1(paneClassName != nil, @"Could not obtain name of Principal Class for bundle: %@", paneBundle); Class paneClass = NSClassFromString(paneClassName); NSAssert2(paneClass == nil, @"Did not load bundle: %@ because its Principal Class %@ was already used in another Preference pane.", paneBundle, paneClassName); paneClass = [paneBundle principalClass]; NSAssert2([paneClass isSubclassOfClass:[NPEPreferencePaneController class]], @"Could not load bundle %@ because it Principal Class %@ is not a subclass of NPEPreferencePaneController", paneBundle, paneClassName); NSString *nibName = [paneDict objectForKey:@"NSMainNibFile"]; NSAssert1(nibName, @"Could not obtain name of nib for bundle: %@", paneBundle); NPEPreferencePaneController *aPane = [[paneClass alloc] initWithNib:nibName dictionary:paneDict controller:controller]; if(aPane != nil){ [panesOrder addObject:[aPane paneName]]; [preferencePanes setObject:aPane forKey:[aPane paneName]]; [aPane release]; } } #if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_3 - (NSArray *)toolbarSelectableItemIdentifiers:(NSToolbar *)toolbar{ return panesOrder; } #endif // make sure we get a highlighted icon first time around. 
- (void)showPreferencesWindow{ [super showPreferencesWindow]; [prefsToolbar setSelectedItemIdentifier:[prefsWindow title]]; } // make sure we get a highlighted icon when activating a pane programmatically. - (void)showPreferencePane:(NSString *) paneName{ [self loadPreferencePaneNamed:paneName]; [prefsToolbar setSelectedItemIdentifier:paneName]; } @end The initWithPanesSearchPath and activatePane methods are lifted almost wholesale from SS_PrefsController. The only significant changes that we have made are to ensure that the preference pane controller classes are subclasses of our NPEPreferencePane controller class, instead of conforming to the SS_PreferencePaneProtocol, and to add an additional controller parameter to each method, so that we can pass a handle to the controller down to the preference pane controller implementations. The remaining three functions are all related to icon highlighting: toolbarSelectableItemIdentifiers is an NSToolBar delegate method, that returns an array of pane name, which should be highlighted when they are selected (all of them, in our case), and we override the showPreferencesWindow method of SS_PrefsController to ensure that our initially selected pane's icon is highlighted. Finally, the showPreferencesPane method is unique to NPEPrefsController--it allows us to display a pane programmatically, and handles the correct icon highlighting. Note that highlighting only works in Mac OS X 10.3 or later. To add NPEPrefsController to our application, we will add a single variable and method to our NPEController class. The variable _prefsController is a pointer to an NPEPrefsController object (so we also need to import NPEPrefsController.h). The method, showPrefs, is listed below: - (IBAction) showPreferences:(id) sender{ if(! _prefsController){ NSString *path = nil; NSString *ext = nil; _prefsController = [[NPEPrefsController alloc] initWithPanesSearchPath:path bundleExtension:ext controller:self]; // so that we always see the toolbar, even with just one pane [_prefsController setAlwaysShowsToolbar:YES]; [_prefsController setPanesOrder:[NSArray arrayWithObjects:@"General", @"Advanced", nil]]; } [_prefsController showPreferencesWindow]; } Oops. One more thing. We need to add the line [_prefsController release]; to NPEController's dealloc method. Finally, opening MainMenu.nib in Interface Builder, we re-read the NPEController.h file (make sure you declare showPreferences in the header), and connect the Preferences menu item to the showPreferences action in NPEPrefsController. Build and run the NewPreferencesExample application in XCode. If everything goes to plan, a window will appear onscreen. Select the "Preferences..." item from the NewApplication menu (you can rename this in Interface Builder if you like), and, as if by magic, a dialog box will appear: "Preferences are not available for NewPreferencesExample" Figure 2. The NewPreferencesExample application, with no preferences. All of the scaffolding for our new preferences window is now in place. In part two of this article, I will cover the actual creation of the preference panes. Martin Redington is a long-time Mac user who recently started writing Mac shareware. His first product, MySync, provides Mac-to-Mac syncing without .Mac and is currently in public beta. Return to the Mac DevCenter
http://www.macdevcenter.com/lpt/a/6440
CC-MAIN-2015-18
en
refinedweb
I went from C to C++, but I am sure that isn't the problem. Perhaps strcmp() cannot compare two char's? Thanks! Code:#include <iostream> #include <stdio.h> #include <stdlib.h> #include <string.h> using namespace std; int main() { string chain; string input; int done = 0; int compare; char YN; while (done==0) { cout << "Input next element in chain: " << endl; scanf("%s", input); chain = chain + input; cout << "Done? (Y/N) " << endl; scanf("%c", YN); if (compare = strcmp(YN,'Y') == 0) done = 1 } cout << Chain << endl; return 0; }
http://cboard.cprogramming.com/cplusplus-programming/141913-simple-problem-i%27m-sure.html
CC-MAIN-2015-18
en
refinedweb
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also #include <sys/mman.h> int posix_memalign(void **memptr, size_t alignment, size_t size); The posix_memalign() function allocates size bytes aligned on a boundary specified by alignment, and returns a pointer to the allocated memory in memptr. The value of alignment must be a power of two multiple of sizeof(void *). Upon successful completion, the value pointed to by memptr will be a multiple of alignment. If the size of the space requested is 0, the value returned in memptr will be a null pointer. The free(3C) function will deallocate memory that has previously been allocated by posix_memalign(). Upon successful completion, posix_memalign() returns zero. Otherwise, an error number is returned to indicate the error. The posix_memalign() function will fail if: The value of the alignment parameter is not a power of two multiple of sizeof(void *). There is insufficient memory available with the requested alignment. See attributes(5) for descriptions of the following attributes: free(3C), malloc(3C), memalign(3C), attributes(5), standards(5) Name | Synopsis | Description | Return Values | Errors | Attributes | See Also
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i0999m/index.html
CC-MAIN-2015-18
en
refinedweb
[Java3D] New to java3D and need some help here Hey guys, I'm still learning java3D and not even run any examples program yet. This is because tehre some error occured when I run the .class file. After compile the example program that I get from net, it has no error and successed to create the .class file. But when I run the program, it shows this: Exception in thread "main" java.lang.NoClassDefFoundError: Tetrahedron (wrong name: org/jdesktop/j3d/examples/appearance/Tetrahed not sure whether I installed the java3D properly or not. Here are my steps in setting up java3D. 1) Install jdk-1_5_0_06-nb-5_0-win 2)Install java3d-1_3_1-windows-i586-opengl-rt 3)install java3d-1_3_1-windows-i586-opengl-sdk 4)java3d-1_4_0-windows-i586 -copy the file in C:\j3d-140-win that I extracted out from a zip in java3d-1_4_0-windows-i586 to : C:\Program Files\Java\jre1.5.0_06\bin, C:\Program Files\Java\jre1.5.0_06\lib\ext, C:\Program Files \Java\jdk1.5.0_06\jre\bin, C:\Program Files\Java\jdk1.5.0_06\jre\lib\ext as what stated in the instruction in java3d-1_4_0 And I want to import my vrml file to java3D, I already texture mapped my model in VRML and I want to ask, after imported to java3D, the texture mapping will be corrupted and not same with what I get in VRML? Because I texture mapped the model in 3DsMax and export to VRML file, then import again to java3D, I'm affraid of the texture mapping will be changed when importing to java3D. Please help me, I'm new to java3D(just read some tutorial from net) and need to complete my project as soon as possible. I'm frustating.... Thanks for helping and spending time read my question. -- View this message in context:... Sent from the java.net - java3d interest forum at Nabble.com. --------------------------------------------------------------------- To unsubscribe, e-mail: interest-unsubscribe@java3d.dev.java.net For additional commands, e-mail: interest-help@java3d.dev.java.net
https://www.java.net/node/654735
CC-MAIN-2015-18
en
refinedweb
HELLO I wrotr this code. It's meant to accept all the 8 inputs the user gives as an element in the array. I was just wondering if somebody could please help me do that. Code: #include <stdio.h> #define s 20 #define sof 10 typedef struct { int day; int month; int year; } Date; typedef struct { float cost; } Price; typedef struct { char make[s]; Date purchaseDate; Date manufactureDate; Price purchasePrice; car[sof]; } Car; int main() { Date date; Price price; Car car; int i; for(i=0; i<=10); printf(" The name of the car is %s \n", car.make); printf(" The car was purchased on the %d of the %d month of the year %d \n", car[i].purchaseDate.day, car[i].purchaseDate.month, car[i].purchaseDate.year); printf(" The car was manufactured on the %d of the %d month of the year %d \n", car[i].manufactureDate.day, car[i].manufactureDate.month, car[i].manufactureDate.year); printf(" The car was bought for %lf \n",car[i].purchasePrice.cost); } return 0; }
http://cboard.cprogramming.com/c-programming/70735-almost-there-just-need-little-help-printable-thread.html
CC-MAIN-2015-18
en
refinedweb
Mastering FXML 3 FXML—What's New in JavaFX 2.2 This page contains a list of FXML enhancements added in JavaFX 2.2. <fx:constant> tag The <fx:constant> tag has been added to FXML to facilitate lookup of class constants. For example, the NEGATIVE_INFINITYconstant defined by the java.lang.Doubleclass can now be referenced as follows: <Double fx: Improved access to sub-controllers in FXML In JavaFX 2.1 and earlier, it was not easy to access subcontrollers from a root controller class. This made it difficult to use a controller to open and populate a dialog window whose contents were defined in an include statement, for example. JavaFX 2.2 maps nested controller instances directly to member fields in the including document's controller, making it much easier to interact with nested controllers. Consider the following FXML document and controller: file, and the dialogControllerobject will contain the include statement's controller. The main controller can then invoke methods on the include statement's controller, to populate and show the dialog, for example. Support for controller initialization via reflection In JavaFX 2.1 and earlier, controller classes were required to implement the Initializableinterface to be notified when the contents of the associated FXML document had been completely loaded. In JavaFX 2.2, this is no longer necessary. An instance of the FXMLLoaderclass simply looks for the initialize()method on the controller and calls it, if available. Note that, similar to other FXML callback methods such as event handlers, this method must be annotated with the @FXMLannotation if it is not public. It is recommended that developers use this approach for new development. The Initializableinterface has not been deprecated, but might be in a future release. Simplified creation of FXML-based custom controls In previous releases, it was fairly cumbersome to create custom controls whose internal structure was defined in FXML. JavaFX 2.2 includes some subtle but powerful enhancements that significantly simplify this process. The new setRoot()and setController()methods enable the calling code to inject document root and controller values, respectively, into the document namespace, rather than delegate creation of these objects to FXMLLoader. This enables a developer to create reusable controls that are internally implemented using markup, but (from an API perspective) appear identical to controls implemented programmatically. For example, the following code markup defines the structure of a simple custom control containing a TextFieldand a Buttoninstance. The root container is defined as an instance of the javafx.scene.layout.VBoxclass: <> The <fx:root> tag, which was added for JavaFX 2.2, specifies that the element's value will be obtained by calling the getRoot()method of the FXMLLoaderclass. Prior to calling the load()method, the calling code must specify this value by calling the setRoot()method. The calling code can also provide a value for the document's controller by calling the setController()method. For more information, see Creating a Custom Control with FXML.
http://docs.oracle.com/javafx/2/fxml_get_started/whats_new2.htm
CC-MAIN-2015-18
en
refinedweb
- Author: wolfram - Posted: January 16, 2008 - Language: Python - Version: .96 - Tags: template tag trim spaceless - Score: 2 (after 2 ratings)
This tag is meant to override the current implementation of the '{% spaceless %}' tag and remove spaces at the beginning of a line too. I.e. a template like this:

<div>
    <div>useless space up front</div>
</div>

will become this:

<div>
<div>useless space up front</div>
</div>

All the other behaviour of spaceless stays the same! Put this in your app/name/templatetags/tags.py. And if you want it to override the default "spaceless" tag, add the following:

from django.template import add_to_builtins
add_to_builtins('app.name.templatetags.tags')

Note that this will erase all leading spaces in pre-Tags too.
# You need to add in some import statements at the top there...
https://djangosnippets.org/snippets/547/
CC-MAIN-2015-18
en
refinedweb
Remedy::ARSTools - a perl wrapper to the ARSperl project, providing a simplified object interface with field definition caching. use Remedy::ARSTools; #create a new object with a new field definition data cache my $Remedy = new Remedy::ARSTools( Server => $server_host_or_ip, User => $username, Pass => $password, ConfigFile => $file_to_cache_field_definition_data, Schemas => [ 'list', 'of', 'schema names', 'to get', 'field data for' ] ) || die ($Remedy::ARSTools::errstr); #create a ticket my $ticket_number = $Remedy->CreateTicket( Schema => $schema_name, Fields => { 'fieldName1' => "value1", 'fieldName2' => "value2, ... etc ... } ) || die $Remedy->{'errstr'}; #merge ticket my $ticket_number = $Remedy->MergeTicket( Schema => $schema_name, MergeCreateMode => "Overwrite", Fields => { 'fieldName1' => "value1", 'fieldName2' => "value2, ... etc ... } ) || die $Remedy->{'errstr'}; #modify a ticket $Remedy->ModifyTicket( Schema => $schema_name, Ticket => $ticket_number, Fields => { 'fieldName1' => "value1", 'fieldName2' => "value2, ... etc ... } ) || die $Remedy->{'errstr'}; #query for tickets $tickets = $Remedy->Query( Schema => $schema_name, QBE => $qbe_string, Fields => ['array', 'of', 'fieldNames', 'to', 'retrieve'] ) || die $Remedy->{'errstr'}; #delete a ticket $Remedy->DeleteTicket( Schema => $schema_name, Ticket => $ticket_number ) || die $Remedy->{'errstr'}; #parse a raw diary entry $parsed_diary = $Remedy->ParseDBDiary( Diary => $raw_diary_data_from_database, ConvertDate => 1, DateConversionTimeZone => -6 ) || die $Remedy->{'errstr'}; #construct a raw diary entry from a perl data structure $big_diary_string = $Remedy->EncodeDBDiary( Diary => [ #entry #1 { 'timestamp' => "Mon Jan 27 11:16:47 CST 2014", 'user' => "ahicox", 'value' => "it's the end of the world as we know it" }, #entry #2 { 'timestamp' => "Mon Jan 27 11:17:50 CST 2014", 'user' => "mstipe", 'value' => "I feel fine" }, #entry #3 { 'timestamp' => "Mon Jan 27 11:18:41 CST 2014", 'user' => "lbruce", 'value' => "well, I'm not afraid" } ] ) || die $Remedy->{'errstr'}; #import an ARS object definition $Remedy->ImportDefinition( Definition => $string_containing_def DefinitionType => "xml", ObjectName => "Remedy:ARSTools:CrazyActiveLink", ObjectType => "active_link", UpdateCache => 1 ) || die $Remedy->{'errstr'}; #export an ARS object definition $definition = $Remedy->ExportDefinition( ObjectName => "Remedy:ARSTools:CrazyActiveLink", ObjectType => "active_link", DefinitionType => "xml", ) || die $Remedy->{'errstr'}; #delete an ARS Object $Remedy->DeleteObjectFromServer( ObjectName => "Remedy:ARSTools:CrazyActiveLink", ObjectName => "active_link" ) || die $Remedy->{'errstr'}; #tunnel an sql query over the api my $data = $Remedy->TunnelSQL( SQL => "select viewname from arschema where name = 'User'" ) || die $Remedy->{'errstr'}; #log out of remedy $Remedy->Destroy(); First things first, you need ARSperl installed for this module to work. ARSperl is the perl interface to the Remedy C API, and provides all the "magic" of talking to your Remedy server. This module is a perl wrapper that sits atop ARSperl. The purpose of this module is to provide a nice, simplified interface to ARSperl that is independent of the particular version of ARSperl and the Remedy C API that you have installed. You will need the following items to be installed prior to attempting to use this module: This comes as part of your Remedy server installation. 
This API is proprietary, and owned by the Remedy corporation (or BMC, or Peregrin or whom ever owns them this week). You can usually find this under the 'api' directory under the remedy installation directory on your remedy server. The Remedy C API is required by the ARSperl installation. as mentioned earlier, this is the perl interface to the Remedy C API. You can download ARSperl from your local CPAN mirror, or also from the sourceforge project page: this perl module is available from your local CPAN mirror. It is used to serialize field definition data into a configuration file. Remedy assigns a unique 'field_id' to each field in a schema. In order to do pretty much anything with that field in the Remedy API, you must know the field_id rather than the name. For instance 'entry_id' is typically field_id '1', however it gets a lot more complicated from there. Additionally, Remedy implements fields with enumerated values in a unique way, assigning an integer to each enumerated value starting at 0. For instance, 'Status' = "New" = 0. One must also know the enum value corresponding to the 'human readable' value when performing operations using the API. This module attempts to hide all of that, allowing you to reference fields directly by name, and enumerated field values by their 'human readable' (string) value (rather than by integer). However, to do so, the module needs to maintain a mapping of field id's and enumerated values. The mapping can be loaded from the remedy server when you create a Remedy::ARSTools object, however, this is a rather time-consuming task, and is also network intensive. As an alternative, you can specify a special file in which the object will store field definition data. This file acts as a field definition data cache, and it's contents are automatically updated. Use of an external file in which to cache field definition data is highly recommended for speed improvments, but is not completely necessary. The 'penalty' for not using the file, is that it takes much longer to instantiate new objects. Remedy has three date/time data types: DATETIME (specifies a date and time), DATE (specifies a single day), and TIME_OF_DAY (specifies a specific time within a 24 period). Remedy models these date fields as an integer (either a number of seconds or number of days -- more on that below). When you get or set the value of a datetime, date, or time_of_day field on the Remedy C API, the value is specified in the Remedy-native format (so, an integer representing either days or seconds). As you can imagine, this is a hassle. As such, starting with version 1.06, Remedy::ARSTools will automatically attempt to translate datetime, date & time_of_date values for you. You can override this behavior by setting the 'DateTranslate' option on the Remedy::ARSTools option to "0" (it is "1" by default -- see "new" method below for more information). For calls to CreateTicket, ModifyTicket & MergeTicket, Remedy::ARSTools will automatically convert string values on datetime, date and time_of_day to their integer equivalents. For calls to Query, Remedy::ARSTools will automatically convert integer datetime, date & time_of_day values to their human-readable string equivalents. More info on how we handle each type: Datetime values represent a complete date & time. 
Remedy stores this in what is commonly referred to as "epoch" or "unix" time format, which is an integer representing the number of seconds elapsed since 1/1/1970 00:00:00 GMT (for instance "7/29/1985 14:36:00 CDT" = 491513760) On the CreateTicket, ModifyTicket & MergeTicket methods, Remedy::ARSTools will translate string values submitted on DATETIME fields into the unix "epoch" format, any format accepted by the Date::Parse module can be specified. On the Query method, Remedy::ARSTools will translate the integer value returned by the Remedy API into a human-readable string representing time in the GMT (aka "UTC") timezone. You can specify an alternate timezone by specifying a GMT offset in number of hours on the 'DateConversionTimeZone' option to the Query mehod. For instance CST would be 'DateConversionTimeZone' => -6. For more information see documentation for the Query method (below). The Date type specifies a specific day (for instance "7/29/1985"). Remedy stores this as the number of days elapsed since 1/1/4713 00:00:00 GMT, B.C.E (seriously, can't make this kinda thing up). The time_of_day type specifies a specific time-coordinate within a 24 hour period (for instance: 14:36:00). Remedy stores this as the number of seconds elapsed since 00:00:00 (midnight, the first second of the day). For TIME_OF_DAY fields, Remedy::ARSTools knows but one string format. Calls to Query will translate the integer value into this format. Calls to CreateTicket, ModifyTicket & MergeTicket will translate strings in this format into the integer equivalent: Zero-padding of single digits is not necessary (so "2:15:36 PM" will work as well as "02:15:36 PM"). If AM/PM is NULL, we presume you are specifying 24-hour ("military time") notation. Specifying "zero" values is completely necessary. So "2:15:00 PM" will pass muster, "2:15 PM" will generate an error. if you prefer 24 hour (aka "military time") output from Query, you can set the object global TwentyFourHourTimeOfDay option to a nonzero value. This is, of course, the object constructor. Upon failure, the method will return the undef value, and an error message will be written to $Remedy::ARSTools::errstr. At object creation, the field definition data is loaded either directly from the remedy server, or from the provided config file. my $Remedy = new Remedy::ARSTools([ options ]) || die $Remedy::ARSTools::errstr; the following options are accepted by the new() function: this is the hostname or ip address of the remedy server to which access is desired. the 'Login Name' of the Remedy account to be used for access to 'Server' the password for 'User' this is the full path and filename of the file in which field definition data should be cached (and which may already contain field definition data). if a non-zero value is specified for this option, the function will not attempt to login to the remedy server until a function requiring it is called. if specified, will instruct the C API to communicate with the Remedy server only on the specified TCP port. if specified, will instruct the C API to communicate with the Remedy server using only the specified RPC port (note only supported where ARSPerl > 8.001 is installed, also note, RPCNumber and Port are mutually exclusive). The language the user is using. If not specified, the default will be used. It's here because it's in ARSPerl, and it's in ARSperl because it's in the C API. It "has something to do with the Windows Domain", according to the ARSperl documentation. 
You can specify it here, and it'll be passed on to ARSperl, if you know what to do with it. if 0 or the undef value are supplied on this argument, the module will not attempt to update cached field definition data in the specified config, if it is found to be out of date. if 0 or the undef value are supplied on this argument, an error is generated if the specified ConfigFile does not already contain field definition data. The specified ConfigFile will not be created if it does not already exist. If a non-zero value is specified on this argument, functions which write data into remedy will silently truncate field data values if they are too long to fit in their specified fields. This is a short cut to setting this option individually on every function call. If NULL, this option will default to a value of 1. To override, you must explicity set a value of "0". If a non-zero value is specified (again, the default value), then Remedy::ARSTools will attempt to automaticlly convert date, datetime, and time_of_day field values to and from human-readable strings (see "A NOTE ON DATE, DATETIME & TIME OF DAY VALUES" above). This function loads field definition data from the 'ConfigFile' specified in the object, or directly from the Remedy server (if 'ConfigFile' dosen't exist, or the internal 'staleConfig' flag is set). Normally, this function is called only internally, but it can be used externally, to force an object to reload it's field definition data. $Remedy->LoadARSConfig() || die $Remedy->{'errstr'}; This function connects to the remedy server specified by the 'Server' in the object, obtaining a "control token" from the remedy server. If the object is already logged into the Remedy server, this function will return without doing anything. If the object's internal 'staleLogin' flag is set true, or if the object is not yet connected to the Remedy server (such as when 'loginOverride' is specified at object instantiation), the function will connect. $Remedy->ARSlogin() || die $Remedy->{'errstr'}; This is the object destructor. This function releases the "control token" back to the Remedy server, clearing the user's session. This also completely destroys the object. $Remedy->Destroy(); This function checks a hash containing field name and value pairs against field definition data. If a field value is too long, it is truncated (if the object's TruncateOK is set), otherwise an error is returned. Also string values provided for enum fields are converted to their integer values. This function is unique, in that if no errors are found, the undef value is returned, with the string "ok" on the object's errstr. If errors are found a string containing a concatenation of all errors found in the field list is returned. If a more serious error is encountered (not relating to field values), then the undef value is returned with a string other than "ok" on the object's errstr. This is most definitely called internally, though it can be useful externally for data validation. my $errors = $Remedy->CheckFields( [ options ] ) || do { die $Remedy->{'errstr'} if ($remedy->{'errstr'} ne "ok"); }; if ($errors !~/^\s*/){ print $errors, "\n"; } the CheckFields function accepts the following options the name of the schema in which the fields that values should be checked for exist. a hash reference in the form of { 'field_name' => $value ... }, where each 'field_name' refers to a field in 'Schema', and each $value represents a value for the field. 
NOTE: the referenced hash will be modified (values truncated, or strings translated to integers for enum fields) Create a new record in the specified Schema containing the specified field values. A WORD ABOUT DIARY FIELDS. You are executing a create transaction with this function, meaning any value you specify for a diary field on the 'Fields' option, will be interpreted as the first *entry* in the diary field as opposed to creating an entire diary in one go (if you want to do that, see the MergeTicket function). As such, send a string value on Diary fields for this function. If you send a diary data structure (see output of ParseDBDiary function) on a diary field here, Remedy::ARSTools *will* try to serialize it into a whole diary entry ... which will likely create some seriously amuzing diary entries. my $ticket_number = $Remedy->CreateTicket( [ options ] ) || die $Remedy->{'errstr'}; the following options are accepted by the CreateTicket function the name of the schema in which the record should be created a hash reference in the form of { 'field_name' => $value ... }, where each 'field_name' is the name of a field in 'Schema' and each $value is a value to place in that field. Change the specified field values in the specified record, in the specified Schema The same caveat about diary fields (see CreateTicket above) applies here. This is a modify transaction on the API, meaning ARS will interpret any value sent for a diary field as the N'th *entry* in the diary rather than an attempt to replace the entire diary (see MergeTicket function to do that). So send string values for your diary fields here. $Remedy->ModifyTicket( [ options ] ) || die $Remedy->{'errstr'}; the following options are accepted by the ModifyTicket function: the 'ticket number' (or 'entry id', or 'record number' ... field id number 1, that is) of the record that we wish to modify. the name of the schema in which 'Ticket' exists a hash reference in the form of { 'field_name' => $value ... }, where each 'field_name' is the name of a field in 'Ticket' and $value is the value to set on that field. Remove the specified record from the specified Schema. Obviously, this will fail if the 'User' specified at instantiation, does not have administrator permissions. $Remedy->DeleteTicket( [ options ] ) || die $Remedy->{'errstr'}; the 'ticket number' (or 'entry id', or 'record number' ... field id number 1, that is) of the record that we wish to delete. the name of the schema in which 'Ticket' exists Return selected field values from Tickets matching the specified query string in the specified Schema. It should be noted that having external processes query through the ARS API presents a lot of overhead on the server, is slower than a direct SQL query to the underlying database. However, If you're here, I'll presume you have your reasons ;-). Data is returned as an array reference. Each element of the array is a hash reference, representing a ticket which matched the specified query string. The hash reference is in the form of { 'field_name' => $value ... }, where each 'field_name' is the name of a field in the ticket and $value is the value for that field. my $tickets = $Remedy->Query( [ options ] ) || die $Remedy->{'errstr'}; the Query function accepts the following arguments: the name of the schema that you want to return matching records from this is the "Query By Example" string, or 'query string' or "that thing you type in the 'Search Criteria' line when you click the 'Advanced' button in the client". 
You know what I'm talking about probably. Just remember, it's not exactly the same thing as an SQL 'where' clause. An array contianing the list of field names corresponding to selected field values we'd like returned from matching records in 'Schema'. You may find it helpful to build the array reference inline with the function call like so: Fields => ['field1','field2','field3'] if the object's 'DateTranslate' option is active (it is by default), this will cause the Query function to attempt to translate DATE, DATETIME and TIME_OF_DAY type'd fields to a human-readable strings (see "A NOTE ON DATE, DATETIME & TIME OF DAY VALUES" above). For DATETIME strings, we must translate from the unix epoch (number of seconds elapsed since 1/1/1970 00:00:00 GMT) into the human-readable string in a specific timezone. By default, that timezone is GMT. If you want Query to return datetimes in a different timezone, you can specify a number of hours to offset GMT (so for instance, if I wanted dates in CST (American Central Standard Time), I would speficy a DateConversionTimeZone value of -6. Remedy::ARSTools is not aware of Daylight Savings Time in your geographic area. You'll have to keep track of that one yourself and apply the correct offset for your time of year (if you're into that kinda thing). Remedy stores diary fields as a CLOB (i.e. a big text field) in the database. As you are probably aware, diary fields are separated into multiple entries which have a timestamp and user associated with them. So what you get when you select a diary field from your database, is each diary entry separated by some trash. This 'trash' is the username and timestamp. This function parses a raw dairy entry from the database (for instance such as may be returned from the TunnelSQL() method above, and translates it into the same perl data structure as would be returned by ARS::getField. This data structure is an array reference. Each element in the array is, in turn, a hash reference. Each nested hash contains three fields 'timestamp', 'user', and 'value'. The array is sorted chronologically, with the earliest entries first. Here's another look at what the data-structure looks like: \@DIARY = [ { 'timestamp' => $date, 'user' => $user, 'value' => $diary_entry } ... ]; my $diary_entries = $Remedy->ParseDiary( [ options ] ) || die $Remedy->{'errstr'}; -or- my $diary_entries = ParseDiary( [ options ] ) || die $Remedy::ARSTools:errstr; the following options are accepted by the ParseDiary function: a big ol' text string from the database containing an unparsed diary if a non-zero value is specified on this option, the timestamp field of each diary entry will be converted from 'epoch' time to a human readable date-time string in the GMT timezone if specified, this is a number of hours to offset the GMT date-time conversion for diary entries. For instance, if I wished to see diary dateteimes in US/Central Standard Time, I'd use 'DateConversionTimeZone' => -6 this is the inverse of ParseDBDiary. Given an array of hashes, where each nested hash contains the 'timestamp', 'user' and 'value' keys, this function will serialize a text data structure suitable for insertion directly into a database table. This function is exported for procedural calls (as is ParseDBDiary), but it is also used internally by the MergeTicket function to set an entire diary field at once versus making a new entry in a diary field (as would be the case on a merge transaction as opposed to a modify or create transaction). 
See also: additional notes on the MergeTicket, CreateTicket and ModifyTicket functions in relation to diary fields. my $diary_string = $Remedy->EncodeDBDiary( [ options ] ) || die $Remedy->{'errstr'}; -or- my $diary_string = EncodeDBDiary( [ options ] ) || die $Remedy::ARSTools:errstr; the following options are accepted by the EncodeDBDiary function: this is an array of hashes where each hash must contain the 'timestamp', 'user' and 'value' keys. See also the output data structure of ParseDBDiary and the output of the Query function (when returning a diary field). Pretty much the same thing as CreateTicket, but with a merge transaction, allowing you all of the freedoms and responsibilities that come with that (for heaven's sake: be careful, mmmkay?). The returned value is one of two things. If you are in MergeMode = 'Create' or 'Error', this will return the entry_id (i.e. "ticket number, aka "field id = 1") of the record you just merged. If you are in MergeMode = "Overwrite", AND the entry_id value you specified already exists this will contain the string "overwritten" (you may now see my point about being careful). A WORD ABOUT DIARY FIELDS. You are merge-ing records into Remedy with this function, which means you are replacing the *entire* database record at once. ARS will literally delete and re-insert the entire row. This means you've got to write the entire diary at once, versus creating each individual entry. To do this, create a perl data structure representing the entire diary, and send a reference to it as the value of the diary field in the 'Fields' option. The data structure should be an array of hash references, where each nexted hash has the 'timestamp', 'user' and 'value' keys (this is the same format sent back by Query when returning a diary field, or the output of ParseDBDiary). my $ticket_number = $Remedy->MergeTicket( [options] ) || die $Remedy->{'errstr'} the name of the schema (i.e. "form") in which the record should be merged. a hash reference in the form of { 'field_name' => $value ... }, where each 'field_name' is the name of a field in 'Schema' and each $value is a value to place in that field. Just like CreateTicket this controls what happens when the record you want to merge has the same value for field_id = 1 as an existing record. That is to say, what happens in the situation where you specify an existing ticket number on Fields. There are three values: Throw an error and exit if there is an existing record in the spefified Schema with the same ticket number Create a new record with a different ticket number. So, basically ignore your speficied field_id = 1 (ticket number) value and create a new record with the rest of the field values, and return the new ticket number. This is the funny biznezz. If you specify this option, it will overwrite the existing ticket number with your values, and the function will return "overwritten" instead of a ticket number. You have been warned :-). if you specify a non-null value on this option, it will give the API permission to bypass required fields (excepting of course the ARS system fields ... 'status', 'short description', yadda yadda). if you specify a non-null value on this option, it will give the API permission to bypass field pattern checking (menus, etc). Obviously, it will not let you set out of range enums and the like, but you'd never get that far anyhow ... this module would throw a field check error before that, but I digress. Set this if you want to set goofy field values and get away with it. 
import a serialized ARS Object definition onto the ARServer. Serialized ARS Object definitions may be in *.def or *.xml format. be careful m'kay? $Remedy->ImportDefinition( [options] ) || die $Remedy->{'errstr'} a string containing the serialized object definition in XML or DEF format a value of either "xml" or "def" identifies the format of the serialized object definition indicates the name of the object to import indicates the type of object to import. This is one of the following: if a non-zero value is set on this option, Remedy::ARSTools will insert the newly created object into it's cache after import is compelted. if a non-zero value is set on this option, we will NOT generate an error if an object with the same name & type already exists (in that case we will simply overwrite it with the new version), otherwise we're gonna throw an error. This defaults to 0 (off -- i.e. throwing errors if the object already exists). Turn off with caution. export a serialized object definition from the ARServer. $Remedy->ImportDefinition( [options] ) || die $Remedy->{'errstr'} indicates the name of the object to export indicates the type of object to export. This is one of the following: a value of either "xml" or "def" identifies the format to export the serialized object definition into NOTE: as of ARS 7.6.04 XML definition export of forms with overlays applied does NOT work (though def format does). BMC Support Ticket: ISS04238696 is open on this issue. this will delete an ARS Object from the ARServer. It probably goes without saying but you know ... indescriminate use of this function can turn a perfectly good day of gainful employment into a hellacious nightmare that ends with standing in line at the unemployment office ... so ... be careful m'kay? BE AWARE: deleting schemas causes ARS to cascade delete all the workflow associated to that form that isn't shared. $Remedy->DeleteObjectFromServer( [options] ) || die $Remedy->{'errstr'} indicates the name of the object to DELETE from the ARServer. indicates the type of object to DELETE. This is one of the following: this will tunnel an SQL statement over the API. The SQL statement will execute as the 'aradmin' database user. Like DeleteObjectFromServer this is a function you can EASILY hork an ARServer with, if you're not careful. With great power, comes great responsibility and all that. Data is returned in an array of arrays. Each nested array represents a row of data returned. Fields are returned in the order they were specified in the SQL query. For instance: my $data = $Remedy->TunnelSQL(SQL => "select schemaid, viewname from arschema where name = 'User'"); $data->[0]->[0] == the value of schemaid column $data->[0]->[1] == the value of viewname column $data = $Remedy->TunnelSQL( [options] ) || die $Remedy->{'errstr'} the sql you wish to execute as ARADMIN. 
#create a new ticket in Users schema my $ticket_number = $Remedy->CreateTicket( Schema => "User", Fields => { 'Login Name' => "sbsqrpnts", 'Password' => "tar-t4r-s4us3", 'Group List' => "fryCooks jellyFishers", 'Full Name' => "Squarepants, Sponge B.", 'Email Address' => 'sbsqrpnts@krustykrab.com', 'License Type' => "Fixed", 'Assigned To' => "sbsqrpnts" } ) || die ($Remedy->{'errstr'}); #query for tickets my $tickets = $Remedy->Query( Schema => "Users", QBE => "'Login Name' = \"sbsqrpnts\"", Fields => [ "Request ID", "Login Name" ] ) || die ($Remedy->{'errstr'}); #modify a ticket $Remedy->modifyTicket( Ticket => $tickets->[0]->{'Request ID'}, Schema => "User", Fields => { 'Full Name' => "SpongeBob Squarepants" } ) || die ($Remedy->{'errstr'}); #delete a ticket $Remedy->DeleteTicket( Schema => "User", Ticket => $tickets->[0]->{'Request ID'} ) || die ($Remedy->{'errstr'}); #log out $Remedy->Destroy(); Andrew N. Hicox <andrew@hicox.com> Studio BootyQuake This module is released under the licensing terms of Perl itself.
http://search.cpan.org/~ahicox/Remedy-ARSTools-1.07/ARSTools.pod
CC-MAIN-2015-18
en
refinedweb
The nis_cachemgr command starts the NIS+ cache manager program, which should run on all NIS+ clients. The cache manager maintains a cache of location information about the NIS+ servers that support the most frequently used directories in the namespace, including transport addresses, authentication information, and a time-to-live value. At start-up the cache manager obtains its initial information from the client's cold-start file, and downloads it into the /var/nis/NIS_SHARED_DIRCACHE file. The cache manager makes requests as a client workstation. Make sure the client workstation has the proper credentials, or instead of improving performance, the cache manager will degrade it. To.
http://docs.oracle.com/cd/E19455-01/806-1387/6jam692aa/index.html
CC-MAIN-2015-18
en
refinedweb
On Fri, 2006-11-03 at 10:27 +0000, David Howells wrote:> Anyway, it's not just vfs_mkdir(), there's also vfs_create(), vfs_rename(),> vfs_unlink(), vfs_setxattr(), vfs_getxattr(), and I'm going to need a> vfs_lookup() or something (a pathwalk to next dentry).> > Yes, I'd prefer not to have to use these, but that doesn't seem to be an> option.It is not as if we care about an extra context switch here, and wereally don't want to do that file i/o in the context of the rpciodprocess if we can avoid it. It might be nice to be able to do thosecalls that only involve lookup+read in the context of the user's processin order to avoid the context switch when paging in data from the cache,but writing to it both can and should be done as a write-behind process.IOW: we should rather set up a separate workqueue to write data to disk,and just concentrate on working out a way to lookup and read data withno fsuid/fsgid changes and preferably a minimum of selinux magic.> > > Also I should be setting security labels on the files I create.> > > > To what end? These files shouldn't need to be made visible to userland> > at all.> > But they are visible to userland and they have to be visible to userland. They> exist in the filesystem in which the cache resides, and they need to be visible> so that cachefilesd can do the culling work. If you were thinking of using> "deleted" files, remember that I want it to be persistent across reboots, or> even when an NFS inode is flushed from memory to make space and then reloaded> later.No. I was thinking of keeping the cache on its own partition and usingkernel mounts. cachefilesd could possibly mount the thing in its ownprivate namespace.Trond-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2006/11/3/66
CC-MAIN-2015-18
en
refinedweb
On 26.4.12 17:22, Angela Schreiber wrote: > hi all > > i would like to start the discussion again regarding > exception handling in oak-core and the api. > > the current situation is as follows: > - runtimeException every here and there > - invalidArgumentException > - unspecific commitFailedExceptions > > now that we have quite some JCR functionality present > those exceptions are pretty cumbersome and result in a > lot of test-case failure. > > my preference was to just throw the jcr-exceptions where > ever this was appropriate and unambiguous. for example > namespaceexception, versionexception, constraintviolation... +1 if the thrower of the exception is pretty sure that no one upstream in the call chain would be forced to catch the exception without knowing what to do with it. Like in the case where you have to implement an interface where you are not allowed to throw an exception which might be thrown by some of the methods you are calling. In the other case why not throw an exception which extends from RuntimeException and carries an checked exception? Something along the lines of: public class OakException extends RuntimeException { private final RepositoryException repositoryException; [...] public void throwRepositoryException() throws RepositoryException { throw repositoryException; } } Since we are having these wrappers anyway we can unwrap these exceptions there and throw the original one. Michael > > kind regards > angela
http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201204.mbox/%3C4F9A862F.5070309@apache.org%3E
CC-MAIN-2015-18
en
refinedweb
08 August 2012 11:41 [Source: ICIS news] LONDON (ICIS)--Oxea will invest in additional hydrogenation capacities at the group’s ?xml:namespace> Higher alcohols are components used in the fragrance and flavour industry, and required for the production of varnish, paint and lubricant additives. Investment and capacity details were not disclosed. In the third quarter of 2012, the product n-heptanol will be commercially available, followed by 3-methyl butanol, 2-methyl butanol and n-pentanol, Oxea said. “With our new products, we are meeting the globally increasing demand for higher alcohols,” said Christoph Balzarek, global marketing manager for amines & oxo specialties at Oxea. “Our new special alcohols ideally supplement our value chain, since they are manufactured from precursors that are also produced by Oxea. This backward integration allows for a high level of production efficiency,” said Miguel Mantas, responsible for sales and marketing on Oxea’s executive board. “Therefore, increasing our hydrogenation capacities
http://www.icis.com/Articles/2012/08/08/9584948/oxea-expands-higher-alcohols-portfolio-with-germany.html
CC-MAIN-2015-18
en
refinedweb
- Code: Select all <binding key="ctrl+m,/(\d)/" command="moveTab $1"/> class MoveTab(sublimeplugin.TextCommand): def run(self, view, args): win = view.window() group, tab = win.getViewPosition(view) win.setViewPosition(view, group, int(args[0])) print "MoveTab from %d to %s: end position is %d" % (tab, args[0], win.getViewPosition(view)[1]) I opened a new empty editor window, created four new tabs and named them 1, 2, 3 and 4. Then I selected tab 4 and typed ctrl+m,2,ctrl+m,1,ctrl+m,0. Now all is good -- the tab is moved as expected (one step left each time) and console shows: - Code: Select all MoveTab from 3 to 2: end position is 2 MoveTab from 2 to 1: end position is 1 MoveTab from 1 to 0: end position is 0 Here comes the problem: continue moving the tab with ctrl+m,1,ctrl+m,2,ctrl+m,3. With the first command, tab does not move and after that the tab is always one position 'behind'. You can also see this from the console: - Code: Select all MoveTab from 0 to 1: end position is 0 MoveTab from 0 to 2: end position is 1 MoveTab from 1 to 3: end position is 2
http://www.sublimetext.com/forum/viewtopic.php?f=3&t=925
CC-MAIN-2015-18
en
refinedweb
Pluggable foundation blocks for building loosely coupled distributed apps. Includes implementations in Redis, Azure, AWS, RabbitMQ, Kafka and in memory (for development).: To summarize, if you want pain free development and testing while allowing your app to scale, use Foundatio! Foundatio can be installed via the NuGet package manager. If you need help, please open an issue or join our Discord chat room. Were always here to help if you have any questions! This section is for development purposes only! If you are trying to use the Foundatio libraries, please get them from NuGet. Foundatio.slnVisual Studio solution file. The sections below contain a small subset of what's possible with Foundatio. We recommend taking a peek at the source code for more information. Please let us know if you have any questions or need assistance! Caching allows you to store and access data lightning fast, saving you exspensive operations to create or get data. We provide four different cache implementations that derive from the ICacheClient interface: MaxItemsproperty. We use this in Exceptionless to only keep the last 250 resolved geoip results.. HybridCacheClientthat uses the RedisCacheClientas ICacheClientand the RedisMessageBusas IMessageBus. ICacheClientand a string scope. The scope is prefixed onto every cache key. This makes it really easy to scope all cache keys and remove them with ease. using Foundatio.Caching; ICacheClient cache = new InMemoryCacheClient(); await cache.SetAsync("test", 1); var value = await cache.GetAsync<int>("test"); Queues offer First In, First Out (FIFO) message delivery. We provide four different queue implementations that derive from the IQueue interface: using Foundatio.Queues; IQueue<SimpleWorkItem> queue = new InMemoryQueue<SimpleWorkItem>(); await queue.EnqueueAsync(new SimpleWorkItem { Data = "Hello" }); var workItem = await queue.DequeueAsync(); Locks ensure a resource is only accessed by one consumer at any given time. We provide two different locking implementations that derive from the ILockProvider interface:(); Allows you to publish and subscribe to messages flowing through your application. We provide four different message bus implementations that derive from the IMessageBus interface: using Foundatio.Messaging; IMessageBus messageBus = new InMemoryMessageBus(); await messageBus.SubscribeAsync<SimpleMessageA>(msg => { // Got message }); await messageBus.PublishAsync(new SimpleMessageA { Data = "Hello" }); Allows you to run a long running process (in process or out of process) without worrying about it being terminated prematurely. We provide three different ways of defining a job, based on your use case:JobBase<T>class. You can then run jobs by calling RunAsync()on the job or passing it to the JobRunnerclass. The JobRunner can be used to easily run your jobs as Azure Web Jobs.; message bus. The job must derive from the WorkItemHandlerBaseclass. You can then run all shared jobs via JobRunnerclass. The JobRunner can be used to easily run your jobs as Azure Web Jobs." }); We provide different file storage implementations that derive from the IFileStorage interface: We recommend using all of the IFileStorage implementations as singletons. 
using Foundatio.Storage; IFileStorage storage = new InMemoryFileStorage(); await storage.SaveFileAsync("test.txt", "test"); string content = await storage.GetFileContentsAsync("test.txt") We provide five implementations that derive from the IMetricsClient interface: We recommend using all of the IMetricsClient implementations as singletons. IMetricsClient metrics = new InMemoryMetricsClient(); metrics.Counter("c1"); metrics.Gauge("g1", 2.534); metrics.Timer("t1", 50788); We have both slides and a sample application that shows off how to use Foundatio.
https://awesomeopensource.com/project/FoundatioFx/Foundatio.Kafka
CC-MAIN-2022-33
en
refinedweb
Plugin Structure Your project can become an Act! plugin through the following key files: - A manifest describing your plugin - The use of the @act/pluginslibrary for any data access or application hooks - The use of the @act/web-componentslibrary to give your application the look and feel of Act! Let's discuss the two major components for your Act! plugin. Manifest The manifest defines the plugin to the Act! application. It contains any information necessary to display/link the Act! application to your plugin. Here is an example manifest: { "actions": "", "company": "Your Company", "description": "DESCRIPTION", "extensions": [{ "type": "view", "payload": { "src": "", "name": "plugin-docs", "label": "PLUGIN_DOCS", "icon": "note" } }], "name": "Your Plugin Name", "permissions": { "dialog": "false" }, "settings": "", "supportEmail": "", "translations": { "de": { "DESCRIPTION": "Description of your plugin" }, "en-GB": { "DESCRIPTION": "Description of your plugin" }, "en-US": { "DESCRIPTION": "Description of your plugin" }, "DESCRIPTION": "Description of your plugin" } }, "version": { "major": 0, "minor": 0, "patch": 0 }, "views": [] } Properties may be used to show the item within the Plugin Marketplace for Act!. Code Your project will consist of HTML/CSS/JS elements. Anything that requires data access or is dynamic by nature will need to be a JS based application. Your project can take whatever structure desired, but it must contain the necessary code to link your plugin to the Act! application. This is done through the @act/plugins library through the following code snippet: import { ActPlugin } from '@act/plugins'; const actPlugin = new ActPlugin(); actPlugin.init(window); Please consider using the @act/web-components library for your front-end elements to make your plugins look and feel like Act! Actions If your plugin has any action extensions, then you will need to write a set of global functions to be exposed to the Act! application. Because your code will be ran in a WebWorker (see restrictions on importing third party libraries within WebWorkers), Act! automatically exposes the actPlugin global object for you. Please note that this means all code associated with the actions file should be standard JavaScript without any third party libraries. If you require use of a third party library, you will need to utilize a bundling tool such as Webpack to ensure that all necessary dependencies are bundled into a single actions.js file. Please ensure that the file referenced in your manifest points to the generated file.
https://plugindeveloper.actops.com/plugins/getting-started/structure/
CC-MAIN-2022-33
en
refinedweb
GREPPER SEARCH WRITEUPS DOCS INSTALL GREPPER All Languages >> Java >> Java Create a Scanner Object in Java “Java Create a Scanner Object in Java” Code Answer Java Create a Scanner Object in Java java by SAMER SAEID on May 25 2022 Comment 0 // read input from the input stream Scanner sc1 = new Scanner(InputStream input); // read input from files Scanner sc2 = new Scanner(File file); // read input from a string Scanner sc3 = new Scanner(String str); Add a Grepper Answer Answers related to “Java Create a Scanner Object in Java” scanner in java scanner class in java close scanner java how to use scanner class in java scanner in = new scanner(system.in) meaning in java how to use scanners in java read scanner java Java User Input (Scanner) java instantiate a scanner io fole scanner object syntax use scanner class in global scope java how scanner works and how to use it Queries related to “Java Create a Scanner Object in Java” scanner java scanner class in java scanner class java new scanner java scanner methods in java how to use a scanner in java import scanner class in java java scanner system in scanner object creation in java how to print scanner in java make scanner from function java scanner sc = new scanner(system.in) in java how to create a object in java by Scanner java scanner how to use scanner in java how to import scanner in java how to use scanner java scanner function in java scanner example java scanner example in java scanner class import in java use of scanner class in java java scanner what that does library for scanner class in java take in an object as a scanner java using scanner in objects java scanner in java java scanner class using scanner in java scanner methods java scanner class methods in java scanner system.in java new scanner(system.in) java java create scanner can you use a scanner with java method creating a scanner object in java java scanner class full tutorial how to create an object in java by Scanner using the scanner method in java Browse Java Answers by Framework Spring Vaadin More “Kinda” Related Answers View All Java Answers » Junit 5 console input test number of lines in file java javafx action event enter key reading in lines from a file java java reflection get field value java lowercase in a scanner prendere valore da tastiera java read int from keyboard java take a value from keyboard java java clear scanner java clear scanner2 java clear scanne read each lines in file Java Get Integer Input From the User java input input array through scanner in java java input character Input console using java input java what is the difference between sc.nextLine() and sc.next() in java how to count lines from txt java inputing number in java how to the text of an element in selenium java java scanner netLine how to get multiple integer input in java how to scan a string in java java read lines from file java get input unit test java intellij How to do press enter to continue in java java scanner string nextline after nextint java scanner input float java detect new line in string java input - how to read a string Java User Input (Scanner) Java Read a Line of Text Using Scanner input long in java get input in java sc.nextline skips java input string with spaces how to use input in java press enter in robot java Char in scanner) use scanner class in global scope java Accept Integer only in if else statement how to display an integer in a textfield in java Don't use a line-beased input after a token-based input. 
java how to write something on the console with scanner Resource leak: 'scanner' is never closed input 3 int 1 line in java Java Scanner nextDouble() java forcing user to input int java scanner tokens with withespace java date time string replace java java string builder system.out.println bubble sort java java bubble sort try catch java throw io exception java java gui sleep() java java sleep in code how to add java_home in mac how to check java path in mac java foreach java read file text random java foreach java java random w3 for loop java java loop java for jsonobject java java create file java delay reading csv file in java java checking for null how to set the java_home in mac get file path java java how to make a gui import math java import java math java cheat sheet what it means when create final variable in java java create inputstream from string ansi colors Console color text java read file in java how to read a text file in why java platform independent install java with brew hello world in java install java 8 on windows 10 how to configure alternative openjdk ubuntu change java version change java version command line debian java select version escolher versão java linux select java version linux ubuntu change default java path list java versions java timer java switch version wait java file existing check java test file java template string template literals java java create file in folder date add hours java download file java parse xml string static block in java read file java line java argument main how to install java runtime environment on centos 7 centos install openjdk 11 java time code random.choice in java export java home how to play sounds on java java 8 filter first console log java print * pattern in java print star pattern in java java for schleife priority queue reverse order java JAVA_HOME is not defined correctly. .
https://www.codegrepper.com/code-examples/java/Java+Create+a+Scanner+Object+in+Java
CC-MAIN-2022-33
en
refinedweb
Migrating Spring-based Application Configuration to Alibaba Cloud ACM Recently, some of my developer friends want to migrate their Java Spring application configuration to Alibaba Cloud Application Configuration Management (ACM). During the migration process, they asked some interesting questions. This article uses a simple example to explain their concerns during the migration process and gives the corresponding solutions. I hope you will find them helpful. What Configuration Items Are Required to Be Migrated to ACM? This is the first question for all users who want to migrate their configuration to ACM. Let’s analyze this question in two dimensions — timeliness and security. Timeliness: Comparison between Static and Dynamic Configuration Items Static configuration items refer to those that basically do not need to be modified after the application is released. For example: - Software version number: The version number does not usually need to be changed after it is determined. - Log style: The layout of the log, for example, the timestamp, file name, and log level, typically does not need major changes. - Third-party software LicenseKey: Basically, it does not change. It is possible that a third-party software license is updated during use. However, such a configuration change can usually be handled by republishing the software. - PaaS platform connection string: For example, a database connection string contains the database name, username and password. This configuration item does not change unless the password is modified for the sake of compliance, or the database is changed. Dynamic configuration refers to some configuration items which may change when the program is running. Such changes usually affect the running behavior of the program, for example: - Throttling parameters: Throttling parameters are generally not fixed. Throttling parameters, such as the response time (RT) threshold and peak transaction per second (TPS), are adjusted dynamically according to the actual workload pattern when the system is running. - Thresholds for monitoring and alerts: For example, the system generates an error alert when the transaction volume decreases by 20% in comparison with the previous period, and generates a critical alert when it decreases by 50%. For a monitoring system, the online service characteristics change frequently, so the thresholds are generally not fixed. - Log print level: For example, after something strange happens, we want to change the log print level from error to debug. We’d prefer dynamically adjusting this configuration item without restarting the application. - Multi-active disaster recovery: After a site suffers from a disaster, we definitely hope the service can be failed over as soon as possible. Therefore, this configuration item must take effect in seconds to minimize the asset loss. In terms of timeliness, we recommend that users save their own copies of static configuration items, which should be as simple as possible. Dynamic configuration items need to be saved to the ACM to increase the flexibility and the timeliness of dynamic changes. Security: Comparison between Non-Sensitive and Sensitive Configuration Items Non-sensitive configuration items generally refer to technology-oriented configuration items. Exposing them does not cause security risks. For example: - Software version number: It iterates with the product and contains no business attributes. Therefore, it is not sensitive. 
- Log style: This configuration item is associated with the future diagnostics of a program, and it is not sensitive. - Log print level: This configuration item determines the content of a log to be printed, and it is not sensitive. - Throttling parameters: Throttling parameters are mainly used to maintain internal application stability, and they are not sensitive. - Thresholds for monitoring and alerts: They mainly specify the alert precision for the business, and they are not sensitive. - Multi-active disaster recovery: This configuration item is generally associated with the primary-backup configuration and service sharding. It is not sensitive. Sensitive configuration items are often associated with business data and can cause security risks if they are disclosed to unauthorized persons, for example: - Third-party software LicenseKey: The disclosure of the license key may cause unauthorized use of it. It is sensitive. - PaaS platform connection string: For example, both internal and external users can easily log on to the business database and access sensitive business information with a database connection string. It is sensitive. In terms security, we recommend that users save their own copies of the non-sensitive configuration items, which should be as simple as possible. Sensitive configuration items need to be saved to the ACM. For sensitive configuration items, encryption and authentication are required, and these items must be secured from unauthorized persons. The summary for the timeliness and security analysis How Can I Migrate Spring-based Java Application Configuration? Java developers who use the Spring framework usually use the @value function to automatically insert configuration. The Original Pure Static File Scenario For example, this configuration contains two configuration items, the software version number and the database connection string: We can automatically insert the configuration items by using the @PropertySource and @value annotations. @Configuration @ComponentScan("com.alibaba") @PropertySource("classpath:myApp.properties") public class AppConfig { @Value(value="${url}") private String URL; @Value(value="${dbuser}") private String USER; @Value(value="${driver}") private String DRIVER; @Value(value="${dbpassword}") private String PASSWORD; @Value(value="${appVersion}") private String version; } Operations such as the connection and initialization of the related database are omitted in the above code. The Mixed Configuration Scenario after the Start of the Configuration Migration For the sake of security compliance or configuration timeliness, we need to migrate our configuration to the ACM. After the analysis, we found that we’d better migrate some database configurations to ACM. These configuration items are marked in red. The red part will be migrated to the ACM. Next, we need to make the following three modifications. - Add a record for the relevant configuration on the ACM console. - Add ACM SDK dependencies to the Java engineering package. - Slightly modify the code — add annotations to enable ACM to retrieve configurations. First, directly create a configuration item on ACM with the name of myapp.dbconfig.properties, and edit the configuration content in the corresponding edit box. For detailed instructions, see the ACM Quick Start document. 
The operation screenshots are as follows: Second, add dependencies into maven’s pom.xml file: <dependency> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-spring-context</artifactId> <version>0.2.1- RC1</version> </dependency> Third, add API annotations to the corresponding AppConfig.java code, to enable ACM to retrieve dynamic configurations. Add the red part to the code. @Configuration @ComponentScan("com.journaldev") @PropertySource("classpath:myApp.properties") @EnableNacosConfig(globalProperties = @NacosProperties(endpoint = "acm.aliyun.com", namespace = "xxx", accessKey = "xxx", secretKey = "xxx")) @NacosPropertySource(dataId = "myApp.dbconfig.properties", autoRefreshed = true) public class AppConfig { @Value(value="${url}") private String URL; @Value(value="${dbuser}") private String USER; @Value(value="${driver}") private String DRIVER; @Value(value="${dbpassword}") private String PASSWORD; @Value(value="${appVersion}") private String version; public String getVersion() { return version; } } Now, the modification is finished. Because the ACM SDK supports Spring’s @value annotation function, we barely need to modify the code. Notes: In the above code example, note that: - The ACM SDK used in the code is the Nacos SDK. Nacos () is the open-source form of ACM, and ACM is fully compatible with all Nacos APIs. - In the code example, I used plaintext annotations to fix ACM parameters, such as endpoints, namespace, AK, and SK. However, in actual practice, they do not need to be fixed. - The parameters, such as endpoint and namespace can be passed-in by using ACM’s related file configuration or system variables. For details, see: - For sensitive information, such as AK and SK, we can use the ECS Ram Role function to enable the system to retrieve the information automatically, and we do not have to fix it in the code. For details, see: - The callback function for dynamic configuration listening is not included in the code. For how to use Spring-based dynamic configuration, see: More Information If you are interested in Nacos, the open source version of Alibaba Cloud ACM, visit our official website at Reference:
https://alibaba-cloud.medium.com/migrating-spring-based-application-configuration-to-alibaba-cloud-acm-b8ecfc4220f0
CC-MAIN-2022-33
en
refinedweb
Developers & Practitioners Scalable ML Workflows using PyTorch on Kubeflow Pipelines and Vertex Pipelines Introduction ML Ops is an ML engineering culture and practice that aims at unifying ML system development and ML system operation. An important ML Ops design pattern is the ability to formalize ML workflows. This allows them to be reproduced, tracked and analyzed, shared, and more. Pipelines frameworks support this pattern, and are the backbone of an ML Ops story. These frameworks help you to automate, monitor, and govern your ML systems by orchestrating your ML workflows. In this post, we’ll show examples of PyTorch-based ML workflows on two pipelines frameworks: OSS Kubeflow Pipelines, part of the Kubeflow project; and Vertex Pipelines. We are also excited to share some new PyTorch components that have been added to the Kubeflow Pipelines repo. In addition, we’ll show how the Vertex Pipelines examples, which require v2 of the KFP SDK, can now also be run on an OSS Kubeflow Pipelines installation using the KFP v2 ‘compatibility mode’. PyTorch on Google Cloud Platform PyTorch continues to evolve rapidly, with more complex ML workflows being deployed at scale. Companies are using PyTorch in innovative ways for AI-powered solutions ranging from autonomous driving to drug discovery, surgical Intelligence, and even agriculture. MLOps and managing the end-to-end lifecycle for these real world solutions, running at large scale, continues to be a challenge. The recently-launched Vertex AI is a unified ML Ops platform to help data scientists and ML engineers increase their rate of experimentation, deploy models faster, and manage models more effectively. It brings AutoML and AI Platform together, with some new ML Ops-focused products, into a unified API, client library, and user interface. Google Cloud Platform and Vertex AI are a great fit for PyTorch, with PyTorch support for Vertex AI training and serving, and PyTorch-based Deep Learning VM images and containers, including PyTorch XLA support. The rest of this post will show examples of PyTorch-based ML workflows on two pipelines frameworks: OSS Kubeflow Pipelines, part of the Kubeflow project; and Vertex Pipelines. All the examples use the open-source Python KFP (Kubeflow Pipelines) SDK, which makes it straightforward to define and use PyTorch components. Both pipelines frameworks provide sets of prebuilt components for ML-related tasks; support easy component (pipeline step) authoring and provide pipeline control flow like loops and conditionals; automatically log metadata during pipeline execution; support step execution caching; and more. Both of these frameworks make it straightforward to build and use PyTorch-based pipeline components, and to create and run PyTorch-based workflows. Kubeflow Pipelines The Kubeflow open-source project includes Kubeflow Pipelines (KFP), a platform for building and deploying portable, scalable machine learning (ML) workflows based on Docker containers. The open-source Kubeflow Pipelines backend runs on a Kubernetes cluster, such as GKE, Google’s hosted Kubernetes. You can install the KFP backend ‘standalone’ — via CLI or via the GCP Marketplace— if you don’t need the other parts of Kubeflow. The OSS KFP examples highlighted in this post show several different workflows and include some newly contributed components now in the Kubeflow Pipelines GitHub repo. 
These examples show how to leverage the underlying Kubernetes cluster for distributed training; use a TensorBoard server for monitoring and profiling; and more. Vertex Pipelines Vertex Pipelines is part of Vertex AI, and uses a different backend from open-source KFP. It is automated, scalable, serverless, and cost-effective: you pay only for what you use. Vertex Pipelines is the backbone of the Vertex AI ML Ops story, and makes it easy to build and run ML workflows using any ML framework. Because it is serverless, and has seamless integration with GCP and Vertex AI tools and services, you can focus on building and running your pipelines without dealing with infrastructure or cluster maintenance. Vertex Pipelines automatically logs metadata to track artifacts, lineage, metrics, and execution across your ML workflows, and provides support for enterprise security controls like Cloud IAM, VPC-SC, and CMEK. The example Vertex pipelines highlighted in this post share some underlying PyTorch modules with the OSS KFP example, and include use of the prebuilt Google Cloud Pipeline Components, which make it easy to access Vertex AI services. Vertex Pipelines requires v2 of the KFP SDK. It is now possible to use the KFP v2 ‘compatibility mode’ to run KFP V2 examples on an OSS KFP installation, and we’ll show how to do that as well. PyTorch on Kubeflow Pipelines: PyTorch KFP Components SDK In collaboration across Google and Facebook, we are announcing a number of technical contributions to enable large- scale ML workflows on Kubeflow Pipelines with PyTorch. This includes the PyTorch Kubeflow Pipelines components SDK with features for: - Data loading and preprocessing - Model Training using PyTorch Lightning as training loop - Model profiling and visualizations using the new PyTorch Tensorboard Profiler - Model deployment & Serving using TorchServe + KFServing with canary rollouts, autoscaling, and Prometheus monitoring - Model Interpretability using Captum - Distributed training using the PyTorch job operator for KFP - Hyperparameter tuning using Ax/BoTorch - ML Metadata for Artifact Lineage Tracking - Cloud agnostic artifacts storage component using Minio Computer Vision and NLP workflows are available for: - Open Source Kubeflow Pipelines deployed on any cloud or on-prem - Google Cloud Vertex AI Pipelines for Serverless pipelines solution Figure 1: NLP BERT Workflow on Open Source KFP with PyTorch profiler and Captum insights, (top left) Pipeline View (top right) PyTorch Tensorboard Profiler for the training node, (bottom) Captum model insights for the model prediction Start by setting up a KFP cluster with all the prerequisites, and then follow one of the examples under the pytorch-samples here. Sample notebooks and full pipelines examples are available for the following: - Computer Vision CIFAR10 pipeline, basic notebook, and notebook with Captum Insights - NLP BERT pipeline, and notebook with Captum for model interpretability. - Distributed training sample using the PyTorch job operator - Hyperparameter optimization sample using Ax/Botorch Note: All the samples are expected to run both on-prem and on any cloud, using CPU or GPUs for training and inference. Minio is used as the cloud-agnostic storage solution. A custom TensorBoard image is used for viewing the PyTorch Profiler. PyTorch on Kubeflow Pipelines : BERT NLP example Let’s do a walkthrough of the BERT example notebook. Training the PyTorch NLP model One starts by defining the KFP pipeline with all the tasks to execute. 
The tasks are defined using the component yamls with configurable parameters. All templates are available here. The training component takes as input a PyTorch Lightning script, along with the input data and parameters and returns the model checkpoint, tensorboard profiler traces and the metadata for metrics like confusion matrix and artifacts tracking. confusion_matrix_url = f"minio://{log_bucket}/{confusion_matrix_log_dir}" script_args = f"model_name=bert.pth," \ f"num_samples={num_samples}," \ f"confusion_matrix_url={confusion_matrix_url}" ptl_args = f"max_epochs={max_epochs},profiler=pytorch,gpus=0,accelerator=None" train_task = ( train_op( input_data=prep_task.outputs["output_data"], script_args=script_args, ptl_arguments=ptl_args ).after(prep_task).set_display_name("Training") ) If you are using GPUs for training, set the gpus to value > 0 and use ‘ddp’ as the default accelerator type. You will also need to specify the gpu limit and node selector constraint for the cluster: set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator','nvidia-tesla-p4') For generating traces for the PyTorch Tensorboard profiler, “ profiler=pytorch” is set in script_args. The confusion matrix gets logged as part of the ML metadata in the KFP artifacts store, along with all the inputs and outputs and the detailed logs for pipeline run. You can view these from the pipeline graph and the lineage explorer (as shown in Figure 2 below). Caching is enabled by default, so if you run the same pipeline again with the same inputs, the results will be picked up from the KFP cache. The template_mapping.json config file is used for generating the component yaml files from the templates and setting the script names and docker container with all the code. You can create a similar Docker container for your own pipeline. Debugging using PyTorch Tensorboard Profiler The PyTorch Tensorboard Profiler provides insights into the performance bottlenecks like inefficiency for loading data, underutilization of the GPUs, SM efficiency, and CPU-GPU thrashing, and is very helpful for debugging performance issues. Check out the Profiler 1.9 blog for the latest updates. In the KFP pipeline, the Tensorboard Visualization component handles all the magic of making the traces available to the PyTorch Tensorboard profiler; therefore it is created before starting the training run. The profiler traces are saved in the tensorboard/logs bucket under the pipeline run ID and are available for viewing after the training step completes. You can access TensorBoard from the Visualization component of the pipeline after clicking the “Start Tensorboard” button. Full traces are available from the PyTorch Profiler view in the Tensboard as shown below: Figure 3: PyTorch Profiler Trace view A custom docker container is used for the PyTorch profiler plugin, and you can specify the image name by setting the TENSORBOARD_IMAGE parameter. Model Serving using KFServing with TorchServe PyTorch model serving for running the predictions is done via the KFServing + TorchServe integration. It supports prediction and explanation APIs, canary rollouts with autoscaling, and monitoring using Prometheus and Grafana. For the NLP BERT model, the bert_handler.py defines the TorchServe custom handler with logic for loading the model, running predictions, and doing the pre-processing and post processing. The training component generates the model files as a model-archiver package, and this gets deployed onto TorchServe. 
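The custom handler mentioned above follows TorchServe's standard handler interface. The skeleton below sketches its general shape in Python; the helper names and method bodies are illustrative placeholders, and the real bert_handler.py in the sample repository is more involved.

from ts.torch_handler.base_handler import BaseHandler

class BertHandler(BaseHandler):
    def initialize(self, context):
        # BaseHandler loads the serialized model from the model archive;
        # a real handler would also load the tokenizer here.
        super().initialize(context)

    def preprocess(self, data):
        # Turn the raw request payload into model-ready tensors.
        text = data[0].get("body") or data[0].get("data")
        return self.encode(text)  # hypothetical tokenization helper

    def inference(self, inputs):
        return self.model(inputs)

    def postprocess(self, outputs):
        # Map model outputs to the labels returned to the client.
        return [outputs.argmax(dim=-1).tolist()]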
The minio op is used for making the model-archiver and the TorchServe config properties available to the deployment op. For deploying the model, you simply need to set the KFServing Inference yaml with the relevant values, e.g. for the GPU inference you will pass the model storage location, and the number of GPUs: gpu_count = "1" accelerator = "nvidia-tesla-p4" isvc_gpu_yaml = """ apiVersion: "serving.kubeflow.org/v1beta1" kind: "InferenceService" metadata: name: {} namespace: {} spec: predictor: serviceAccountName: sa pytorch: storageUri: {} resources: requests: cpu: 4 memory: 8Gi limits: cpu: 4 memory: 8Gi nvidia.com/gpu: {} nodeSelector: cloud.google.com/gke-accelerator: {} """.format(deploy, namespace, model_uri, gpu_count, accelerator) deploy_task = ( deploy_op(action="apply", inferenceservice_yaml=isvc_gpu_yaml ).after(minio_mar_upload).set_display_name("Deployer") ) Using Captum for Model InterpretabilityCaptum.ai is the Model Interpretability library for PyTorch. In the NLP example we use the explanation API of KFserving and TorchServe to get the model insights for interpretability. The explain handler defines the IntegratedGradient computation logic which gets called via the explain endpoint and returns a json response with the interpretability output. The results are rendered in the notebook using Captum Insights. from captum.attr import visualization vis_data_records =[] vis_data_records.append(visualization.VisualizationDataRecord( attributions, pred_prob, pred_class, true_class, attr_class, attributions.sum(), tokens, delta)) vis = visualization.visualize_text(vis_data_records) This renders the color-coded visualization for the word importance. Distributed training using PyTorch job operatorThe Kubeflow PyTorch job operator is used for distributed training and it takes as inputs the job spec for the master and worker nodes along with the option to customize other parameters via the pytorch-launcher component. pytorch_job_op = load_component_from_file("../../../components/kubeflow/pytorch-launcher/component.yaml") train_task = pytorch_job_op( name="pytorch-bert", namespace=namespace, master_spec= { "replicas": 1, "imagePullPolicy": "Always", "restartPolicy": "OnFailure", ... "command": ["python3", "bert/agnews_classification_pytorch.py"], "args": [ "--dataset_path", dataset_path, "--checkpoint_dir", checkpoint_dir, "--script_args", f"model_name=bert.pth,num_samples={num_samples}", "--tensorboard_root", tensorboard_root, ... }, worker_spec= { "replicas": 1, "imagePullPolicy": "Always", "restartPolicy": "OnFailure", .... }, delete_after_done=False ) PyTorch on Kubeflow Pipelines : CIFAR10 HPO example Hyperparameter optimization using Ax/BoTorch Ax is the adaptive experimentation platform for PyTorch, and BoTorch is the Bayesian Optimization library. They are used together for Hyperparameter optimization. The CIFAR10-HPO notebook describes the usage for this. We start off by generating the experiment trials with the parameters that we want to optimize using the ax_generate_trials component. 
generate_trails_op = components.load_component_from_file( "yaml/ax_generate_trials_component.yaml" ) parameters = [ {"name": "lr", "type": "range", "bounds": [1e-4, 0.2], "log_scale": True}, {"name": "weight_decay", "type": "range", "bounds": [1e-4, 1e-2]}, {"name": "eps", "type": "range", "bounds": [1e-8, 1e-2]}, ] gen_trials_task = generate_trails_op(total_trials, parameters, 'test-accuracy').after(prep_task).set_display_name("AX Generate Trials") get_keys_task = get_keys_op(gen_trials_task.outputs["trial_parameters"]).after(gen_trials_task).set_display_name("Get Keys of Trials") with dsl.ParallelFor(get_keys_task.outputs["keys"]) as item: get_element_task = get_element_op(gen_trials_task.outputs["trial_parameters"], item).after(get_keys_task).set_display_name("Get Element from key") train_task = ( train_op( trial_id=item, input_data=prep_task.outputs["output_data"], script_args=script_args, model_parameters=get_element_task.outputs["output"], ptl_arguments=ptl_args, results=results_path ).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(get_element_task).set_display_name("Training") ) And finally, the ax_complete_trials component is used for processing the results for the best parameters from the Hyperparameter search. complete_trials_task = complete_trails_op(gen_trials_task.outputs["client"], results_path).add_pvolumes({volume_mount_path: dsl.PipelineVolume(pvc=dist_volume)}).after(train_task).set_display_name("AX Complete Trials") The best parameters can be viewed under Input/Output section of ax_complete trials (as shown in the figure below): PyTorch on Vertex Pipelines: CIFAR10 image classification example The Vertex Pipelines examples in this post also use the KFP SDK, and include use of the Google Cloud Pipeline Components, which support easy access to Vertex AI services. Vertex Pipelines requires v2 of the KFP SDK. So, these examples diverge from the OSS KFP v1-based examples above, though the components share some of the same data processing and training base classes. It is now possible to use the KFP v2 ‘compatibility mode’ to run KFP V2 examples on an OSS KFP installation, and we’ll show how to do that as well. An example PyTorch Vertex Pipelines notebook shows two variants of a pipeline that: do data preprocessing, train a PyTorch CIFAR10 resnet model, convert the model to archive format, build a torchserve serving container, upload the model container configured for Vertex AI custom prediction, and deploy the model serving container to an endpoint so that it can serve prediction requests on Vertex AI. In the example, the torchserve serving container is configured to use the kfserving service envelope, which is compatible with the Vertex AI prediction service. Training the PyTorch image classification model The difference between the two pipeline variants in the notebook is in the training step. One variant does on-step-node single-GPU training— that is, it runs the training job directly on the Vertex pipeline step node. We can specify how the pipeline step instance is configured, to give the node instance the necessary resources. This fragment from the KFP pipeline definition shows that configuration, which specifies to use one Nvidia V100 for the training step in the pipeline: cifar_train_task = ( cifar_train( model_name=model_name, ... 
cifar_dataset=cifar_preproc_task.outputs["cifar_dataset"], ) .set_gpu_limit(1) .set_memory_limit("32G") ) cifar_train_task.add_node_selector_constraint( "cloud.google.com/gke-accelerator", "nvidia-tesla-v100", ) The other example variant in the notebook shows multi-GPU, single-node training via Vertex AI’s support for custom training, using the Vertex AI SDK. From the ‘custom training’ pipeline step, a custom job is defined, passing the URI of the container image for the PyTorch training code: custom_job = aiplatform.CustomContainerTrainingJob( display_name=display_name, container_uri=custom_container_uri) Then the custom training job is run, specifying machine and accelerator types, and number of accelerators: custom_model = custom_job.run( replica_count=1, args=trainer_args, sync=False, machine_type="n1-standard-8", accelerator_type=accelerator_type, accelerator_count=num_gpus ) PyTorch prebuilt training containers are available as well, though for this example we used PyTorch v1.8, which at time of writing is not yet available in the prebuilt set. Defining KFP Pipelines Some steps in the example KFP v2 pipelines are built from Python function-based custom components— these make it easy to develop pipelines interactively, and are defined right in the example notebook— and other steps are defined using a set of prebuilt components that make it easy to interact with Vertex AI and other services— the steps that upload the model, create an endpoint, and deploy the model to the endpoint. The custom components include pipeline steps to create a model archive from the trained PyTorch model and the model file, and to build a torchserve container image using the model archive file and the serving config.properties. The torchserve build step uses Cloud Build to create the container image. These pipeline component definitions can be compiled to .yaml files, as shown in the example notebook. The .yaml component definitions are portable: they can be placed under version control and shared, and used to create pipeline steps for use in other pipeline definitions. The KFP pipeline definition looks like the following, with some detail removed. (See the notebook for the full definition). Some pipeline steps consume as inputs the outputs of other steps. The prebuilt google_cloud_pipeline_components make it straightforward to access Vertex AI services (this example is using v. 0.1.7 of the components). Note that the ModelDeployOp step is configured to serve the trained model on a GPU instance. from kfp import dsl from google_cloud_pipeline_components import aiplatform as gcc_aip @dsl.pipeline( name="pytorch-cifar-customtrain-pipeline", pipeline_root=PIPELINE_ROOT, ) def pytorch_cifar_pipeline( ...input args... ): cifar_config_task = cifar_config(...) cifar_preproc_task = cifar_preproc() # data preprocessing cifar_train_task = cifar_vertex_train(...) # train the model cifar_mar_task = generate_mar_file( # generate the model archive ... cifar_train_task.outputs["cifar_model"], ) build_image_task = build_torchserve_image( # build the serving image mar_model_name, cifar_mar_task.outputs["cifar_mar"], cifar_config_task.outputs['cifar_config'], ... 
) gcc_aip.ModelUploadOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7" model_upload_op = gcc_aip.ModelUploadOp( # upload the model serving container project=project, display_name=model_display_name, serving_container_image_uri=build_image_task.outputs['serving_container_uri'], serving_container_predict_route="/predictions/{}".format(MAR_MODEL_NAME), serving_container_health_route="/ping", serving_container_ports=[PORT] ) gcc_aip.EndpointCreateOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7" endpoint_create_op = gcc_aip.EndpointCreateOp( # create a model endpoint project=project, display_name=model_display_name, ) gcc_aip.ModelDeployOp.component_spec.implementation.container.image = "gcr.io/ml-pipeline/google-cloud-pipeline-components:0.1.7" model_deploy_op = gcc_aip.ModelDeployOp( # deploy the model to the endpoint project=project, endpoint=endpoint_create_op.outputs["endpoint"], model=model_upload_op.outputs["model"], deployed_model_display_name=model_display_name, machine_type="n1-standard-4", accelerator_type='NVIDIA_TESLA_P100', accelerator_count=1 ) Here’s the pipeline graph for one of the Vertex Pipelines examples: The pipeline graph for one of the KFP v2 example pipelines, running on Vertex Pipelines As a pipeline runs, metadata about the run, including its Artifacts, executions, and events, is automatically logged to the Vertex ML Metadata server. The Pipelines Lineage Tracker, part of the UI, uses the logged metadata to render an Artifact-centric view of pipeline runs, showing how Artifacts are connected by step executions. In this view, it’s easy to track where multiple pipeline runs have used the same artifact. (Where a pipeline is able to leverage caching, you will often notice that multiple pipeline runs are able to use the same cached step outputs.) Vertex Pipeline artifact lineage tracking. Using KFP ‘v2 compatibility mode’ to run the pipelines on an OSS KFP installation It is now possible to run the same KFP v2 pipelines in the Vertex example above on an OSS KFP installation. Kubeflow Pipelines SDK v2 compatibility mode lets you use the new pipeline semantics in v2 and gain the benefits of logging your metadata to ML Metadata. Compatibility mode means that you can develop a pipeline on one platform, and run it on the other. Here is the pipeline graph for the same pipeline shown above running on Vertex Pipelines, but running on an OSS KFP installation. If you compare it to the Vertex Pipelines graph in the figure above, you can see that they have the same structure. The example’s README gives more information about how to do the installation, and the example PyTorch Vertex Pipelines notebook includes sections that show how to launch an OSS KFP pipeline run once you’ve done the setup. The pipeline graph for one of the KFP v2 example pipelines, running on an OSS KFP installation. Next steps This post showed some examples of how to build scalable ML workflows using PyTorch, running on both OSS Kubeflow Pipelines and Vertex Pipelines. Kubeflow and Vertex AI make it easy to use PyTorch on GCP, and we have announced some new PyTorch KFP components that make creating PyTorch-based ML workflows even easier. We also showed how the Vertex Pipelines examples, which require v2 of the KFP SDK, can now also be run on an OSS Kubeflow Pipelines installation using the KFP v2 ‘compatibility mode’.Please check out the samples here and here, and let us know what you think! 
You can provide feedback on the PyTorch Forums or file issues on the Kubeflow Pipelines Github repository. Acknowledgements The authors would like to thank the contributions from the following people for making this work possible: Pavel Dournov, Henry Tappen, Yuan Gong, Jagadeesh Jaganathan, Srinath Suresh, Alexey Volkov, Karl Weinmeister, Vaibhav Singh, and the Vertex Pipelines team.
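Once either pipeline has deployed the model, the resulting Vertex AI endpoint can also be exercised directly from Python with the google-cloud-aiplatform SDK. This is a minimal sketch: the project, region, endpoint resource name, and the instance payload format (which depends on the TorchServe handler baked into the serving container) are placeholders to replace with your own values.

from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")

endpoint = aiplatform.Endpoint(
    "projects/your-project/locations/us-central1/endpoints/1234567890"
)

# The instance format must match what the serving container's handler expects.
response = endpoint.predict(instances=[{"data": "<base64-encoded CIFAR10 image>"}])
print(response.predictions)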
https://cloud.google.com/blog/topics/developers-practitioners/scalable-ml-workflows-using-pytorch-kubeflow-pipelines-and-vertex-pipelines
CC-MAIN-2022-33
en
refinedweb
Simply copy and paste the following command line in your terminal to create your first Strapi project. npx create-strapi-app my-project --quickstart Strapi and Laravel are both powerful tools that are making developers' lives easier. Both are different in their domains. We will look at these two technologies and see how they can be used together. It would be nice to talk about these technologies independently before we dive into how we can use Strapi and Laravel. To follow this, here are some requirements: Laravel is a free, open-source PHP web framework intended to develop web applications following the model–view–controller(MVC) architectural pattern. Laravel is used mainly for full-stack web development. You can learn Laravel using Laravel tutorial: The Ultimate Guide 2021. Strapi is based on Node.js( Javascript runtime). It is a headless content management system(CMS). It is 100% Javascript also open-source. First, these two technologies have different approaches: Strapi is concerned mainly with data management and seamless distribution to any frontend application. While Laravel is usually referred to as The Full-Stack Framework, it can route requests and serve them through its Blade template. However, Laravel is an API Backend for frontend applications. This is where I personally like Strapi. With Strapi, you can get a secure REST API and GRAPHQL(v4) out of the box without coding because it has been made easy for you. This article is not a Strapi over Laravel or the other way round. It can never be for me. I want to talk about the database. Both technologies are powerful. Laravel currently supports MySQL, Postgres, SQLite, and SQL Server, and this is configured after installation of your project, the .env that is copied from .env.example is where this is done. In Strapi, you get to choose and configure your database during installation. Now, can they be used together? Yes. Strapi Graphical User Interface for managing content with collections types makes your code less and visually understand your data. There are many more things to explore between these two technologies. So we are going to install and Laravel and Strapi and connect a Strapi to the Laravel project. That is, Strapi houses and manages data. Laravel will use this data. To get started, run this: npx create-strapi-app strapilara --quickstart We will use strapilara as our folder's name where the Strapi will be installed. The --``quickstart flag automatically uses the SQLite database option(That's is fine for this use case). Navigate to move to strapilara on a successful installation and run: yarn develop The command will build the project, and automatically launch it at, where you can need to register yourself as a super admin. Next, we will add data by creating collection types. Let’s say we want to build a blog. We are going to have these as the schema: author's name, blog title, blog post, date. These are the field we are going to add to the Blog collection that we will create. On the admin page, Click on Content-Types Builder → Create a new collection type It will pop up this: Input Blog as the display name Next, add your fields. Click on Text, Input Title as the the Name and click on Add another field, and repeat this step for author Now the blog post has to be Rich Text Finally, click on the Date field Then input date as the Name and select the Type to be date. Click Finish button. All the fields that we need for our blog schema has been created. It is time save. Hitting on Save will restart the server. 
We now have our collection type Blogs(Strapi uses pluralize) and we can now add new contents. This is how Strapi folder structure looks like: All collections created are added in the api folder. So I will go ahead and two blog posts, save and publish them. For security reasons, this blog won't be available to the public. We are going to fix that by making it publicly available. That is, we want the public users to be able to view. On your admin, click on Settings→Roles→Public Check the following and Save: Here we want the public to able to find an item, find all items and even insert item(s). Now let us set up our Laravel Project that will communicate our Strapi instance soonest. I will use composer to install mine and run this in a different folder other than your Strapi instance. composer create-project laravel/laravel lara Where lara is the project folder’s name. After this is done, change directory cd lara then: php artisan serve This will serve the application at. This is how Laravel folder structure looks like: Now to interact with Strapi through Laravel, we need to install this package laravel-strapi in our application. So in the root of our application, stop the Laravel running and run: composer require dbfx/laravel-strapi This will install the package. In the .env file, add this: STRAPI_URL= STRAPI_CACHE_TIME=3600 Add config to your project by creating a strapi.php file inside the config folder and copy and paste this into it. //php <?php return [ 'url' => env('STRAPI_URL'), 'cacheTime' => env('STRAPI_CACHE_TIME', 3600), ]; This file is essential as it points to your URL and cache time. Remember that we defined this in the .env file. I will go ahead inside the routes folder, in the web.php file, and do the following: Paste this at the top of the file: //php use Dbfx\LaravelStrapi\LaravelStrapi; Then below the home route that displays the welcome.blade.php file in the resources→views, we created another route known as /test. //php Route::get('/test', function () { $strapi = new LaravelStrapi(); return $blogs = $strapi->collection('blogs'); }); Notice how we created a variable that instantiated the class LaravelStrapi? Now, if we run in our browser, we will get this: This is the whole collection of the blog. We can also get an entry of a collection like so: //php Route::get('/test/{id}', function ($id) { $strapi = new LaravelStrapi(); return $blogs = $strapi->entry('blogs', $id); }); Now, if we run in our browser, we get just a single entry: What we did is that we made the id that is unique to every post dynamic in the route. Changing the to will fetch to the second post and so forth. Laravel is great at web development, and we could do so much more with it. Serving data from Strapi makes sense as you get to easily and visually manage your data. What we have done so far is get the response from the Strapi and displayed it in our Laraval project. Now, what’s more? We could go ahead and integrate this in an actual view. There are some things one can do with these two techs aside from a blog schema we are using. I will go ahead and make a controller for the blog. You can stop the project from serving and run this: php artisan make:controller BlogController After this, you can serve your project again using: php artisan serve The make controller command that we just ran creates a BlogController.php file inside the app→Http→Controllers folder. 
This file has boilerplate code like so:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class BlogController extends Controller
{
    //
}

Now, inside BlogController.php, I will create a function that I can call along with the controller:

public function blog()
{
    return view('blog');
}

This does nothing other than return a view that we will soon create inside the resources→views folder. Let us create a route inside routes→web.php:

Route::get('/blog', [BlogController::class, 'blog']);

Let us go back to the BlogController to use and instantiate the LaravelStrapi package:

<?php

namespace App\Http\Controllers;

use Dbfx\LaravelStrapi\LaravelStrapi;
use Illuminate\Http\Request;

class BlogController extends Controller
{
    public function blog()
    {
        $strapi = new LaravelStrapi();
        $blogs = $strapi->collection('blogs');
        return view('blog', compact(['blogs']));
    }
}

Inside the resources→views folder, create a file named blog.blade.php, copy this code, and paste it there. Go to the /blog route and you should see the blog entries rendered in the view. You can take it further by styling it to your taste. We have made it to the end of this tutorial. We have seen how to connect a Laravel application to Strapi using the laravel-strapi package, and how to set up both Strapi and Laravel projects. They are straightforward to understand because a lot of the work has been lifted away from us, so we were able to focus on the essential parts of the project without worrying about security issues. Precious Luke likes to explore new technologies. When he's not doing technical writing, you can find him making YouTube videos or learning more about backend infrastructure, with a keen interest in Node.js.
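Because the Blogs collection was made publicly readable earlier, the same content can also be fetched straight from Strapi's REST API by any client, not only Laravel. Here is a minimal sketch in Python; the host and port are Strapi's local defaults, and the field names follow the collection defined above, so adjust them if your setup differs.

import requests

# Strapi exposes each collection type at /<pluralized-name>, here /blogs.
BASE_URL = "http://localhost:1337"

response = requests.get(f"{BASE_URL}/blogs")  # uses the public "find" permission
response.raise_for_status()

for entry in response.json():
    # Field names match the collection's schema (Title, author, post, date).
    print(entry.get("Title"), "by", entry.get("author"))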
https://strapi.io/blog/strapi-and-laravel
CC-MAIN-2022-33
en
refinedweb
, What’s CRIU? Snapshotting of virtual machines is used on a daily basis now. Red Hat Enterprise Linux 7.4 beta comes with Checkpoint/Restore In Userspace (CRIU,) version 2.12 which allows snapshot/restore of userland processes, so let’s have a look at that. Did you ever start a process on a remote system, just to remember seconds later that the process will run for a long time, and will stop in an unpleasant state when you close the remote connection? Of course, having run it in screen/tmux, or with nohup would have helped if you had known that before starting! CRIU has the potential to help us in this situation with snapshots of long running computing jobs (so you can recover that state in case the system crashes), moving processes to other systems and more. There were approaches to process migration on Linux in the past, for example: Berkeley Lab Checkpoint/Restart (BLCR): aimed at HPC workloads, required an extra kernel module which was not upstream, required application code to be prepared for checkpointing, the sockets used by the application (i.e. TCP) get closed on process restore and the project looks inactive since some years. The HTCondor framework supports process checkpoint/migration: it’s aimed at balancing compute workloads over farms of compute nodes and no source code change is required for snapshot/restore. This project seems to be active. Snapshot/Restore of a process We will use a Red Hat Enterprise Linux 7.4 beta (available on the Red Hat Customer Portal) system to illustrate. Here we install CRIU, which is part of the normal Red Hat Enterprise Linux repo, and perform a check: [root@rhel7u4 ~]# yum install criu [root@rhel7u4 ~]# criu check Looks good. [root@rhel7u4 ~]# Let’s look at the ‘long running backup script’ situation mentioned before. We logged onto a remote system and then executed a command which is not running in the background but producing output in our terminal, for example, a backup script. Just after starting we notice that closing the terminal will terminate the script. In our example, we will now create a new directory, store a simple script there which produces output, start the script, and then use CRIU to ‘move’ the script into a screen session. Let’s first create and run the script. [root@rhel7u4 ~]# vi /tmp/criutest.sh [root@rhel7u4 ~]# cat /tmp/criutest.sh #!/bin/bash for i in {0..2000}; do echo foobar, step $i sleep 1 done [root@rhel7u4 ~]# chmod +x /tmp/criutest.sh [root@rhel7u4 ~]# /tmp/criutest.sh foobar, step 0 [..] At this point we log onto the system from a different terminal, ensure that screen is installed, start a screen session and perform further commands from the screen session. Next, we find out the PID of our script and execute ‘criu dump’ to initiate the snapshot. As we use no additional options, this will remove the original process. File dumplog.txt will contain details from the dump procedure. Using ‘criu restore’ inside the screen session we will then continue the process - having input/output now directed to the screen session. [root@rhel7u4 ~]# yum install screen [...] [root@rhel7u4 ~]# mkdir /tmp/criu && cd /tmp/criu [root@rhel7u4 criu]# screen -S criu [screen starts] [root@rhel7u4 criu]# PID=$(pgrep criutest.sh) [root@rhel7u4 criu]# criu dump -o dumplog.txt -vvvv -t $PID --shell-job && echo OK OK [root@rhel7u4 criu]# criu restore -o restorelog.txt -vvvv --shell-job Foobar, step 352 [..] 
When executing ‘criu dump’, we instructed to produce an output logfile ‘-o dumplog.txt’, be extra verbose ‘-vvvv’, which PID we want to snapshot (the child PIDs below were also snapshot) and that our process uses the terminal and thus needs to be considered differently ‘--shell-job’. Modifying the process Using gdb, processes can be inspected and modified. The snapshot is stored in files, in our example in directory /tmp/criu. Modifying these files is another way to influence the process. Let’s kill the example process which we moved into the screen session and investigate the files: [root@rhel7u4 criu]# killall criutest.sh [root@rhel7u4 criu]# ls core-2056.img ids-2056.img pages-1.img stats-dump core-2491.img ids-2491.img pages-2.img stats-restore dumplog.txt inventory.img pstree.img tty-info.img fdinfo-2.img mm-2056.img reg-files.img tty.img fdinfo-3.img mm-2491.img restorelog.txt fs-2056.img pagemap-2056.img sigacts-2056.img fs-2491.img pagemap-2491.img sigacts-2491.img The process state is stored in these files. pagemap* files contain details regarding the virtual regions, pages* files contain the process memory. As this is just a test, let’s try a simple modification of the process: [root@rhel7u4 criu]# cp pages-1.img pages-1.img.orig [root@rhel7u4 criu]# sed -e 's,foobar,barfoo,g' pages-1.img.orig >pages-1.img [root@rhel7u4 criu]# criu restore -o restorelog.txt -vvvv --shell-job barfoo, step 352 barfoo, step 353 [..] After restoring the modified process, ‘barfoo’ is printed instead of ‘foobar’. Live migration of a process With the commands seen so far, we can already snapshot a process. After making the snapshot files available on a different Linux system, for example using NFS or rsync, we can then restore the process on that system. CRIU already implements the ‘page-server’-mode, which sets up a listener, waits for a connection from a ‘criu dump’ over the network, can then receive the process memory and finally runs the process on the destination system. The effective process downtime depends mostly on: the amount of memory which is used how quickly the memory changes and the network connectivity between both involved systems. How long is the effective downtime of a process which is getting migrated? I wrote a small script which writes a timestamp into a logfile in 200ms intervals. This process was then migrated to a further Red Hat Enterprise Linux 7.4 system, a second KVM guest on the same hypervisor. Latency changes of up to 800ms were seen while the process was migrated. What can we do with this, what are the limits? Red Hat Enterprise Linux 7.4 is now in beta. CRIU has Technology Preview status at the moment and is not intended to be used on production systems. While playing with this technology, one quickly understands that it’s not yet in production state like KVM live migration. What are the most important restrictions and characteristics? The restored process has the same PID as the original process - even when the process gets restored on a different system. This prevented some of my attempts to migrate processes - a process with that PID already existed on the destination. Multiple namespaces can help around this limitation. Migrations of processes using unknown socket types fail (‘ping’ using ICMP, ‘hping3’ using RAW socket mode) Shared memory areas are not bound to a single process and are not snapshot by CRIU. IPs remain untouched by CRIU. 
As of today, snapshotting the original process and resuming it (possibly on a different system) are two separate steps, to be executed for example by a script. If resuming fails, the script has to unfreeze the original process. As of today, this procedure seems more error prone than, for example, KVM live migration. CRIU does a plain snapshot/restore of the process: there is no rundown of the application, no graceful disconnection of network connections to clients, and no closing of files. This is not always a downside: when a system needs to go down for maintenance, one could consider using CRIU instead of the time-consuming process of shutdown/startup. Further points are mentioned here:
How can this help us in the future?
- Debugging: instead of just taking an application core and ending the process, we can also snapshot it for debugging on a different system, while keeping it running.
- Long running computation jobs: snapshotting them in time intervals allows us to later restore the computation, for example after a system crash.
- System upgrades: now there is kpatch to patch live systems. A further idea is to snapshot a process, kexec into a new kernel, and then restore the process.
- Assume we have a process which at the beginning takes data via the network and then does a lot of computations. We might not trust the memory of the system. Why not snapshot the process and have it finish the calculation on multiple systems, comparing the results?
Further usage scenarios:
Where do I get more information?
- An article with more technical details from Adrian, our CRIU package maintainer
- Container Live Migration using runC and CRIU, also from Adrian
- Our kbase around CRIU
Virtualization, operations, and performance tuning are among the recurring topics of this blog. A small scripting sketch of the dump/restore flow follows.
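This is one way such a wrapper script could look in Python; the image directory, the --shell-job flag, and the error handling are assumptions to adapt rather than a hardened tool, and criu itself must be run with sufficient privileges.

import subprocess

def snapshot(pid, image_dir):
    # criu dump freezes the process tree and writes its state into image_dir.
    # Add "--leave-running" to keep the original process alive after the dump.
    subprocess.run(
        ["criu", "dump", "-t", str(pid), "-D", image_dir, "--shell-job"],
        check=True,
    )

def restore(image_dir):
    # criu restore recreates the process tree from the images in image_dir.
    subprocess.run(
        ["criu", "restore", "-D", image_dir, "--shell-job"],
        check=True,
    )

snapshot(1234, "/tmp/criu")   # replace 1234 with the target PID
restore("/tmp/criu")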
https://www.redhat.com/zh/blog/how-can-process-snapshotrestore-help-save-your-day
CC-MAIN-2022-33
en
refinedweb
There’s a lot of new material in Python Latest version and much of it is pretty simple. For example, str, bytes and bytearray now have an isascii() function that returns ‘true’ if they only include ASCII characters. THE BELAMY Sign up for your weekly dose of what's up in emerging technology. In this article, we list down 7 essential features of the latest version which Python aspirants must get updated with. Postponed Evaluation Of Annotations The appearance of type hints in Python revealed two glaring usability concerns with the functionality of annotations which were added in the latest Python 3.7.2 version. - annotations could only handle names which were previously available in the current scope, in other words, they didn’t support foremost references of any kind, and - annotating source code had opposing impacts on startup time of Python programs. Both of these problems are fixed by delaying the evaluation of annotations. Instead of compiling code which executes expressions in annotations at their rendering time, the compiler stores the annotation in a string form comparable to the AST of the expression in question. If needed, commentaries can be resolved at runtime using typing.get_type_hints(). In the common case where this is not needed, the annotations are inexpensive to store and make startup time quicker. Usability-wise, annotations now carry forward references, making the subsequent syntax valid: Code: class D: @classmethod def from_string(cls, source: str) -> D: ... def validate_b(self, obj: C) -> bool: … class C: … Since this change breaks adaptability, the new behaviour requires to be enabled on a per-module basis in Python 3.7 using a __future__ import: Forced UTF-8 Mode On programs other than Windows, there has always been a difficulty with the locale supporting UTF-8 or ASCII (7-bit) characters. The regular C locale is typically ASCII. This influences how text files are read (for instance). UTF-8 comprises all 8-bit ASCII characters and 2-4 length characters, as well. This change applies the locale to be one of the promoted UTF-8 locales. users can change what happens by introducing a new environment variable, PYTHONCOERCELOCALE. To practice ASCII requires both this setting and PYTHONUTF8 to be disabled. Setting PYTHONUTF8 to 1 force the Python interpreter to manage UTF-8, but if this isn’t described, it defaults to the locale setting; only if PYTHONCOERCELOCALE is disabled does it use ASCII mode. To explain technically, UTF-8 is not the default on Unix systems unless the user explicitly disables these two settings. Built-in breakpoint() With previous versions of Python, user’s could get a breakpoint by using the pdb debugger and this code, which breaks after function1() is described and before function2(): function1() import pdb; pdb.set_trace() function2() The creator of PEP 553 found this a bit fiddly, and two statements on a line bothered some Python developers. So now there’s a unique breakpoint() function: function1() breakpoint() function2() This works in combination with the Python environment variable PYTHONBREAKPOINT. When Set to 0 the breakpoint does nothing. Address it a function and module value, and it will import the module and call it. The users can modify this programmatically at runtime; that’s very handy for having conditional breakpoints. Data Classes If users are storing data in classes, this latest feature clarifies things by generating boilerplate code for users.A data class is a normal Python class with the extension of a @dataclass decorator. 
It makes handling of Type hints, a new feature since Python 3.5 where users explain variables to provide a reference as to the type of variable. Users have to use this for domains in a data class.In the example below, we can see the: str after the variable name. That’s a type reference.If users want their data class to be permanent, just add (frozen=true) to the decorator. Here’s a simple example explicating an immutable data class. There’s not a lot of code in the class declaration; all the normal stuff is generated for users. The last two lines use the class and print out the occurrences: Code: from dataclasses import dataclass @dataclass(frozen=True) class ReadOnlyStrings(object): field_name : str field_address1 : str test = {ReadOnlyStrings(1,’Damodar’), ReadOnlyStrings(2,’Bhagat’)} print(test) Output: {ReadOnlyStrings(field_name=2, field_address1=’Bhagat’), ReadOnlyStrings(field_name=1, field_address1=’Damodar’)} Core Support for typing module and Generic Types The previous Python Version was composed in such a way that it would not include any changes to the core CPython interpreter. Now type hints and the typing module are widely used by the community, so this restriction is eliminated. The PEP introduces two special methods __class_getitem__() and mro_entries, these classifications are now used by most classes and special constructs in typing. As a consequence, the speed of different operations with types extended up to 7 times, the nonexclusive types can be used without metaclass conflicts, and various long-standing bugs in typing module are fixed. Development Runtime Mode The -X parameter of CPython which is the standard implementation of Python which lets the users set multiple implementation specifications, and has been extended to incorporate the word ‘dev.’ This activates new runtime checks such as debug hooks on memory allocators, lets the faulthandler module dump the Python traceback, and allows asyncio debug mode a mode which attaches extra logging and warnings but reduces asyncio performance. users program can identify if it’s running dev mode by checking sys.flags.dev_mode. Code: python -X dev Python 3.7.0 (default, Jan 21 2019, 16:40:07) [GCC 7.3.0] on (any operating system) Type "help", "copyright", "credits" or "license" for additional information. >> import sys >> print (sys.flags.dev_mode) True Hash-Based .pyc Files Python has traditionally terminated the up-to-dateness of bytecode cache files (i.e., .pyc files) by connecting the source metadata with authorisation metadata saved in the cache file header when it was produced. While active, this invalidation method has its disadvantages. When filesystem timestamps are too inferior, Python can miss source updates, leading to user confusion. Additionally, having a timestamp in the cache file is uncertain for build reproducibility and content-based build regularities. The new Python feature extends the pyc format to concede the hash of the source file to be accepted for invalidation instead of the source timestamp. Such .pyc files are described as hash-based. By default, Python still practices timestamp-based invalidation and does not generate hash-based .pyc files at runtime. Hash-based .pyc files may be created with py_compile or compileall. Hash-based .pyc files appear in two variants: checked and unchecked. Python validates checked hash-based .pyc files versus the identical source files at runtime but doesn’t do so for unchecked hash-based pycs. 
Unchecked hash-based .pyc files are a useful performance optimization for environments where a system external to Python is responsible for keeping .pyc files up to date. A short example of generating both variants follows.
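Here is a minimal sketch using the standard-library py_compile module on Python 3.7 or later; the source file name is only an illustration.

import py_compile

# Checked hash-based .pyc: the interpreter re-hashes the source on import
# and recompiles if the stored hash no longer matches.
py_compile.compile(
    "myapp.py",
    invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
)

# Unchecked hash-based .pyc: the hash is stored but never verified at import
# time, leaving cache freshness to an external build system.
py_compile.compile(
    "myapp.py",
    invalidation_mode=py_compile.PycInvalidationMode.UNCHECKED_HASH,
)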
https://analyticsindiamag.com/whats-new-in-python-3-7-2-look-at-7-features-that-you-should-know/
CC-MAIN-2022-33
en
refinedweb
Guys, I need help. I'm stuck on the Grand Finale lesson. Here's the code I'm using:
from datetime import datetime
now = datetime.now()
print '%s/%s/%s' % (now.month, now.day, now.year)
print '%s:%s:%s' % (now.hour, now.minute, now.second)
but it says: "Your printed date and time do not seem to be in the right format: mm/dd/yyyy hh:mm:ss". Where is the problem?
https://discuss.codecademy.com/t/grand-finale-i-need-help/41382
CC-MAIN-2022-33
en
refinedweb
Hi, I am having trouble trying to extend a non-abstract (though non-final) java class. I get the following exception from JRuby: Exception in thread “main” java.lang.RuntimeException: java_object returned null for NewEdge at org.jruby.javasupport.JavaUtil.convertRubyToJava(JavaUtil.java:825) at org.jruby.gen.InterfaceImpl343660132.create(org/jruby/gen/InterfaceImpl343660132.gen:13) … The object has a constructor that takes two arguments. Due to the bug/feature that the number of arguments in the ruby class must be the same as the number of arguments in the java class I am also overriding self.new. Note that the types of asset1 and asset2 are java types of the same type expected by the java ctor. So in Java I have a class called Edge and in Ruby: class NewEdge < Pairs::Edge def initialize(asset1,asset2) @asset1, @asset2 = asset1, asset2 @passed = true end def self.new(asset1, asset2, ranges, mincor, maxadf) obj = self.allocate obj.send :initialize, asset1, asset2 obj.instance_variable_set(:@ranges, ranges) obj.instance_variable_set(:@mincor, mincor) obj.instance_variable_set(:@maxadf, maxadf) puts "created object", obj obj end ... end NewEdge.new(…) The new object is being passed back to java, at which point the above exception occurs. Why would convertRubyToJava ever fail? On another note I do hope that the ctor issue is resolved as well at some point. Writing self.new factories are ugly. Thanks Jonathan
https://www.ruby-forum.com/t/exception-thrown-from-jruby-when-attempting-to-extend-java-class/184387
CC-MAIN-2022-33
en
refinedweb
MicroSD TinyShield Tutorial This TinyShield allows you to add a lot more memory to your TinyDuino projects. The Adapter TinyShield works directly with the Arduino SD Card libraries that are already installed in the Arduino IDE. Learn more about the TinyDuino Platform Description and including 5V. Note: This does not include the microSD card (sold separately). You can get a compatible microSD card here. Technical DetailsMicroSD Specs - Uses standard Arduino SD Card Library - Supports standard microSD cards and SDHC cards - Voltage: 3.0V - 5.5V - Current: 100mA or more during SD card writes, depends on the microSD card being used. Because of this high current, the TinyDuino processor cannot be used with a coin cell. SPI Interface used -. - 20mm x 20mm (.787 inches x .787 inches) Note: microSD card overhangs the edge by approx 3mm for easy removal - Max Height (from lower bottom TinyShield Connector to upper top TinyShield Connector): 5.11mm (0.201 inches) - Weight: 1.36 gram (.05 ounces) To see what other TinyShields this will work with or conflict with, check out the TinyShield Compatibility Matrix Notes - This does not include the microSD card (sold separately). You can get a compatible microSD card here. - A microSD card consumes a large amount of power when it is being accessed(100-200mA), so your system needs to have a power supply or battery that is large enough to power this. The coin cell option on the TinyDuino is not sufficient to power this. We recommend a battery if you will be using the project wirelessly. Materials Hardware - Micro USB Cable - TinyDuino and USB TinyShield OR - MicroSD Card TinyShield - MicroSD Card and Adapter - Optional: Battery (You will need a power source if you intend to take your boards somewhere without a microUSB attachment) Software - Arduino IDE - TinyCircuits adapted SD Library CardInfo example - (optional) TinyCircuits adapted SD Library Data Logger example - SD Card Formatting Utility Hardware Assembly Connect your processor board of choice to the SD Card TinyShield using the tan 32-pin connector. Connect your TinyDuino stack to your computer using the microUSB cable. Make sure the power switch on your processor board is flipped to ON. Software Setup Fortunately, the SD card TinyShield is compatible with the Arduino SD card library. So instead of having to download an external library, all you need to do in order to use the SD library utilities in your future programs is to include the library at the top of your program: #include <SD.h> You will also need to make sure that the chipSelect variable at the beginning of every program is set to 10 instead of 4 in order to be compatible with the wiring on the SD Card TinyShield: const int chipSelect = 10; Formatting the microSD Card The Arduino SD Card Library supports cards formatted with FAT16 and FAT32 filesystems. For best results, we recommend formatting your microSD card before use by using the official SD card formatting utility as supplied by the SD Card Association: Testing it out Plug in the formatted microSD card into the connector. Open the Arduino IDE, and go to the SD CardInfo sketch. Select File -> Examples -> SD -> CardInfo. If you're using the TinyDuino as your processor board, all you will have to do is change the chipSelect = 10 to make this example program work. 
If you are using a TinyZero or TinyScreen+, you will have to change every instance of SerialUSB, or you can download this zip file we edited to work for both the TinyDuino and SAMD processor boards (TinyZero and TinyScreen+): Now click the Upload button on the Arduino IDE to build the sketch and upload it to the TinyDuino stack. Open up the Serial Monitor by navigating to it under the Tools tab, or selecting the magnifying glass icon in the top right of the IDE. Type something in and press "Send". You should receive some data back on the card and a list of files: If this example program is having a problem recognizing the card, make sure chipSelect = 10. Try to reformat your SD card using the official SD Card Utility mentioned above. Using the microSD Card There are a number of examples included in the Arduino IDE that will help you get started using the microSD card. One of the best basic examples to get you started is the Datalogger example (in the Arduino ID under File...Examples...SD...Datalogger). This example will read several ADC values and store them to a file on your microSD card. Just be sure to remember to set the ChipSelect variable at the top of the file to a "10". Before uploading the sketch, the chipSelect variable must be assigned a value of 10. Upload the sketch to the TinyDuino and open the Serial Monitor. Type something into the monitor and press Enter or Send. Code /* SD modified 24 May 2019 by Laveréna Wienclaw for TinyCircuits */ //, and TinyCircuits SD shields and modules: pin 10 // Sparkfun SD shield: pin 8 // MKRZero SD: SDCARD_SS_PIN const int chipSelect = 10; // Make program compatibile for all TinyCircuits processor boards #if defined(ARDUINO_ARCH_SAMD) #define SerialMonitor SerialUSB #else #define SerialMonitor Serial #endif void setup() { // Open serial communications and wait for port to open: SerialMonitor.begin(9600); while (!SerialMonitor) { ; // wait for serial port to connect. Needed for native USB port only } SerialMonitor.print("\nInitializing SD card..."); // we'll use the initialization code from the utility libraries // since we're just testing if the card is working! if (!card.init(SPI_HALF_SPEED, chipSelect)) { SerialMonitor.println("initialization failed. 
Things to check:"); SerialMonitor.println("* is a card inserted?"); SerialMonitor.println("* is your wiring correct?"); SerialMonitor.println("* did you change the chipSelect pin to match your shield or module?"); while (1); } else { SerialMonitor.println("Wiring is correct and a card is present."); } // print the type of card SerialMonitor.println(); SerialMonitor.print("Card type: "); switch (card.type()) { case SD_CARD_TYPE_SD1: SerialMonitor.println("SD1"); break; case SD_CARD_TYPE_SD2: SerialMonitor.println("SD2"); break; case SD_CARD_TYPE_SDHC: SerialMonitor.println("SDHC"); break; default: SerialMonitor.println("Unknown"); } // Now we will try to open the 'volume'/'partition' - it should be FAT16 or FAT32 if (!volume.init(card)) { SerialMonitor.println("Could not find FAT16/FAT32 partition.\nMake sure you've formatted the card"); while (1); } SerialMonitor.print("Clusters: "); SerialMonitor.println(volume.clusterCount()); SerialMonitor.print("Blocks x Cluster: "); SerialMonitor.println(volume.blocksPerCluster()); SerialMonitor.print("Total Blocks: "); SerialMonitor.println(volume.blocksPerCluster() * volume.clusterCount()); SerialMonitor.println(); // print the type and size of the first FAT-type volume uint32_t volumesize; SerialMonitor.print("Volume type is: FAT"); SerialMonitor.println(volume.fatType(), DEC); volumesize = volume.blocksPerCluster(); // clusters are collections of blocks volumesize *= volume.clusterCount(); // we'll have a lot of clusters volumesize /= 2; // SD card blocks are always 512 bytes (2 blocks are 1KB) SerialMonitor.print("Volume size (Kb): "); SerialMonitor.println(volumesize); SerialMonitor.print("Volume size (Mb): "); volumesize /= 1024; SerialMonitor.println(volumesize); SerialMonitor.print("Volume size (Gb): "); SerialMonitor.println((float)volumesize / 1024.0); SerialMonitor.println("\nFiles found on the card (name, date and size in bytes): "); root.openRoot(volume); // list all files in the card with date and size root.ls(LS_R | LS_DATE | LS_SIZE); } void loop(void) { } This TinyShield adds a significant amount of memory to the TinyDuino (when accompanied by a MicroSD card) and can be used in projects that need to store or interpret much more data. If you'd like to look further into the features of the SD Arduino Library, check out the SD Library Reference Page. Downloads If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it. Thanks for making with us!
https://learn.tinycircuits.com/Memory/MicroSD_TinyShield_Tutorial/
CC-MAIN-2022-33
en
refinedweb
apache_beam.typehints.typehints module

Syntax & semantics for type-hinting custom functions/PTransforms in the SDK.

This module defines type-hinting objects and the corresponding syntax for type-hinting function arguments, function return types, or PTransform objects themselves. TypeHints defined in the module can be used to implement either static or run-time type-checking in regular Python code.

Type-hints are defined by 'indexing' a type-parameter into a defined CompositeTypeHint instance: - 'List[int]'.

Valid type-hints are partitioned into two categories: simple and composite. Simple type hints are type hints based on a subset of Python primitive types: int, bool, float, str, object, None, and bytes. No other primitive types are allowed.

Composite type-hints are reserved for hinting the types of container-like Python objects such as 'list'. Composite type-hints can be parameterized by an inner simple or composite type-hint, using the 'indexing' syntax. In order to avoid conflicting with the namespace of the built-in container types, when specifying this category of type-hints, the first letter should be capitalized. The following composite type-hints are permitted. NOTE: 'T' can be any of the type-hints listed or a simple Python type:

- Any
- Union[T, T, T]
- Optional[T]
- Tuple[T, T]
- Tuple[T, ...]
- List[T]
- KV[T, T]
- Dict[T, T]
- Set[T]
- Iterable[T]
- Iterator[T]
- Generator[T]

Type-hints can be nested, allowing one to define type-hints for complex types: - 'List[Tuple[int, int, str]]'.

In addition, type-hints can be used to implement run-time type-checking via the 'type_check' method on each TypeConstraint.

class apache_beam.typehints.typehints.TypeVariable(name)

Bases: apache_beam.typehints.typehints.AnyTypeConstraint
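As a brief illustration of the indexing syntax and the run-time type_check method described above (a sketch, not part of the generated reference):

from apache_beam.typehints import typehints

# Build a composite hint with the 'indexing' syntax: a list of (int, str) pairs.
hint = typehints.List[typehints.Tuple[int, str]]

# Run-time checking: this call returns silently because the instance conforms.
hint.type_check([(1, 'a'), (2, 'b')])

# A non-conforming instance raises an error (a TypeError subclass) describing
# which inner element violated the constraint, e.g.:
# hint.type_check([(1, 2)])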
https://beam.apache.org/releases/pydoc/2.21.0/apache_beam.typehints.typehints.html
CC-MAIN-2022-33
en
refinedweb
AWS News Blog Extending AWS CloudFormation with AWS Lambda Powered Macros Today I’m really excited to show you a powerful new feature of AWS CloudFormation called Macros. CloudFormation Macros allow developers to extend the native syntax of CloudFormation templates by calling out to AWS Lambda powered transformations. This is the same technology that powers the popular Serverless Application Model functionality but the transforms run in your own accounts, on your own lambda functions, and they’re completely customizable. CloudFormation, if you’re new to AWS, is an absolutely essential tool for modeling and defining your infrastructure as code (YAML or JSON). It is a core building block for all of AWS and many of our services depend on it. There are two major steps for using macros. First, we need to define a macro, which of course, we do with a CloudFormation template. Second, to use the created macro in our template we need to add it as a transform for the entire template or call it directly. Throughout this post, I use the term macro and transform somewhat interchangeably. Ready to see how this works? Creating a CloudFormation Macro Creating a macro has two components: a definition and an implementation. To create the definition of a macro we create a CloudFormation resource of a type AWS::CloudFormation::Macro, that outlines which Lambda function to use and what the macro should be called. Type: "AWS::CloudFormation::Macro" Properties: Description: String FunctionName: String LogGroupName: String LogRoleARN: String Name: String The Name of the macro must be unique throughout the region and the Lambda function referenced by FunctionName must be in the same region the macro is being created in. When you execute the macro template, it will make that macro available for other templates to use. The implementation of the macro is fulfilled by a Lambda function. Macros can be in their own templates or grouped with others, but you won’t be able to use a macro in the same template you’re registering it in. The Lambda function receives a JSON payload that looks like something like this: { "region": "us-east-1", "accountId": "$ACCOUNT_ID", "fragment": { ... }, "transformId": "$TRANSFORM_ID", "params": { ... }, "requestId": "$REQUEST_ID", "templateParameterValues": { ... } } The fragment portion of the payload contains either the entire template or the relevant fragments of the template – depending on how the transform is invoked from the calling template. The fragment will always be in JSON, even if the template is in YAML. The Lambda function is expected to return a simple JSON response: { "requestId": "$REQUEST_ID", "status": "success", "fragment": { ... } } The requestId needs to be the same as the one received in the input payload, and if status contains any value other than success (case-insensitive) then the changeset will fail to create. Now, fragment must contain the valid CloudFormation JSON of the transformed template. Even if your function performed no action it would still need to return the fragment for it to be included in the final template. Using CloudFormation Macros To use the macro we simply call out to Fn::Transform with the required parameters. If we want to have a macro parse the whole template we can include it in our list of transforms in the template the same way we would with SAM: Transform: [Echo]. When we go to execute this template the transforms will be collected into a changeset, by calling out to each macro’s specified function and returning the final template. 
Let’s imagine we have a dummy Lambda function called EchoFunction, it just logs the data passed into it and returns the fragments unchanged. We define the macro as a normal CloudFormation resource, like this: EchoMacro: Type: "AWS::CloudFormation::Macro" Properties: FunctionName: arn:aws:lambda:us-east-1:1234567:function:EchoFunction Name: EchoMacro The code for the lambda function could be as simple as this: def lambda_handler(event, context): print(event) return { "requestId": event['requestId'], "status": "success", "fragment": event["fragment"] } Then, after deploying this function and executing the macro template, we can invoke the macro in a transform at the top level of any other template like this: AWSTemplateFormatVersion: 2010-09-09 Transform: [EchoMacro, AWS::Serverless-2016-10-31] Resources: FancyTable: Type: AWS::Serverless::SimpleTable The CloudFormation service creates a changeset for the template by first calling the Echo macro we defined and then the AWS::Serverless transform. It will execute the macros listed in the transform in the order they’re listed. We could also invoke the macro using the Fn::Transform intrinsic function which allows us to pass in additional parameters. For example: AWSTemplateFormatVersion: 2010-09-09 Resources: MyS3Bucket: Type: 'AWS::S3::Bucket' Fn::Transform: Name: EchoMacro Parameters: Key: Value The inline transform will have access to all of its sibling nodes and all of its children nodes. Transforms are processed from deepest to shallowest which means top-level transforms are executed last. Since I know most of you are going to ask: no you cannot include macros within macros – but nice try. When you go to execute the CloudFormation template it would simply ask you to create a changeset and you could preview the output before deploying. Example Macros We’re launching a number of reference macros to help developers get started and I expect many people will publish others. These four are the winners from a little internal hackathon we had prior to releasing this feature: Here are a few ideas I thought of that might be fun for someone to implement: - Automatic R53 domain registration + AWS Certificate Manager (ACM) certificate provisioning - Automatic S3 static website or Amazon CloudFront distribution with a custom domain - Extending CloudFormation mappings to read from a DynamoDB table - Automatic IPv6 setup for Amazon Virtual Private Cloud (Amazon VPC) - Automatic webhook subscription for Slack, Twitter, Messenger integrations If you end up building something cool I’m more than happy to tweet it out! Available Now CloudFormation Macros are available today, in all AWS regions that have AWS Lambda. There is no additional CloudFormation charge for Macros meaning you are only billed normal AWS Lambda function charges. The documentation has more information that may be helpful. This is one of my favorite new features for CloudFormation and I’m excited to see some of the amazing things our customers will build with it. The real power here is that you can extend your existing infrastructure as code with code. The possibilities enabled by this new functionality are virtually unlimited.
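To make the payload and response contract described above more concrete, here is a sketch (not from the original post) of a macro handler that stamps a DeletionPolicy of Retain onto every resource in the fragment it receives. The handler relies only on the event fields documented above; everything else about it is illustrative.

import copy

def handler(event, context):
    # The fragment arrives as plain JSON even if the source template is YAML.
    fragment = copy.deepcopy(event["fragment"])

    # When used as a template-level transform, the fragment is the whole
    # template, so walk its Resources section and add a DeletionPolicy
    # wherever one is not already set.
    for resource in fragment.get("Resources", {}).values():
        resource.setdefault("DeletionPolicy", "Retain")

    return {
        "requestId": event["requestId"],  # must echo the incoming requestId
        "status": "success",              # anything else fails the changeset
        "fragment": fragment,
    }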
https://aws.amazon.com/blogs/aws/cloudformation-macros/
CC-MAIN-2022-33
en
refinedweb
Re: Working out what face is used Norman Walsh writes: > In this case, I can’t put the cursor on the text. I’ve made a quick > skim of org-sticky-header.el and don’t see any obvious use of a > specific face. So I’m a bit stumped. FWIW, I made a rainbow out of the background colors and worked out what face it was, but I am still curious if there was a more direct approach. Be seeing you, norm -- Norman Walsh | Four things come not back: the spoken word, the sped arrow, time past, the | neglected opportunity.--Omar Ibnal-Halif signature.asc Description: PGP signature Working out what face is used Hello, This is a tangentially Org-related question. If I enable org-sticky-headers, it works just fine. I get, for example, at the top of the buffer: ** Top level heading / Second level heading Which is correct. But “ / ” and the rest of the line after “Second level heading” are in some unknown face. It has an odd gray background and appears to be underlined in white. For ordinary text in a buffer, I can put my cursor on the text and use “C-u x =” to see what face is being used. In this case, I can’t put the cursor on the text. I’ve made a quick skim of org-sticky-header.el and don’t see any obvious use of a specific face. So I’m a bit stumped. Be seeing you, norm -- Norman Walsh | The future belongs to those who believe in the beauty of their dreams.--Eleanor | Roosevelt signature.asc Description: PGP signature to be ignored on SRC. > > Not for me. I have lstlisting for both cases. The trick turned out to be (setq org-latex-listings t) I’ve written up a few more remarks here: Be seeing you, norm -- Norman Walsh | People tend to feel strongest about those things they understand the least. signature.asc Description: PGP signature Re: Inline references in noweb? John Kitchin writes: > In order to engage the flux capacitor, you must set the > chronometer dial to src_emacs-lisp[:noweb yes :exports > results]{"<>"} > {{{results(=Jan 5\, 2020=)}}}. Ah, thank you. That was one level of indirection further than I got! Be seeing you, norm -- Norman Walsh | 'I have done that,' says my memory. 'I cannot have done that'—says my pride, | and remains adamant. At last—memory | yields.--Nietzsche signature.asc Description: PGP signature Confused about src vs example and LaTeX export Hello again, I’m confused about the distinction between BEGIN_EXAMPLE and BEGIN_SRC and LaTeX export. I tend to use BEGIN_SRC more often because I’m taking advantage of Babel output. Consider this file: #+TITLE: Test Org File This is an example. * Simple environments #+BEGIN_EXAMPLE emacs-lisp (+ 3 4) #+END_EXAMPLE #+BEGIN_SRC emacs-lisp (+ 3 4) #+END_SRC * LaTeX attributes #+ATTR_LATEX: :environment lstlisting #+BEGIN_EXAMPLE emacs-lisp (+ 3 4) #+END_EXAMPLE #+ATTR_LATEX: :environment lstlisting #+BEGIN_SRC emacs-lisp (+ 3 4) #+END_SRC In the simple case, both EXAMPLE and SRC export as {verbatim} environments; that’s fine as a default. In the second case, the ATTR_LATEX request for the {lstlisting} environment works on EXAMPLE but appears to be ignored on SRC. To confound me further, setting org-latex-custom-lang-environments doesn’t seem to have any effect in either case. (setq org-latex-custom-lang-environments '((emacs-lisp "lstlisting"))) I’m not sure what I’m overlooking. 
Be seeing you, norm -- Norman Walsh | There has never been a perfect government, because men have passions; | and if they did not have passions, | there would be no need for | government.--Voltaire signature.asc Description: PGP signature Inline references in noweb? Hello, I’ve read through most of the docs that turn up with the obvious web searches and I haven’t been able to figure this out. However, it seems like it should be possible… I’m quite pleased with how babel and noweb can be used to write “literate programs” for configuration files and scripts. Cross references between code blocks with <> work just fine. What I’d like, but can’t work out, is how to refer to, for example, a configuration value inline. Something like this: In order to engage the flux capacitor, you must set the chronometer dial to ~<>~. #+begin_src conf :noweb yes date: <> #+end_src I don’t care how or where “required_date” is defined, I just want the ability to refer to it in the documentation and in code blocks. Have I overlooked something obvious? Be seeing you, norm -- Norman Walsh | The stone fell on the pitcher? Woe to the pitcher. The pitcher fell on the | stone? Woe to the pitcher.--Rabbinic | Saying signature.asc Description: PGP signature Wholesale changes to LaTeX headers Hi, I want to make wholesale changes to the LaTeX preamble exported from Org mode. I want to put \RequirePackage and \PassOptionsToPackage calls before the \documentclass, I want to write a specific set of macros after the \documentclass, I want to craft a couple of \renewcommands, etc. Where should I begin? Be seeing you, norm -- Norman Walsh | The stone fell on the pitcher? Woe to the pitcher. The pitcher fell on the | stone? Woe to the pitcher.--Rabbinic | Saying signature.asc Description: PGP signature Re: “Literate” python? Diego Zamboni writes: > Hi Norm, > > As George said, the trick in this case is to use the =:noweb= and > =:noweb-ref= headers. The change is minimal from the script you > sent: Thanks. With your help (and Barry’s and George’s), I got over the initial hurdles. I wrote about it here: > If I may do a bit of self-promotion, feel free to check out my > "Literate Config" booklet, which I published just a few days ago > (available for free) and which contains some more tips for doing > literate programming: Purchased. Authors gotta get paid. :-) Be seeing you, norm -- Norman Walsh | The First Amendment is often inconvenient. But that is besides the | point. Inconvenience does not absolve | the government of its obligation to | tolerate speech.--Justice Anthony | Kennedy, in 91-155 signature.asc Description: PGP signature Re: “Literate” python? George Mauer writes: > I've used noweb references to actually assemble what will be tangled > all at once. See how I did it here. Thanks. > Also I'm pretty sure there's no :weave header arg...at least I > haven't seen it used and can't find it documented anywhere. No, that was just me guessing. Thanks for the pointers. Be seeing you, norm -- Norman Walsh | A child becomes an adult when he realizes he has a right not only to be | right but also to be wrong.--Thomas | Szasz signature.asc Description: PGP signature Re: “Literate” python? "Berry, Charles" writes: > A couple of things might help. > > First, use the :noweb-ref argument to mark each of the code blocks > you wish to tangle. […] > The remaining problem (as you will see) is the indentation. Fix this > by adding the `-i' flag to the penultimate code block, viz. 
[…] > See 12.6 Literal Examples and 15.10 Noweb Reference Syntax in the manual. Thank you. I had failed to locate the relevant manual sections. Those changes did appear to resolve the issues and I’ll study the docs with a little more care! Be seeing you, norm -- Norman Walsh | To enjoy yourself and make others enjoy themselves, without harming yourself or | any other; that, to my mind, is the | whole of ethics.--Chamfort signature.asc Description: PGP signature “Literate” python? Hi, I’ve seen a couple of pointers recently to using Org mode and tangle to write more literate Emacs configurations. I use Org+babel all the time to write “interactive” documents, so I thought I’d try out tangle from Org. I didn’t want to start with something as comlicated as my Emacs config :-) so I figured I’d kick the tires with a small python program. That did not end well. Consider: #+TITLE: Python literate programming #+OPTIONS: html-postamble:nil It starts off as a completely standard Python3 program. ---%<-- #+BEGIN_SRC python :tangle yes :weave no #!/usr/bin/env python3 #+END_SRC It defines ~a~. #+BEGIN_SRC python :tangle yes def a(): print("a") #+END_SRC And ~b~. #+BEGIN_SRC python :tangle yes def b(): print("b") #+END_SRC Now ~c~ is a little more complicated: #+BEGIN_SRC python :tangle yes def c(): print("c") #+END_SRC Not only does ~c~ print “c”, it calls ~a()~ and ~b()~. #+BEGIN_SRC python :tangle yes b() a() #+END_SRC Finally, make it importable. Not that you’d want to. #+BEGIN_SRC python :tangle yes if __name__ == "__main__": main() #+END_SRC --->%-- That’s the script. It weaves into HTML more-or-less ok (there’s a weird black box at the front of indented lines, but I can come back to that later). It’s a complete mess when tangled. The extra blank lines between functions (to make pylint happy with some PEP guideline) have disappeared. I guess I could live with that, but the complete failure to preserve indention in the penultimate code block is a show stopper: #!/usr/bin/env python3 def a(): print("a") def b(): print("b") def c(): print("c") b() a() if __name__ == "__main__": main() (Also, why is there an extra blank line before the incorrectly indented block?) Is this user error on my part somehow? I suppose I could write my own version of tangle, though I’m not clear if the whitespace is lost in the tangle function or in the Org mode data model. Thoughts? Be seeing you, norm -- Norman Walsh | We discover in ourselves what others hide from us, and we recognize in | others what we hide from | ourselves.--Vauvenargues signature.asc Description: PGP signature [O] Org-contacts and extra spaces Hi, If I complete an email address with org-contacts, I get extra spaces after the email address: To: Jane Doe _ _ _ _[cursor] Is this just me? Before I go digging for a solution, has anyone else encountered and fixed this? Be seeing you, norm -- Norman Walsh | The Future is something which everyone reaches at the rate of sixty minutes an | hour, whatever he does, whoever he | is.--C. S. Lewis signature.asc Description: PGP signature Re: [O] Extending ob-plantuml keywords > On a casual inspection, it appears that ob-plantuml hardcodes the > plantuml keywords @startuml/@enduml around the source block. On closer inspection, I see that Org Mode 9.2.6 supports alternate @start/@end pairs in plantuml. Rather than using a keyword on #+BEGIN_SRC, it simply checks for @start… at the beginning of the body. That’s good enough, I think. 
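(For reference, a minimal sketch of the :noweb-ref arrangement suggested in the replies earlier in this thread; the file below is hypothetical and not taken from any of the messages. The -i switch mentioned above preserves indentation when a reference is expanded inside an indented block.)

#+BEGIN_SRC python :noweb-ref defs
def a():
    print("a")
#+END_SRC

#+BEGIN_SRC python :noweb-ref defs
def b():
    print("b")
#+END_SRC

#+BEGIN_SRC python :tangle yes :noweb yes
#!/usr/bin/env python3

<<defs>>

if __name__ == "__main__":
    a()
    b()
#+END_SRC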
Be seeing you, norm -- Norman Walsh | Four things come not back: the spoken word, the sped arrow, time past, the | neglected opportunity.--Omar Ibnal-Halif signature.asc Description: PGP signature [O] Extending ob-plantuml keywords Hi, On a casual inspection, it appears that ob-plantuml hardcodes the plantuml keywords @startuml/@enduml around the source block. There are at least a couple of other keywords that can be used (@startsalt and @startmindmap). I’m tempted to hack at obj-plantuml a bit to support keyword selection somehow. Before I do that, has anyone else already done it? Be seeing you, norm -- Norman Walsh | There has never been a perfect government, because men have passions; | and if they did not have passions, | there would be no need for | government.--Voltaire signature.asc Description: PGP signature Re: [O] Proposal for new document-level syntax Gustav Wikström writes: > I propose a "document" element in org-element, a property-drawer on > document-level, a setting-drawer on document-level and > property-keywords (slightly different than what already exist). And > would like your comments regarding that! When I started searching for ways to put arbitrary metadata on org documents, I was very surprised to find that there didn’t appear to be support for a top-level properties drawer. This seems like a good idea to me. Be seeing you, norm -- Norman Walsh | If you are losing your leisure, look out! You may be losing your | soul.--Logan Pearsall Smith signature.asc Description: PGP signature Re: [O] Porting Apple Calendars to org-mode Mohamed Wael Khobalatte writes: > Hi guys, I posted a question to the Emacs StackExchange > (), > but I believe it's better asked here. Does anyone know how I can get > my Apple calendar to show up in org-mode as readonly (preferably)? Coincidentally, I just went through this exercise myself. I fixed a couple of issues in icalevents[1] and then worked out how to get the calendar details from database that the Mac actually uses to track the calendars. I wrote icaldiary[2] to extract events from my calendar(s) and create an emacs “diary” file which Org reads quite happily. > I had no luck with org-mac-iCal. Only one calendar (the Birthdays > calendar) shows up, eventhough all other calendars are *checked*, as > the documentation from that package requires. It is also rather old, > so not sure if it works anymore? Yeah, I found the same problem. I think that Apple stopped storing the active calendar data in the plist files that org-mac-iCal checks. They moved it into the cache which they store with sqlite3. Hoping this is helpful… Be seeing you, norm [1] [2] -- Norman Walsh | Throughout history the world has been laid waste to ensure the triumph of | conceptions that are now as dead as the | men that died for them.--Henry De | Montherlant signature.asc Description: PGP signature [O] Supporting Exchange calendars in org-mac-iCal.el Hi folks, I don’t know if anyone else is using org-mac-iCal[*], but I find it useful. (I find it convenient to have calendar appointments in my agenda view.) In the course of setting it up, I discovered that it didn’t support Exchange calendars. And it had a regexp that didn’t match versions of MacOS post 10.8. I’ve fixed both of those issues, at least to the extent that they seem to work for me. I sent email to the named author, Christopher Suckling, but didn’t get a reply. (Hello Christopher, if you’re out there!) If anyone is interested, I can see about finding a place to put my patch. 
Be seeing you, norm [*] -- Norman Walsh | I often marvel that while each man loves himself more than anyone else, he | sets less value on his own estimate | than on the opinions of others.--Marcus | Aurelius signature.asc Description: PGP signature [O] Error in org-element-parse-buffer? Hi, I’ve just noticed that org-element-parse-buffer loses whitespace sometimes. Consider this org-mode file: This is a *bold* word. What org-element-parse-buffer returns is: (org-data nil (section (:begin 1 :end 24 :contents-begin 1 :contents-end 24 :post-blank 0 :post-affiliated 1 :parent #0) (paragraph (:begin 1 :end 24 :contents-begin 1 :contents-end 24 :post-blank 0 :post-affiliated 1 :parent #1) #("This is a " 0 10 (:parent #2)) (bold (:begin 11 :end 18 :contents-begin 12 :contents-end 16 :post-blank 1 :parent #2) #("bold" 0 4 (:parent #3))) #("word." 0 6 (:parent #2) Note that the whitespace between “bold” and “word.” has been lost. Well, possibly it’s recoverable from the various begin/end positions, I can’t quite decide although (1) it doesn’t seem obviously to be the case and (2) it’d be *way* more convenient if the space was in the data structure as text. I guess it’d have to be either a single space between them, or at the beginning of “ word.”. Or do I misunderstand something about the data structure? Be seeing you, norm -- Norman Walsh | The cure for boredom is curiousity. There is no cure for curiosity.--Ellen | Parr signature.asc Description: PGP signature [O] org-to-xml, v0.0.1 Hello, Years ago, literally, I asked about converting an org-mode file to XML. I finally got around to writing it. Probably very badly. Comments, etc. most welcome. Be seeing you, norm -- Norman Walsh | It is well to remember that the entire universe, with one trifling exception, | is composed of others.--John Andrew | Holmes signature.asc Description: PGP signature Re: [O] Making DocBook xml books from org mode? Joost Kremers <joostkrem...@fastmail.fm> writes: > There's also a `docbook' writer, > which has been part of Pandoc much longer and which outputs to (I > assume) DocBook v4. So I suspect you either need to upgrade your Pandoc > or make sure ox-pandoc sets the output format to `docbook'. Depending on what your down-stream processing is like, you could just go with V4 DocBook from org and then convert that to V5 with the upgrade stylesheet. Be seeing you, norm -- Norman Walsh <n...@nwalsh.com> | There is no monument dedicated to the memory of a committee.--Lester J. | Pourciau signature.asc Description: PGP signature [O] XML dump of org file? Hi, <n...@nwalsh.com> | When we are tired, we are attacked by ideas we conquered long ago.--Nietzsche signature.asc Description: PGP signature [O] Editing/evaluation of code blocks Hello, I’ve started using Org Mode more frequently and I thought it would be fun to extend org-babel support to evaluate XQuery, JavaScript, and SPARQL code blocks by sending them off to MarkLogic server. I was right, it was fun :-) If I type C-c C-c in this block: #+begin_src marklogic :var startDate="2017-04-19T12:34:57" xquery version "1.0-ml"; declare default function namespace ";; declare option xdmp:mapping "false"; declare variable $startDate external; let $date := $startDate cast as xs:dateTime let $diff := current-dateTime() - $date return current-dateTime() - $date #+end_src I get back a result! #+RESULTS: : -P348DT10H59M31.387138S Win! But C-c ' fails because “marklogic-mode” isn’t the mode for editing XQuery. There’s an xquery-mode for that. 
I found org-src-lang-modes which allows me to make ‘marklogic’ use xquery-mode, except that that’s wrong when I’m editing JavaScript or SPARQL. :-( Is there some way, in the source block itself, to specify *independently* the mode that should be used to edit and the language package that should be used for evaluation? Guessing not, I considered refactoring the code to support ‘xquery’, ‘javascript’, and ‘sparql’ languages. That would be fine, but I presume there are (or will eventually be) other backends for evaluating these languages. Maybe the answer then is simply not to load two different ones at the same time, but that doesn’t seem very satisfying. Am I overlooking something? Suggestions most welcome. Be seeing you, norm -- Norman Walsh <n...@nwalsh.com> | We are what we repeatedly do. Excellence, then, is not an act, but a | habit.--Aristotle signature.asc Description: PGP signature [O] org-contacts, multi-line properties, postal addresses Hello world, I'm just taking another look at org-contacts. I wonder what the best practice is for dealing with multi-line properties like postal addresses. I can just make them part of the entry, of course, not in a property, but that seems oddly different from the other properites. Have I overlooked something obvious? Be seeing you, norm -- Norman Walsh n...@nwalsh.com | A man may fulfill the object of his existence by asking a question he | cannot answer, and attempting a task he | cannot achieve.--Oliver Wendell Holmes signature.asc Description: PGP signature
https://www.mail-archive.com/search?l=emacs-orgmode%40gnu.org&q=from:%22Norman+Walsh%22&o=newest&f=1
CC-MAIN-2022-33
en
refinedweb
12 minute read Notice a tyop typo? Please submit an issue or open a PR. A common vulnerability that we are going to discuss is a buffer overflow. A buffer overflow occurs when the amount of memory allocated for a piece of expected data is insufficient (too small) to hold the actual received data. As a result, the received data "runs over" into adjacent memory, often corrupting the values present there. Specifically, stack buffer overflows are buffer overflows that exploit data in the call stack. During program execution, a stack data structure, known as the call stack, is maintained. The call stack is made up of stack frames. When a function is called, a stack frame is pushed onto the stack. When the function returns, the stack frame is popped off of the stack. The stack frame contains the allocation of memory for the local variables defined by the function and the parameters passed into the function. A function call involves a transfer of control from the calling function to the called function. Once the called function has completed its work, it needs to pass control back to the calling function. It does this by holding a reference to the return address, also present in the stack frame. Stack buffer overflows can be exploited through normal system entry points that are called legitimately by non-malicious users of the system. By passing in carefully crafted data, however, an attacker can trigger a stack buffer overflow, and potentially gain control over the system's execution. The following program - which roughly resembles a standard password checking program - is vulnerable. #include <stdio.h> #include <strings.h> int main(int argc, char *argv[]) { int allow_login = 0; char pwdstr[12]; char targetpwd[12] = "MyPwd123"; gets(pwdstr); if(strncmp(pwdstr, targtpwd, 12) == 0) allow_login = 1; if(allow_login == 0) printf("Login request rejected"); else printf("Login request allowed"); } We have allocated space for int named allow_login that is initially set to 0. In addition, we have allocated space for a user-submitted password ( pwdstr) and a target password ( targetpwd). We then ask the user for their password ( gets). Their response gets read into pwdstr and if pwdstr matches targetpwd (via strncmp), we set allow_login to 1. Finally, if allow_login is 0, we print "Login request rejected". Otherwise, we print "Login request allowed".. There are two things you can do with a stack: push and pop. The stack grows when something is pushed onto it, and shrinks when something is popped off of it. The current "top" of the stack is maintained by a stack pointer, which points to different memory locations as the stack grows and shrinks. We can assume that the stack grows from high (numerically larger) addresses to low (numerically smaller) addresses. This means that the stack pointer points to the highest memory address at the beginning of program execution, and decreases as frames are pushed onto the stack.. If the attacker guesses the correct password and types that as input to the program, login will be allowed. If the attacker guesses the wrong password - which fits into the allocated buffer - there will be no overflow and login will be rejected. These are the two basic outcomes for a naive attack: either the attacker guesses correctly and access is granted or the attacker guesses incorrectly and access is denied. 
In order to understand how an attacker can use buffer overflow to gain control of this program, we first need to look at how the data associated with this program is laid out on the stack. We know that the stack grows from higher memory addresses to lower memory address. When we make the function call to main, we push the arguments argc (4 bytes) and argv (4 bytes) onto the stack. Assuming the top of the stack is located at memory address addr, the stack pointer points to addr - 8 after pushing these argument onto the stack. Next, we have to push the return address (4 bytes) onto the stack. Every time we make a function call, we have to push the return address onto the stack so the program knows where to continue execution within the calling function once the called function completes. After pushing the return address, the stack pointer points to addr - 12. Finally, we allocate space for allowLogin (4 bytes), pwdstr (12 bytes) and targetpwd (12 bytes). If pwdstr is within 12 bytes, it will occupy only the memory allocated to it. If pwdstr is longer than 12 bytes, it will exhaust the 12 bytes allocated to it, and will overflow into the space allocated for allowLogin. The reason pwdstr overflows into allowLogin and not targetPwd is because occupation of memory occurs sequentially, from lower memory addresses to higher memory address. Note: this is the opposite of the direction in which the stack grows. If the supplied value for pwdstr is greater than 16 bytes, pwdstr will also overwrite the return address. As an attacker, we want to direct program control to some location where the attacker can craft some code. If the attacker writes more than 16 bytes to pwdstr, the buffer allocated to pwdstr will overflow and will overwrite the return address. If we know the address of the code that we want to execute, we can craft our input carefully, such that the existing return address gets overwritten with the address we want. If we do this, what will happen? Remember, the point of the return address is to give the function a location to transfer control to when it is done executing. If we overwrite that address, the function will "return" to the address we supply and begin executing instructions from that address.. The code that the attacker typically wants to craft is code that is going to launch a command shell. This type of code is called shellcode. The execution of the shellcode creates a shell which allows the attacker to execute arbitrary commands. You can write the shellcode in C, like this: int main (int argc, char *argv[]) { char *sh; char *args[2]; sh = "/bin/sh"; args[0] = sh; args[1] = NULL; execve(sh, args, NULL); } The "magic" here is execve, which replaces the currently running program with the invoked program - in this case, the shell at /bin/sh. While the code can be written in C, it must be supplied to the vulnerable program as compiled machine code, because it is going to be stored in memory as actual machine instructions that will be executed once control is transferred. The vulnerable program is running with some set of privileges before transfer is controlled to the shellcode. When control is transferred, what privileges will be used? The shellcode will have the same privileges as the host program. This can be a set of privileges associated with a certain user and/or group. Alternatively, if the host program is a system service, the shellcode may end up with root privileges, essentially being handed the "keys to the the kingdom". 
This is the best case scenario for the attacker, and the worst case scenario for the host. So far we have talked about stack buffer overflows. There are other variations of buffer overflows. The first variation is called return-to-libc. When we talked about shellcode, the goal was to overflow the return address to point to the location of our shellcode, but we don't need to return to code that we have explicitly written. In return-to-libc, the return address will be modified to point to a standard library function. Of course, this assumes that you will be able to figure out the address of the library function. If you return to the right kind of library function and you are able to set up the arguments for it on the stack, then you can execute any library function any parameters. For example, if you point to the address of the system library function, and pass something like /bin/sh, you should be able to open a command shell. The main idea with return-to-libc is that we have driven our exploit through instructions already present on the system, as opposed to supplying our own. An overflow doesn't have to occur to memory associated with the stack. A heap overflow describes buffer overflows that occur in the heap. One crucial difference between the heap and the stack is that the heap does not have a return address, so the traditional stack overflow / return-to-libc mechanism won't work. What we have in the heap are function pointers, which can be overwritten to point to functions that we want to execute. Heap overflows require more sophistication and more work than stack overflows. So far, when we have talked about buffer overflow, we have talked about writing data; specially, inputing data into some part of memory and overflowing the memory that was allocated to us. Overflows don't just have to be associated with writing data. For example, if a variable has 12 bytes, but we ask to read 100 bytes, the read will continue past the original 12 bytes and return data in subsequent memory locations. The OpenSSL Heartbleed vulernability did just this. It read past an assumed boundary (due to insufficient bounds checking) and was exploited to steal some important information - like encryption keys - that resided in adjacent memory. Naturally, we shouldn't write code with buffer overflow vulnerabilities, but if such code is out there deployed on systems, we need to find ways to defend against attacks that exploit these vulnerabilities. For instance, choice of programming language is crucial. There are languages where buffer overflows are not possible. These languages: Languages that have these features are referred to as "safe" languages and include languages like Java and C++. If we choose a "safe" language, buffer overflows become impossible due to the checks the language performs at runtime. For example, instead of having to perform bounds checking explicitly, programmers can rest assured knowing that the language runtime will perform the check for them. So, why don't we use these languages for everything? One drawback for these languages is performance degradation. The extra runtime checks slow down the execution of your program. When using "unsafe" languages, the programmer takes on the responsibility of preventing potential buffer overflow scenarios. One way to do that is by checking all input to ensure that it conforms to expectations. Assume that all input is evil. Another strategy to reduce the possibility of exploitation is to use safer functions that perform bounds checking for you. 
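For example, the gets call in the password program at the top of these notes could be replaced with fgets, which takes the size of the destination buffer and will not write past it. A minimal sketch (the surrounding program is otherwise unchanged; the helper name is ours):

#include <stdio.h>
#include <string.h>

int read_password(char *pwdstr, int size) {
    /* fgets never writes more than size-1 characters plus a terminating NUL,
       so the 12-byte pwdstr buffer cannot be overflowed. */
    if (fgets(pwdstr, size, stdin) == NULL)
        return -1;

    /* strip the trailing newline that fgets keeps, if there was room for it */
    pwdstr[strcspn(pwdstr, "\n")] = '\0';
    return 0;
}

/* called from main as: read_password(pwdstr, sizeof(pwdstr)); */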
One such list of safe replacements for common library functions in C can be found here. A third strategy is to use automated tools that analyze a program and flag any code that looks vulnerable. These tools look for code patterns or unsafe functions and warn you which code fragments may be vulnerable for exploitation. One issue with automated analysis tools is that they may have many false positives (flagging something that is not an issue), and may even have false negatives (not flagging something that is an issue). No tool should replace thoughtful programming. There is no excuse for writing code that is insecure! A number of source code analysis tools are available. These tools analyze the source code of your application, and can flag potentially unsafe constructs and/or function usage. Companies will often incorporate the use of these tools into their software development lifecycle to ensure that all code headed for production is audited before being released. If you are attempting to analyze code that you didn't write, you may not have the source code available, at which point source code analysis tools obviously won't be helpful. One of the tricks that hackers use is to override the return address on the stack to point to some other code they want to execute. During the execution of a function, however, there is no reason for the return address to be modified; that is, there is no reason a function should change where it returns during the middle of its execution. As a result, if we can detect that the return address has been modified, we can show that a buffer overflow is being exploited and handle execution appropriately, likely with process termination. How can we detect if the return address has been modified? We can use a stack canary, or a value that we write to an address just before the return address in a stack frame. If an overflow is exploited to overwrite the return address, the canary value will be overwritten with it. All the runtime has to do, then, is to check if the canary value has changed when a function completes execution. If so, it can be sure that there is a problem. What is nice about this approach is that the programmer doesn't have to do anything: the compiler inserts these checks. Of course, this means that the code may have to be recompiled with a compiler that has these features, a step which may come with its own issues. There are also OS-/hardware-based solutions which can help to thwart the exploitation of buffer overflow vulnerabilities. The first technique that many operating systems use is address space layout randomization (ASLR). Remember that one key job of the attacker is to be able to understand/approximate how memory is laid out within the stack or, in the case of return-to-libc, within a process's address space. ASLR randomizes how memory is laid out within a process to make it very hard for an attacker to predict, even roughly, where certain key data structures and/or libraries reside. Many modern operating systems provide ASLR support. In the classic stack buffer overflow attack, the attacker writes shellcode to the stack and then overwrites the return address to point to that shellcode, which is then executed. There is no legitimate reason for programs to execute instructions that are stored on the stack. One way to block executing shellcode off the stack is to make the stack non-executable. Many modern operating systems implement such executable-space protection.. OMSCS Notes is made with in NYC by Matt Schlenker.
https://www.omscs.io/information-security/software-security/
CC-MAIN-2022-33
en
refinedweb
table of contents NAME¶ libblkid - block device identification library SYNOPSIS¶ #include <blkid.h> cc file.c -lblkid DESCRIPTION¶ the extraction of determine. CONFIGURATION FILE¶ The standard location of the /etc/blkid.conf config file can be overridden by the environment variable BLKID_CONF. For more details about the config file see blkid(8) man page. AUTHORS¶ libblkid was written by Andreas Dilger for the ext2 filesystem utilities, with input from Ted Ts’o. The library was subsequently heavily modified by Ted Ts’o. The low-level probing code was rewritten by Karel Zak. COPYING¶ libblkid is available under the terms of the GNU Library General Public License (LGPL), version 2 (or at your discretion any later version). SEE ALSO¶ REPORTING BUGS¶ For bug reports, use the issue tracker at <>. AVAILABILITY¶ The libblkid library is part of the util-linux package since version 2.15. It can be downloaded from Linux Kernel Archive <>.
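A minimal example of the high-level cache API (a sketch, compiled as shown in the SYNOPSIS with cc file.c -lblkid; the device path is supplied on the command line):

#include <stdio.h>
#include <stdlib.h>
#include <blkid.h>

int main(int argc, char *argv[])
{
    blkid_cache cache;
    char *type;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <device>\n", argv[0]);
        return 1;
    }
    if (blkid_get_cache(&cache, NULL) != 0) {
        fprintf(stderr, "cannot get blkid cache\n");
        return 1;
    }

    /* Look up the filesystem type of the named device, e.g. /dev/sda1. */
    type = blkid_get_tag_value(cache, "TYPE", argv[1]);
    if (type != NULL) {
        printf("%s: TYPE=%s\n", argv[1], type);
        free(type);   /* the returned string is allocated by the library */
    } else {
        printf("%s: no TYPE tag found\n", argv[1]);
    }

    blkid_put_cache(cache);
    return 0;
}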
https://manpages.debian.org/testing/libblkid-dev/libblkid.3.en.html
CC-MAIN-2022-33
en
refinedweb
Music¶ This is the music module. You can use it to play simple tunes, provided that you connect a speaker to your board. By default the music module expects the speaker to be connected via pin 0: This arrangement can be overridden (as discussed below). To access this module you need to: import music We assume you have done this for the examples below. Musical Notation¶ An individual note is specified thus: NOTE[octave][:duration] For example, A1:4 refers to the note “A” in octave 1 that lasts for four ticks (a tick is an arbitrary length of time defined states: ['c1:4', 'e:2', 'g', 'c2:4'] The opening of Beethoven’s 5th Symphony would be encoded thus: ['r4:2', 'g', 'g', 'g', 'eb:8', 'r:2', 'f', 'f', 'f', 'd:8'] The definition and scope of an octave conforms to the table listed on this page about scientific pitch notation. For example, middle “C” is 'c4' and concert “A” (440) is 'a4'. Octaves start on the note “C”. Functions¶ music. set_tempo(ticks=4, bpm=120)¶: music.set_tempo()- reset the tempo to default of ticks = 4, bpm = 120 music.set_tempo(ticks=8)- change the “definition” of a beat music.set_tempo(bpm=180)- just change the tempo To work out the length of a tick in milliseconds is very simple arithmetic: 60000/bpm/ticks_per_beat. For the default values that’s 60000/120/4 = 125 millisecondsor 1 beat = 500 milliseconds. music. play(music, pin=microbit.pin0, wait=True, loop=False)¶ Plays musiccontaining the musical DSL defined above. If musicis a string it is expected to be a single note such as, 'c1:4'. If musicis specified as a list of notes (as defined in the section on the musical DSL, above) then they are played one after the other to perform a melody. In both cases, the durationand octavevalues are reset to their defaults before the music (whatever it may be) is played. An optional argument to specify the output pin can be used to override the default of microbit.pin0. If waitis set to True, this function is blocking. If loopis set to True, the tune repeats until stopis called (see below) or the blocking call is interrupted. music. pitch(frequency, len=-1, pin=microbit.pin0, wait=True)¶ Plays a pitch at the integer frequency given for the specified number of milliseconds. For example, if the frequency is set to 440 and the length to 1000 then we hear a standard concert A for one second. If waitis set to True, this function is blocking. If lenis negative the pitch is played continuously until either the blocking call is interrupted or, in the case of a background call, a new frequency is set or stopis called (see below). music. reset()¶ Resets the state of the following attributes in the following way: ticks = 4 bpm = 120 duration = 4 octave = 4 Built in Melodies¶ For the purposes of education and entertainment, the module contains several example tunes that are expressed as Python lists. They can be used like this: >>> import music >>> music.play(music.NYAN) All the tunes are either out of copyright, composed by Nicholas H.Tollervey and released to the public domain or have an unknown composer and are covered by a fair (educational) use provision. They are: -. Example¶ """ music.py ~~~~~~~~ Plays a simple tune using the Micropython music module. This example requires a speaker/buzzer/headphones connected to P0 and GND. """ from microbit import * import music # play Prelude in C. 
notes = [ 'c4:1', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5', 'c4', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5', 'c4', 'd', 'g', 'd5', 'f5', 'g4', 'd5', 'f5', 'c4', 'd', 'g', 'd5', 'f5', 'g4', 'd5', 'f5', 'b3', 'd4', 'g', 'd5', 'f5', 'g4', 'd5', 'f5', 'b3', 'd4', 'g', 'd5', 'f5', 'g4', 'd5', 'f5', 'c4', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5', 'c4', 'e', 'g', 'c5', 'e5', 'g4', 'c5', 'e5', 'c4', 'e', 'a', 'e5', 'a5', 'a4', 'e5', 'a5', 'c4', 'e', 'a', 'e5', 'a5', 'a4', 'e5', 'a5', 'c4', 'd', 'f#', 'a', 'd5', 'f#4', 'a', 'd5', 'c4', 'd', 'f#', 'a', 'd5', 'f#4', 'a', 'd5', 'b3', 'd4', 'g', 'd5', 'g5', 'g4', 'd5', 'g5', 'b3', 'd4', 'g', 'd5', 'g5', 'g4', 'd5', 'g5', 'b3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'b3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'b3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'b3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'a3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'a3', 'c4', 'e', 'g', 'c5', 'e4', 'g', 'c5', 'd3', 'a', 'd4', 'f#', 'c5', 'd4', 'f#', 'c5', 'd3', 'a', 'd4', 'f#', 'c5', 'd4', 'f#', 'c5', 'g3', 'b', 'd4', 'g', 'b', 'd', 'g', 'b', 'g3', 'b3', 'd4', 'g', 'b', 'd', 'g', 'b' ] music.play(notes)
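The pitch function described above can be used for simple effects as well as notes. A small sketch, again assuming a speaker connected to pin 0, that sweeps upwards through frequencies and finishes on concert "A":

import music

# Rising sweep: each step plays for 50 milliseconds (blocking by default).
for freq in range(440, 880, 20):
    music.pitch(freq, 50)

# Finish on concert "A" (440 Hz) for one second.
music.pitch(440, 1000)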
https://microbit-micropython-hu.readthedocs.io/hu/latest/music.html
CC-MAIN-2019-30
en
refinedweb
Swift OpenGL Loader

This is a simple OpenGL loader library. The repo currently contains code to load Core profiles from 3.3 to 4.6. The included profiles do not load any extensions.

This repository has been adapted from code originally written by David Turnbull. His code can be found here:

This fork allows generating code for multiple profiles at once and removes the implicit library loading and function binding step of the original code. By requiring an explicit call to loadGL, client code can verify whether the required OpenGL profile was loaded correctly (a fallback sketch is shown at the end of this README).

Using a loader

To load an OpenGL version 3.3 profile:

import GLLoader33

guard GLLoader33.loadGL(getGLProc) else {
    print("Could not load GL 3.3 profile")
    exit(1)
}

The type signature of getGLProc is:

public typealias GLFuncType = @convention(c) () -> Void
public typealias GetGLFunc = @convention(c) (_ : UnsafePointer<Int8>) -> GLFuncType?

All bindings default to stub implementations and only point at the real OpenGL functions after loadGL() has returned successfully.

Generating a new loader

If you don't want functions for a Core 3.3 profile, or want to add extensions, "Tools/main.swift" is what you want. In the repo root:

swift run glgen Tools/gl.xml loader4.6 --profile GL_VERSION_4_6

This will generate a new set of GL functions and a loader func in the "loader4.6" directory.

As renderers will likely depend on specific profile/extension combinations, it is recommended to fork this repo and add additional products to Package.swift, or generate loaders for your desired profile/extension combinations and add them to your own repository. The pre-generated code in this repo is handy if you only need a vanilla Core profile.

Shortcomings

The one major issue with the current loader logic is that the load will "fail" if any of the extensions cannot be loaded. Whilst it is reasonable to fail if any part of the Core profile cannot be loaded, a mechanism for communicating which extensions were successfully loaded would likely be more helpful for clients. As it is not yet a requirement for my engine, I have not implemented this.

As always, pull requests are welcome.
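As mentioned above, the explicit loadGL call lets a renderer fall back to an older profile. A sketch of that pattern (GLLoader46 is a hypothetical product name for a generated 4.6 loader; getGLProc is whatever proc-address callback your windowing library provides):

import GLLoader33
import GLLoader46

var loadedProfile = "4.6"
if !GLLoader46.loadGL(getGLProc) {
    // Fall back to the 3.3 core profile before giving up entirely.
    guard GLLoader33.loadGL(getGLProc) else {
        fatalError("Could not load any supported OpenGL core profile")
    }
    loadedProfile = "3.3"
}
print("Loaded OpenGL core profile \(loadedProfile)")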
https://swiftpack.co/package/Satook/sw-OpenGLLoader
CC-MAIN-2019-30
en
refinedweb
Check whether product of digits at even places is divisible by sum of digits at odd place of a number Given a number N and numbers of digits in N, the task is to check whether the product of digits at even places of a number is divisible by sum of digits at odd place. If it is divisible, output “TRUE” otherwise output “FALSE”. Examples: Input: N = 2157 Output: TRUE Since, 1 * 7 = 7, which is divisible by 2+5=7 Input: N = 1234 Output: TRUE Since, 2 * 4 = 8, which is divisible by 1 + 3 = 4 Approach: - Find product of digits at even places from right to left. - Find sum of digits at odd places from right to left. - Then check the divisibility of product by taking it’s modulo with sum - If modulo gives 0, output TRUE, otherwise output FALSE Below is the implementation of the above approach: C++ Java Python 3 # Python 3 implementation of the above approach # Below function checks whether product # of digits at even places is divisible # by sum of digits at odd places def productSumDivisible(n, size): sum = 0 product = 1 while (n > 0) : # if size is even if (size % 2 == 0) : product *= n % 10 # if size is odd else : sum += n % 10 n = n // 10 size -= 1 if (product % sum == 0): return True return False # Driver code if __name__ == “__main__”: n = 1234 len = 4 if (productSumDivisible(n, len)): print(“TRUE”) else : print(“FALSE”) # This code is contributed by ChitraNayal C# PHP TRUE Recommended Posts: - Check whether product of digits at even places of a number is divisible by K - Check whether sum of digits at odd places of a number is divisible by K - Check if product of digits of a number at even and odd places is equal - Count of numbers between range having only non-zero digits whose sum of digits is N and number is divisible by M - Program to check if a number is divisible by sum of its digits - Check if N is divisible by a number which is composed of the digits from the set {A, B} - Program to check if a number is divisible by any of its digits - Given a large number, check if a subsequence of digits is divisible by 8 - Find the sum of digits of a number at even and odd places - Primality test for the sum of digits at odd places of a number - Largest number with the given set of N digits that is divisible by 2, 3 and 5 - Find N digits number which is divisible by D - Smallest number with sum of digits as N and divisible by 10^N - Number of digits in the product of two numbers - Total number of ways to place X and Y at n places such that no two X are.
https://www.geeksforgeeks.org/check-whether-product-of-digits-at-even-places-is-divisible-by-sum-of-digits-at-odd-place-of-a-number/
CC-MAIN-2019-30
en
refinedweb
Friend function in C++ Programming. Sometimes a user may need to add the sales of two or more goods or compare the marks of two or more students. In such cases, a friend function acts as a bridge between two or more objects (a sketch of the two-class case appears after the properties list below). An example of using a friend function to access the private member of an object is shown below:

#include <iostream>
#include <conio.h>
using namespace std;

class example
{
    private:
        int a;
    public:
        void getdata()
        {
            cout <<"Enter value of a:";
            cin >>a;
        }
        friend void findmax(example, example); /* Declaring friend function inside class */
};

void findmax(example e1, example e2) /* Defining friend function */
{
    if (e1.a > e2.a) /* Accessing private members */
        cout <<"Data of object e1 is greater";
    else if (e1.a < e2.a)
        cout <<"Data of object e2 is greater";
    else
        cout <<"Data of object e1 and e2 are equal";
}

int main()
{
    example e1, e2;
    cout <<"Enter data for e1"<<endl;
    e1.getdata();
    cout <<"Enter data for e2"<<endl;
    e2.getdata();
    findmax(e1, e2); /* Calling friend function */
    getch();
    return 0;
}

Outputs:

Enter data for e1
a = 7
Enter data for e2
a = 4
Data of object e1 is greater

Enter data for e1
a = 9
Enter data for e2
a = 13
Data of object e2 is greater

Enter data for e1
a = 14
Enter data for e2
a = 14
Data of object e1 and e2 are equal

Properties of friend function:
- It can't be called using an object like other member functions.
- It is called like a normal function in C or C++.
- Private members can be accessed inside a friend function using the object name and the dot (.) operator.
- It can take multiple objects as parameters as required.
- It should be declared in all the classes whose objects are sent as parameters.
- It can be declared or defined in the private, public or protected section of a class.
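A sketch of the two-class case mentioned at the start of this article, where a single friend function reads private members of two different classes (the class and member names here are illustrative, not part of the original example):

#include <iostream>
using namespace std;

class Sales2;   /* forward declaration so Sales1 can name it */

class Sales1
{
    private:
        int amount;
    public:
        Sales1(int a) : amount(a) { }
        friend int totalSales(Sales1, Sales2); /* friend of both classes */
};

class Sales2
{
    private:
        int amount;
    public:
        Sales2(int a) : amount(a) { }
        friend int totalSales(Sales1, Sales2);
};

int totalSales(Sales1 s1, Sales2 s2)
{
    return s1.amount + s2.amount; /* accesses private members of both classes */
}

int main()
{
    Sales1 shopA(250);
    Sales2 shopB(175);
    cout << "Total sales: " << totalSales(shopA, shopB);
    return 0;
}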
https://www.programtopia.net/cplusplus/docs/friend-function
CC-MAIN-2019-30
en
refinedweb
October 15, 2009 Introduction Thanks to the Task Queue API released in SDK 1.2.3, it's easier than ever to do work 'offline', separate from user serving requests. In some cases, however, setting up a handler for each distinct task you want to run can be cumbersome, as can serializing and deserializing complex arguments for the task - particularly if you have many diverse but small tasks that you want to run on the queue. Fortunately, a new library in release 1.2.5 of the SDK makes these ad-hoc tasks much easier to write and execute. This library is found in google.appengine.ext.deferred, and from here on in we'll refer to it as the 'deferred' library. The deferred library lets you bypass all the work of setting up dedicated task handlers and serializing and deserializing your parameters by exposing a simple function, deferred.defer(). To call a function later, simply pass the function and its arguments to deferred.defer, like this: from google.appengine.ext import deferred def do_something_expensive(a, b, c=None): logging.info("Doing something expensive!") # Do your work here # Somewhere else deferred.defer(do_something_expensive, "Hello, world!", 42, c=True) That's all there is to it. The deferred library will package up your function call and its arguments, and add it to the task queue. When the task gets executed, the deferred library will execute do_something_expensive("Hello, world!", 42, c=True). There are two more things you need to do to get this working - you need to add the deferred library's task and URL handlers to your app.yaml. Just add the following entry to the builtins section of your app.yaml: - deferred: on and the following entry to the handlers section of the same file: - url: /_ah/queue/deferred script: google.appengine.ext.deferred.deferred.application login: admin The capabilities of the deferred library aren't limited to simple function calls with straightforward arguments. In fact, you can use nearly any python 'callable', including functions, methods, class methods and callable objects. deferred.defer supports task arguments used in the task queue API, such as countdown, eta, and name, but in order to use them, you need to prefix them with an underscore, to prevent them conflicting with keyword arguments to your own function. For example, to run do_something_expensive in 30 seconds, with a custom queue name, you could do this: deferred.defer(do_something_expensive, "Foobie bletch", 12, _countdown=30, _queue="myqueue") Example: A datastore mapper To demonstrate how powerful the deferred library can be, we're going to use an example from the Mapper class. This class will make it easy to iterate over a large set of entities, making changes or calculating totals, but won't require an external computer to run it on. Here's our example Mapper implementation:.""" pass def get_query(self): """Returns a query over the specified kind, with any appropriate filters applied.""" q = self.KIND.query() for prop, value in self.FILTERS: q = q.filter(prop == value) q = q.order("__key__") return q def run(self, batch_size=100): """Starts the mapper running.""" self._continue(None, batch_size) def _batch_write(self): """Writes updates and deletes entities in a batch.""" if self.to_put: ndb.put_multi(self.to_put) self.to_put = [] if self.to_delete: ndb.delete_multi(self.to_delete) self.to_delete = [] def _continue(self, start_key, batch_size): q = self.get_query() # If we're resuming, pick up where we left off last time. 
        if start_key:
            key_prop = getattr(self.KIND, '_key')
            q = q.filter(key_prop > start_key)

        # Keep updating records until we run out of time.
        try:
            # Steps over the results, returning each entity and its index.
            for i, entity in enumerate(q):
                map_updates, map_deletes = self.map(entity)
                self.to_put.extend(map_updates)
                self.to_delete.extend(map_deletes)
                # Do updates and deletes in batches.
                if (i + 1) % batch_size == 0:
                    self._batch_write()
                # Record the last entity we processed.
                start_key = entity.key
            self._batch_write()
        except DeadlineExceededError:
            # Write any unfinished updates to the datastore.
            self._batch_write()
            # Queue a new task to pick up where we left off.
            deferred.defer(self._continue, start_key, batch_size)
            return
        self.finish()

The main loop is wrapped in a try/except clause, catching DeadlineExceededError. This allows us to process as many results as we can in the time we have available; when we're out of time, the runtime throws a DeadlineExceededError, which gives us enough time to queue the next task before returning. We're also no longer fetching results in batches - instead, we iterate over the query, which lets us fetch as many results as we have time to process. Updates and deletes are still batched, however, and whenever we've processed enough records, we update the datastore, and record the current entity as the point to continue from.

The other difference is that we've added a finish() method. This gets called when every matching entity has been processed, and allows us to use the Mapper class to do things like calculate totals.

Using the mapper

Here's the example mapper that adds 'bar' to every guestbook entry containing 'foo', rewritten for our new mapper class:

class GuestbookUpdater(Mapper):
    KIND = Greeting

    def map(self, entity):
        if entity.content.lower().find('foo') != -1:
            entity.content += ' Bar!'
            return ([entity], [])
        return ([], [])

mapper = GuestbookUpdater()
deferred.defer(mapper.run)

As you can see, the mapper subclass itself is unchanged; the only change is how we invoke it - by 'deferring' the mapper.run method.

The new finish method makes it easy to calculate totals, too. Suppose we have a class that records files and how many times they've been downloaded, and we want to count up the total number of files and the total number of downloads. We could implement it like this:

class File(ndb.Model):
    name = ndb.StringProperty(required=True)
    download_count = ndb.IntegerProperty(required=True, default=0)

class DailyTotal(ndb.Model):
    date = ndb.DateProperty(required=True, auto_now_add=True)
    file_count = ndb.IntegerProperty(required=True)
    download_count = ndb.IntegerProperty(required=True)

class DownloadCountMapper(Mapper):
    KIND = File

    def __init__(self):
        super(DownloadCountMapper, self).__init__()  # sets up the batching lists
        self.file_count = 0
        self.download_count = 0

    def map(self, file):
        self.file_count += 1
        self.download_count += file.download_count
        return ([], [])  # nothing needs to be written back for this entity

    def finish(self):
        total = DailyTotal(file_count=self.file_count,
                           download_count=self.download_count)
        total.put()

mapper = DownloadCountMapper()
deferred.defer(mapper.run)

In just a few lines of code, we've written a mapper that can be run from a cron job each day, and will count up the total number of files and downloads, storing them to another entity. This will still run even if we have many more File entities than we could count up in a single request, and it'll automatically retry in the event of timeouts and other transient errors. And at no point did we have to write a task queue handler, or map URLs.
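As a rough sketch of the cron wiring mentioned above (the handler class, URL, and schedule below are illustrative assumptions, not part of the original sample), the mapper could be kicked off like this:

import webapp2
from google.appengine.ext import deferred

class DailyTotalCron(webapp2.RequestHandler):
    """Hypothetical cron endpoint that queues the mapper once a day."""
    def get(self):
        mapper = DownloadCountMapper()
        # The counting itself runs on the task queue, not inside this request.
        deferred.defer(mapper.run)
        self.response.write('Mapper queued.')

app = webapp2.WSGIApplication([('/cron/daily_totals', DailyTotalCron)])

A matching cron.yaml entry would look something like:

cron:
- description: compute daily download totals
  url: /cron/daily_totals
  schedule: every 24 hours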
Our Mapper class can be put in a library and used in multiple applications, even if they use different frameworks! The Mapper framework we wrote above isn't full-featured, of course, and it could do with a few improvements to make it more robust and versatile, but it serves to demonstrate the power of the deferred API.

Deferred tips and tricks

As always, there are a few tips and tricks you should be aware of in order to make optimal use of the deferred library.

Make tasks as small as possible

Task Queue items are limited to 100 KB of associated data. This means that when the deferred library serializes the details of your call, it must amount to less than 100 KB in order to fit on the Task Queue directly. No need to panic, though: if you try to enqueue a task that is too big to fit on the queue by itself, the deferred library will automatically create a new Entity in the datastore to hold information about the task, and will delete the entity once the task has been run. This means that, in practice, your function call can be up to 1 MB once serialized. As you might expect, however, inserting and deleting datastore entities for each task adds some overhead, so, if you can, make the arguments to your call (and the object itself, if you're calling a method) as small as possible to avoid the extra overhead.

Don't pass entities to deferred.defer

If you're working with datastore entities, you might be tempted to do something like this:

def do_something(entity):
    # Do something with entity
    entity.put()

entity = MyModel.get_by_id(123)
deferred.defer(do_something, entity, _countdown=60)

This is a bad idea, because the deferred library will package up the entity itself and store it to the task queue; when the time comes to execute your task, it will deserialize the entity and run the code. When your code writes the entity back to the datastore, it'll overwrite any changes that have happened since you enqueued the task with deferred.defer! Instead of passing entities around, then, you should pass keys. For example:

def do_something_with_key(k):
    entity = k.get()
    # Do something with entity
    entity.put()

k = ndb.Key('MyModel', 123)
deferred.defer(do_something_with_key, k, _countdown=60)

Dealing with task failures

Tasks created using deferred.defer get retried in case of failure just like regular Task Queue tasks. 'Failure' in the case of a deferred task is defined as your task throwing any uncaught exception. Normally, automatic retries are what you want - for example, if the task does datastore operations, and the datastore threw a Timeout exception. If you want to deliberately cause your task to fail so it will be retried, you can simply throw an exception. Sometimes, though, you know your task will never succeed, and you want it to fail permanently. In such situations, you have two options:

- Return from the deferred function normally. The handler will think the task executed successfully, and will not try again. This is best in the case of non-critical failures - for example, when you deferred something for later, and discovered it's already been done.
- Raise a deferred.PermanentTaskFailure exception. This is a special exception, and is treated as a permanent failure. It is logged as an exception, so it will show up in the Google Cloud Platform Console as one, but causes the task to not be retried again.

(A short sketch illustrating both options appears at the end of this article.)

Handling import path manipulation

Some applications, or the frameworks they use, rely on manipulating the Python import path in order to make all the libraries they need available.
While this is a perfectly legitimate technique, the deferred library has no way of knowing what path manipulations you've engaged in, so if the task you're deferring relies on modules that aren't on the import path by default, you need to give it a helping hand. Failing to do this can result in your tasks failing to run - or worse, only failing intermittently.

Fortunately, handling this is easy. Make sure the code that changes the import path is in a module all of its own, such as 'fix_path.py'. Such a module might look like this:

import os
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), 'lib'))

Then, import the 'fix_path' module along with your usual imports, anywhere you rely on the modified path, such as the module in which you define the functions you're calling with deferred.defer.

Limitations of the deferred library

There are a few limitations on what you can pass to deferred.defer:

- All arguments must be picklable. That means you can't pass exotic things like instances of nested classes as function arguments. If you're calling an instance method, the instance must be picklable too. If you're not familiar with what 'pickling' is, don't worry - most stuff that you're likely to be passing to deferred.defer will work just fine.
- You can't pass nested functions, or methods of nested classes.
- You can't pass lambda functions (but you probably wouldn't want to anyway).
- You can't pass a static method.
- You can't pass a method defined in the request handler module.

The last point above deserves special attention: passing a method defined in the request handler module - the module specified as a request handler in app.yaml - will not work. You can call deferred.defer from the request handler module, but the function you are passing to it must be defined elsewhere!

When to use ext.deferred

You may be wondering when to use ext.deferred, and when to stick with the built-in Task Queue API. Here are our suggestions.

You may want to use the deferred library if:

- You only use the task queue lightly.
- You want to refactor existing code to run on the Task Queue with a minimum of changes.
- You're writing a one-off maintenance task, such as a schema migration.
- Your app has many different types of background tasks, and writing a separate handler for each would be burdensome.
- Your task requires complex arguments that aren't easily serialized without using Pickle.
- You are writing a library for other apps that needs to do background work.

You may want to use the Task Queue API if:

- You need complete control over how tasks are queued and executed.
- You need better queue management or monitoring than deferred provides.
- You have high throughput, and overhead is important.
- You are building larger abstractions and need direct control over tasks.
- You like the webhook model better than the RPC model.

Naturally, you can use both the Task Queue API and the deferred library side-by-side, if your app has requirements that fit into both groups. If you've come up with your own uses for the deferred API, please let us and the rest of the community know about them in the discussion groups.
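To make the two failure-handling options described earlier concrete, here is a minimal sketch; the task function, its model, and its failure conditions are invented purely for illustration:

from google.appengine.ext import deferred

def send_welcome_mail(user_id):
    user = UserProfile.get_by_id(user_id)  # hypothetical model
    if user is None:
        # The record is gone; retrying can never succeed, so fail permanently.
        raise deferred.PermanentTaskFailure('User %s no longer exists' % user_id)
    if user.welcome_mail_sent:
        # Work already done; returning normally marks the task as successful.
        return
    mail_user(user)  # a transient exception raised here triggers an automatic retry

deferred.defer(send_welcome_mail, 42)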
https://cloud.google.com/appengine/articles/deferred?hl=zh-TW
CC-MAIN-2019-30
en
refinedweb
icollectionview 2720 Changing UILabel text from UICollectionView custom delegate class 9109 How do I display the standard checkmark on a UICollectionViewCell? 7685 Implementing swipe on view on top of UICollectionViewCell without swiping during scrolling 1676 I need help animating UICollectionView cell size when collection view size changes 919 UICollectionView not removing old cells after scroll 6961 Should we use UITableView when we have UICollectionView? 9710 UICollectionView - Remove All Items Or Update to Refresh it 1464 UIRefreshControl with UICollectionView in iOS7 8399 How to archive this layout like shopkick using UICollectionView 5756 Custom UICollectionViewLayout Different Cell Sizes 7003 UIPanGestureRecognizer not calling End state 2308 How to improve the UICollectionView's performance? Let it scroll smooth 7111 UICollectionView reloadData refreshes only first item 3081 UICollectionView Assertion failure 6998 Sending a info to detailview from UICollectionView is not working? 2138 UICollectionView vertical padding issue 2218 Perform Segue from Collection View Cell 4746 Multiple supplementary views in a UICollectionViewFlowLayout? 5515 Reloading collection view inside a table view cell happens after all cells in table view have been loaded 5025 Dynamic Table View Cell & Content Size conflicting & Pintrest Layout 6991 Get `UITableViewCell` indexPath using `UIScrollView` 4345 UicollectionView didSelectItemAtIndexPath is not called in uiTableviewcell 689 iOS : ScrollView delegate method inside of CollectionView not call 1448 CollectionView layout 5458 Swift: Unable to long-press drag cells to empty sections in UICollectionViewController and UITableView 3095 Failure in append items asynchronously in collection view 9222 Pagefold animation with UICollectionView 7297 UICollectionView appears black when running the App 6251 Making a UICollectionView continuously scroll 5567 UICollectionView Update Cellsizes after changing constraints 905 Can a UICollectionView and UITableView work in a single ViewController 649 Adding a HeaderView to CollectionView when using custom FlowLayout (Swift) 790 Swift iOS. How can I get indexPath's of cells in UICollectionView 6315 Custom layout UICollectionView swift 4266 iOS 9: Determine size of collection view in portrait and landscape 9995 how to determine which cell's textfield was just edited in a collection view? 9279 iOS UIButton in CollectionView to trigger custom animation on that cell alone 6259 Set insets to the CollectionView programmatically in Swift 8999 AFNetworking and Collection View Blocking Main Thread 7766 UICollectionView not working properly 4145 Finding the index path of an array from NSUserDefaults 1022 UICollectionView with Decoration 462 How to update size of cells in UICollectionView after cell data is set? 261 How to implement infinite horizontal and vertical scrolling in the UICollectionView in iOS 6? 
228 Preview UICollectionViewCell 7745 UICollectionViewCell reuse 9123 How to Display asset url In collection view ios 2892 label display text not well when use [cell setNeedsDisplay] 5350 UICollectionView Scrolling issue 4058 Use uicollectionview inside uitableviewcell 7242 UICollectionView using TLLayoutTransitioning resize with single row 6459 UICollectionView is not updating properly - used THStringyFlowLayout 9834 change the title for viewController into tabbarController 3655 From PFQueryTableView to UICollectionView example 9961 UICollectionView allow user to choose cell colour 8374 UICollectionView with CustomFlowLayout How to restricts scroll to only one page per scroll? 8519 Reload Collection view that is inside a uitableviewcell when data from a url is downloaded 6337 ios change label width at run time objective c 6217 UICollectionViewController not working in upgraded XCode 7 project 7824 how to trigger drawrect after rotation for a custom UICollectionView? 7739 Unexpectedly found nil for optional value - which is not nil after optional chaining 2513 Why isn't my UICollectionViewCell updating? 5213 UICollectionViewLayout (Converting from Swift to Objective-C) 2931 Change collection view cell width at runtime 4951 TableView and CollectionView in the same ViewController with Swift 6453 Animating UICollectionView,but cells disappear 1617 UICollectionView - Horizontal scroll, horizontal layout? 896 Left Align Cells in UICollectionView 8814 Error: Array index out of range Swift 7002 CollectionView Add extra cell "Add more" 2859 Customizing CollectionView 5959 Scroll collection view to index when collectionview paging is enabled 5746 Collectionview item position after scroll not displayed properly in iPhone 4s 7748 ScrollToItemAtIndexPath doesn't work sometimes 979 swift UICollectionView only showing black screen 6068 Hiding/Showing UINavigationBar causes UICollectionView to change it's frames 3983 UICollectionViewCell register class fails, but register nib works 528 How to wrap self sizing UICollectionViewCell 1320 How to dynamically add a cell to UICollectionView 7713 iOS Accessibility for CollectionView in a TableViewCell 3971 Select a cell by UIButton - CollectionViewCell - Swift 6430 **Prevent dragging of Two CollectionViews inside a horizontal Scrollview** « 1 » Hot Questions Finding businesses with Google Virtual Tour panoramas Adding a span class to a dynamic button Backbone.js Service Call Event Binding HSQLDB + SQuirreL: reading data by block How to active the keyup event at that time? Xamarin accessing Button.Clicked from one page to another DOM XML in PHP 5.6 Rails, Upload and Parse Text File to Database JFrame with KeyEventDispatcher Can't click 3 Arrow Keys Tap Gesture on animating UIView not working Hibernate Configuration with only hibernate.properties file. Without any hibernate.cfg.xml In chrome my drag and drop Java applet doesn't get the drop events, does chrome not support that on Mac? Detail about MSR_GS_BASE in linux x86 64 asp.net web application startup time. how to optimize? Passing image from one activity another activity Expected running time of the binary space partition algorithm ERROR!! AppGameContainer java.lang.ClassNotFoundException Saving dict as JSON so that they are human readable Get file size in Swift Extract transform and rotation matrices from homography? No line breaks using flexbox at IE Android change image in another activity based on listview posistion? 
How to Get Exact Location and Angles of Sectors on CD or DVD (Data Position Measurement)? Dockable window in Maya with PyQt Sorting on tuple in Scala Use checkboxes in MVC view and pass checked state back to controller for processing lua_dump unexpectedly (sometimes) pushes a userdata to the stack IntelliJ IDEA new message regarding Spring contexts PHP equivalent of enum in namespace [C/C++] How to onBackPressed() for android Fragments How can i know my array is in cache? What is the error in my code when i trying to pass values as Get through Ajax to a php file Google App Engine: deploying the same code to multiple applications why SaveChanges invokes when i'm debugging? Removing child view using id in titanium Python Sqlalchemy mysql "Cannot add or update a child row: a foreign key constraint fails" How to trigger best_in_place events Javascript Multi-level Nested Quotes Colorbar on Geopandas How to increase height when it's textwrap in textBox windowsphone8 Lotus Notes:Reading Replication History by Code PHP - recursive function foreach Macintosh shortcuts and tools Inno Script Studio Hangs Indefinitely When Clicking the Compile Button SWRevealViewController close rear view when tapping front view Replacing bookmarks in a MS Word 2010 document looses formatting center an image in a css circle Unfortunately app has stopped error eclipse? Replace illegal filename characters with valid characters failing to load log4j2 while running fatjar Selecting column names separated by a comma Just need some help figuring out and understanding the answers to my test questions Executing a shell script from java code and passing data Pubnub receive all messages when re sujbscribe get count of facebook likes with SDK api() Creating thumbnails for images on S3 GSON how do I test for an key/value pair without throwing a null pointer exception Application Architecture Question Does anyone have Oracle PL/SQL code to rename all synonyms to another schema? Javascript scoping for object Telerik RadGrid nopersistence (viewstate=false), doesn't keep the selectedindex on postback May I change .tpl file at opencart theme to .php? Tkinter .get() from entry gives weird values Docker compose - share volume Nginx Why won't Google Apps Script find a string in an array If equals error in VBA Use class in same package in Java in Ubuntu Submitting from a dynamically generated form SSIS Custom Data Flow Component - Redirect error rows Director/Lingo, making an application toggle between fullscreen and windowed? How to redirect page before refresh? Why does testng depend on junit? Get next week of year in Java with java.util.Calendar Java FlowChart framework for the web Error: A security token validation error occured for the received JWT token. Http Status Code: Unauthorized How does Linq Except Compare results How can I download a file instead of opening it in the browser (Firefox)? Contour selection in opencv Make Opera stop displaying each tab on the taskbar Trim values in asp.net detailsview when databinding Fault in Json object - three.js How to refer to the start-of a user-defined segment in a Visual Studio-project? Do session use cookies in ASP.NET? Add table row after certain Row Is there a way to set custom color for status bar text and background? Xelement constructor with Ternary Operator and nullable types l - menu block adding a class or id How do I read stdout/stderr output of a child process correctly? 
MySQL generates many new result-set Android BottomSheetBehavior setState() NullPointerException Storing HTML files Android app keeps screen on all the time. Screen lock permission not set Redirect to my own website when press "Update" button in Visual Studio - Tools - Extensions and Updates How to check if a home screen short-cut exists or not? Java GUI bringing up previous window Need help deciding on a database scheme for a reporting project (PHP) Passively get new access token after access token expires Symfony 2, choices from database UTL_FILE operations produces special characters in the output file MSSQL PHP cannot delete
http://www.brokencontrollers.com/tags/uicollectionview/page1.shtml
CC-MAIN-2019-30
en
refinedweb
From: Jeremy Siek (jsiek_at_[hidden])
Date: 2001-01-23 18:00:52

This situation is like when you're sitting in your car, stopped in front of a broken stop-light stuck on red. Carefully look both ways, and then run the red light!

I think the best short-term solution is to use std::abs and std::swap and to tell people to define abs(), swap(), etc. functions for their types in namespace std. I think we need to make a decision about this soon because it affects many, many things.

I think the best long term solution is to change the standard to make this legal.

On Tue, 23 Jan 2001, Paul Moore wrote:

gustav>
gustav> Paul.
gustav>
gustav> PS In case people think I'm just being petulant, I should point out
gustav> that without Koenig lookup, the swap() and abs() stuff simply
gustav> doesn't work. With abs(), you get compile errors, and with swap()
gustav> you get the wrong version being used (resulting in at best no
gustav> improvement, and at worst a severe pessimization of the code). I'd
gustav> rather leave it out than have broken code. And as I've said before, I
gustav> have to use MSVC, so I won't implement something which doesn't
gustav> work on that compiler.
gustav>
gustav> PPS Is it only me who feels that the "using std::swap" incantation
gustav> required to be able to use swap() unqualified and hence to get the
gustav> benefit of Koenig lookup is clumsy, unintuitive and inelegant?
gustav>
gustav>
gustav>
gustav> ----------------------------------------------------------------------
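For readers who have not seen the "incantation" mentioned in the PPS, here is a small sketch of the two-step swap idiom being discussed; the Widget type and its namespace are only illustrations:

#include <algorithm>  // std::swap lived here pre-C++11

namespace lib {

struct Widget {
    int* data;
};

// A swap overload in the same namespace as Widget, so that Koenig
// (argument-dependent) lookup can find it from an unqualified call.
void swap(Widget& a, Widget& b) {
    std::swap(a.data, b.data);
}

}  // namespace lib

template <typename T>
void reset_pair(T& a, T& b) {
    using std::swap;  // fall back to std::swap for types with no swap of their own
    swap(a, b);       // unqualified call: picks lib::swap for lib::Widget via Koenig lookup
}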
https://lists.boost.org/Archives/boost/2001/01/8377.php
CC-MAIN-2019-30
en
refinedweb
Tip: This topic describes how to write code that displays a route and directions inside your app. If you simply want to display directions, you can also use the Maps directions task, which launches the built-in Maps app. For more info, see How to use the Maps directions task for Windows Phone 8. Consider the possibility of adding text-to-speech output. For more info, see Text-to-speech (TTS) for Windows Phone 8.

This topic contains the following sections.

- Getting the phone's current location
- Displaying the route from the phone's current location to a location on a map
- Displaying directions from the phone's current location to a location on a map
- Results
- Related Topics

Getting the phone's current location

The following page code gets the phone's current location when the page is constructed.

Imports System
Imports System.Collections.Generic
Imports System.Windows
Imports Microsoft.Phone.Controls
Imports Location = System.Device.Location
Imports Geolocation = Windows.Devices.Geolocation
Imports Microsoft.Phone.Maps.Services
Imports Microsoft.Phone.Maps.Controls

Partial Public Class MainPage
    Inherits PhoneApplicationPage

    Dim MyCoordinates As New List(Of Location.GeoCoordinate)()

    ' Constructor
    Public Sub New()
        InitializeComponent()
        GetCoordinates()
    End Sub

    Private Async Sub GetCoordinates()
        ' Get the phone's current location.
        Dim MyGeolocator As New Geolocation.Geolocator()
        MyGeolocator.DesiredAccuracyInMeters = 5
        Dim MyGeoPosition As Geolocation.Geoposition = Nothing
        Try
            MyGeoPosition = Await MyGeolocator.GetGeopositionAsync(TimeSpan.FromMinutes(1), TimeSpan.FromSeconds(10))
            MyCoordinates.Add(New Location.GeoCoordinate(MyGeoPosition.Coordinate.Latitude, MyGeoPosition.Coordinate.Longitude))
        Catch uaex As UnauthorizedAccessException
            MessageBox.Show("Location is disabled in phone settings or capabilities are not checked.")
        Catch exc As Exception
            ' Something else happened while acquiring the location.
            MessageBox.Show(exc.Message)
        End Try
    End Sub

End Class

Displaying the route from the phone's current location to a location on a map

Add a Map control to the page and name it MyMap.

<maps:Map x:Name="MyMap" />

If you add the control by writing XAML, you also have to add the following xmlns declaration to the phone:PhoneApplicationPage element. If you drag and drop the Map control from the Toolbox, this declaration is added automatically.

xmlns:maps="clr-namespace:Microsoft.Phone.Maps.Controls;assembly=Microsoft.Phone.Maps"

In MainPage.xaml.cs or MainPage.xaml.vb, in the MainPage class, create the following class-level variables.

RouteQuery MyQuery = null;
GeocodeQuery Mygeocodequery = null;

Dim MyQuery As RouteQuery
Dim Mygeocodequery As GeocodeQuery

Then, inside GetCoordinates(), after the current position has been obtained, create a GeocodeQuery for the destination.

Mygeocodequery = new GeocodeQuery();
Mygeocodequery.SearchTerm = "Seattle, WA";
Mygeocodequery.GeoCoordinate = new GeoCoordinate(MyGeoPosition.Coordinate.Latitude, MyGeoPosition.Coordinate.Longitude);

Mygeocodequery = New GeocodeQuery()
Mygeocodequery.SearchTerm = "Seattle, WA"
Mygeocodequery.GeoCoordinate = New Location.GeoCoordinate(MyGeoPosition.Coordinate.Latitude, MyGeoPosition.Coordinate.Longitude)
Subscribe to the geocode query's QueryCompleted event and start the query asynchronously.

Mygeocodequery.QueryCompleted += Mygeocodequery_QueryCompleted;
Mygeocodequery.QueryAsync();

AddHandler Mygeocodequery.QueryCompleted, AddressOf Mygeocodequery_QueryCompleted
Mygeocodequery.QueryAsync()

When the geocode query completes, use its result to build the RouteQuery between the two coordinates.

Private Sub Mygeocodequery_QueryCompleted(sender As Object, e As QueryCompletedEventArgs(Of IList(Of MapLocation)))
    If e.[Error] Is Nothing Then
        MyQuery = New RouteQuery()
        MyCoordinates.Add(e.Result(0).GeoCoordinate)
        MyQuery.Waypoints = MyCoordinates
        AddHandler MyQuery.QueryCompleted, AddressOf MyQuery_QueryCompleted
        MyQuery.QueryAsync()
        Mygeocodequery.Dispose()
    End If
End Sub

Add the MyQuery_QueryCompleted event handler to the class. This code adds a route on the map between the two geocoordinate objects in MyCoordinates.

void MyQuery_QueryCompleted(object sender, QueryCompletedEventArgs<Route> e)
{
    if (e.Error == null)
    {
        Route MyRoute = e.Result;
        MapRoute MyMapRoute = new MapRoute(MyRoute);
        MyMap.AddRoute(MyMapRoute);
        MyQuery.Dispose();
    }
}

Displaying directions from the phone's current location to a location on a map

To show turn-by-turn directions, extend MyQuery_QueryCompleted so that it reads the InstructionText of each RouteManeuver in the route's legs and binds the resulting list to a list control (named RouteLLS in this example).

Private Sub MyQuery_QueryCompleted(sender As Object, e As QueryCompletedEventArgs(Of Route))
    If e.[Error] Is Nothing Then
        Dim MyRoute As Route = e.Result
        Dim RouteList As New List(Of String)()
        For Each leg As RouteLeg In MyRoute.Legs
            For Each maneuver As RouteManeuver In leg.Maneuvers
                RouteList.Add(maneuver.InstructionText)
            Next
        Next
        RouteLLS.ItemsSource = RouteList
        MyQuery.Dispose()
    End If
End Sub

Results

The following screenshot displays the result of running the sample that you created in this topic. Make sure that location is enabled in settings on the device on which you test the app.

See Also

Reference

Other Resources

How to add UIElements to a Map control in Windows Phone 8
https://docs.microsoft.com/en-us/previous-versions/windows/apps/jj244363(v%3Dvs.105)
CC-MAIN-2019-30
en
refinedweb
import BackNav from 'react-storefront/BackNav'

Props:

- The text to display representing the previous location.
- The url to navigate to when clicked. If omitted, this component will navigate back in the history when clicked.
- When displaying this component on a search results page (such as a subcategory), you can supply the SearchResultsModelBase instance here and this component will allow you to switch between grid and list views.
https://pwa.moovweb.com/v6.9.1/components/BackNav
CC-MAIN-2019-30
en
refinedweb
Introduction

This is a pwn challenge from CodeBlue CTF. As part of my tutorial plan, I take this one as an example of the House of Force technique.

Vulnerability Analysis

The source code of this challenge was already given in [1], and I list the vulnerable part below for analysis.

char *cgiDecodeString (char *text)
{
    char *cp, *xp;

    for (cp=text,xp=text; *cp; cp++)
    {
        if (*cp == '%')
        {
            if (strchr("0123456789ABCDEFabcdef", *(cp+1))
                && strchr("0123456789ABCDEFabcdef", *(cp+2)))
            {
                if (islower(*(cp+1)))
                    *(cp+1) = toupper(*(cp+1));
                if (islower(*(cp+2)))
                    *(cp+2) = toupper(*(cp+2));
                *(xp) = (*(cp+1) >= 'A' ? *(cp+1) - 'A' + 10 : *(cp+1) - '0' ) * 16
                      + (*(cp+2) >= 'A' ? *(cp+2) - 'A' + 10 : *(cp+2) - '0');
                xp++;cp+=2;
            }
        }
        else
        {
            *(xp++) = *cp;
        }
    }
    memset(xp, 0, cp-xp);
    return text;
}

The vulnerability lies in the two strchr calls that validate the characters following a '%'. To understand why this is a vulnerability, we first have to read the specification of the strchr function:

Locate first occurrence of character in string. Returns a pointer to the first occurrence of character in the C string str. The terminating null-character is considered part of the C string. Therefore, it can also be located in order to retrieve a pointer to the end of a string. [2]

It means that the terminating character ("\x00") will also be treated as a valid character of "0123456789ABCDEFabcdef". To be more specific, if the terminating character is located right after a '%' in the string, the vulnerable function will continue to parse the characters beyond the terminating character. In the decoding function, we can also see that it merges a 3-byte sequence starting with '%' (e.g. "%A1") into a single byte ("\xA1"), which means it shifts the remaining characters toward the front of the buffer. If the function can parse the characters after the terminating character, the attacker can merge characters that lie beyond the original terminator into the current string buffer while the decoding work is being done.
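To see the effect concretely, here is a small, self-contained Python sketch that mimics the decoder over a flat byte buffer; the buffer contents are made up purely for illustration and are not the challenge's actual heap layout:

HEXSET = b"0123456789ABCDEFabcdef"

def strchr_ok(c):
    # Mirrors strchr(HEXSET, c): the set's own NUL terminator also counts as a match.
    return c == 0 or c in HEXSET

def hexval(c):
    # Mirrors the C arithmetic, including the nonsense value it yields for a NUL byte.
    if chr(c).islower():
        c = ord(chr(c).upper())
    return c - ord('A') + 10 if c >= ord('A') else c - ord('0')

def cgi_decode(mem, start=0):
    cp = xp = start
    while mem[cp] != 0:
        if mem[cp] == ord('%'):
            if strchr_ok(mem[cp + 1]) and strchr_ok(mem[cp + 2]):
                mem[xp] = (hexval(mem[cp + 1]) * 16 + hexval(mem[cp + 2])) & 0xFF
                xp += 1
                cp += 3
            else:
                cp += 1  # '%' followed by non-hex is simply skipped, as in the C code
        else:
            mem[xp] = mem[cp]
            xp += 1
            cp += 1
    mem[xp:cp] = b"\x00" * (cp - xp)  # memset(xp, 0, cp - xp)

# "A%A" ends the logical string; "BC" stands in for whatever sits after the
# terminator on the heap (in the real exploit: the next chunk's metadata).
mem = bytearray(b"A%A\x00BC\x00\x00")
cgi_decode(mem)
print(mem)  # the bytes 'B' and 'C' have been pulled two positions into the buffer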
Exploitation from pwn import * DEBUG = int(sys.argv[1]); if(DEBUG == 0): r = remote("1.2.3.4", 23333); elif(DEBUG == 1): r = process("./nonamestill"); elif(DEBUG == 2): r = process("./nonamestill"); gdb.attach(r, '''source ./script'''); def create(size, url): r.recvuntil(">"); r.sendline("1"); r.recvuntil("size: "); r.sendline(str(size)); r.recvuntil("URL: "); r.sendline(url); def decode(index): r.recvuntil(">"); r.sendline("2"); r.recvuntil("index: "); r.sendline(str(index)); def list(): r.recvuntil(">"); r.sendline("3"); def delete(index): r.recvuntil(">"); r.sendline("4"); r.recvuntil("index"); r.sendline(str(index)); def exploit(): create(0x2528, "A"*0x251a + "%AA%AA" + "MAGIC" + "%A"); create(0x2528, "%31%25%00%00"+p32(0x0804b07c) + "AAAA"); create(0x2528, "11111"); libc = ELF("./libc.so.6"); decode(2); list(); r.recvuntil("2: "); leakedValue = u32(r.recv(4)); log.info("leaked value: 0x%x" % leakedValue); libcBase = leakedValue - 0x1b0d60; log.info("libc base address: 0x%x" % libcBase); systemRelAddr = libc.symbols['system']; systemAbsAddr = systemRelAddr + libcBase; log.info("system addr: 0x%x" % systemAbsAddr); for i in range(0, 0x16f): create(0x100, "padding"); create(0x20-8, "A"*0x4 + "MAGIC" +"%AA%AA%AA%AA%A"); create(0xa0-8, "%00"*4 + p32(0xfffffff1)+"AAAA"); delete(0); decode(0); list(); r.recvuntil("MAGIC"); r.recv(7); leakedValue = u32(r.recv(4)); log.info("leaked value: 0x%x" % leakedValue); topPtr = leakedValue + 0x18; log.info("top pointer address: 0x%x" % topPtr); evilSize = 0x804b03c - topPtr - 4*5; log.info("eval size: 0x%x" % evilSize); create(evilSize, "AAAA"); create(0x8, p32(systemAbsAddr)+"EFG"+"/bin/sh;"); r.interactive(); exploit(); Conclusion This is a very interesting pwn challenge. As Koike (hugeh0ge) says on his write-up, it’s important to know the specifications of a function. I solve this challenge with about 10 hours after reading his write-up. The write-up indeed saves me a lot of time reversing the binary code of the target. Reference [1] [2]
https://dangokyo.me/2017/11/29/codeblue-ctf-2017-pwn-nomamestill-write-up/
CC-MAIN-2019-30
en
refinedweb
MigrateAsync after using EnsureCreatedAsync I have an asp.net core mvc site in production. The DbInitializer was using this code: await context.Database.EnsureCreatedAsync() Now I found out that a migration was not being applied and I've seen this: In addition, the database that is created cannot be later updated using migrations. I've changed code to await context.Database.MigrateAsync() but no migrations are being applied and in my database dbo.__EFMigrationsHistory I don't see any records. note: my solution has already 4 migration classes in the Migrations folder, but they are added in MigrationHistory. 3 of them are applied because I had once recreated the database (still using ensurecreated). The last migration is not applied, as I now didn't recreate the database as it contains data and migrations are not applied now because I used before "EnsureCreatedAsync". How can I now apply and start using migrations in my existing database without losing any of my database data? I am using Rest API to connect to dialogflow using c# code. But it gives Authorisation issue. -. - Add JWT Bearer Authentication in Server Side Blazor (3.0.0-preview.6) I'm trying to add JWT Bearer authentication using Azure B2C. I'm using the default project template for creating Blazor Server Side application with Authentication. I chose B2C authentication at the creation page, which sets the default authentication using services.AddAuthentication(AzureADB2CDefaults.AuthenticationScheme) .AddAzureADB2C(o => Configuration.Bind("AzureAdB2C", o)); This works well. I'm trying to add JWT Bearer authentication instead. This is the code I'm using: services.AddAuthentication(AzureADB2CDefaults.JwtBearerAuthenticationScheme) .AddAzureADB2CBearer(o => Configuration.Bind("AzureAdB2C", o)); Additionally, I added "ClientId": "my_client_id"to the configuration section since it's required by the scheme and this code here points it's used when using this scheme However, when trying to Login I get error Sorry, there's nothing at this address. This suggests that the built-in controller isn't getting hit (link below). I also tried modifying the login link from /AzureADB2C/Account/SignInto /AzureADB2C/Account/SignIn/AzureADB2CJwtBearer, but I got the same result. The code here suggests the scheme name should get passed to the controller method, which is why I did that, but still same issue. I understand Authentication was added just recently and this is still experimental and looking at the code it looks like JWT should be supported, so I'm wondering if I'm missing something. Any help is greatly appreciated. 
- Razor code producing unexpected result when looping through a list of checkboxes What possible reason could there be for this razor code: // ShowSelectedMembers.cshtml @model MyApp.Models.MembersViewModel @for (int m = 0; m < Model.Members.Count; m++) { <input type="hidden" asp- <input type="hidden" asp- <input type="hidden" asp- <span>[debug-info: m = @m, id = @Model.Members[m].Id]</span> <span>@Model.Members[m].LastName</span>, <span>@Model.Members[m].FirstName</span> } to produce this HTML, when Model.Members.Count = 1, just containing the member with id 6653: <input type="hidden" data- <input type="hidden" id="Members_0__FirstName" name="Members[0].FirstName" value="Peter" /> <input type="hidden" id="Members_0__LastName" name="Members[0].LastName" value="Hanson" /> <span>[debug-info: m = 0, id = 6653]</span> <span>Swanson</span>, <span>Lisa</span> How can Members[0].Idhave the value of 6652 in the hidden field, and 6653 inside the <span>? This is the controller method for the view: public IActionResult ShowSelectedMembers(MembersViewModel vm) { vm.Members = vm.Members.Where(s => s.Selected).OrderBy(o => o.LastName).ThenBy(o => o.FirstName).ToList(); return View(vm); } This is the form which sends the whole member list to the controller method: // Index.cshtml @model MyApp.Models.MembersViewModel <form asp- <button type="submit">View selection</button> @if (Model.Members.Any()) { for (int i = 0; i < Model.Members.Count; i++) { Model.Members[i].Id Model.Members[i].FirstName Model.Members[i].LastName <input type="checkbox" asp- <input type="hidden" asp- <input type="hidden" asp- <input type="hidden" asp- } } </form> When the form data is sent to the controller method, it contains all the members in that view. The method then filters out all members with Selectedset to false. These are the ViewModels: public class MembersViewModel { // ... some more properties public List<MemberViewModel> Members { get; set; } } public class MemberViewModel { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public bool Selected { get; set; } // ... some more properties } UPDATE To clarify the flow of this operation: The Index-view contains a form with Selected-checkboxes for each member in the list. This form passes the whole list of members to the controller method ShowSelectedMembers. The controller method filters out members with Selectedset to false, and passes the selected members to the view ShowSelectedMembers. The view displays the filtered list, which now has corrupted data. - Using a table for input of multiple values and submitting them as a list of objects I'm going to have users that need to submit multiple lines into a database, I'm wanting to use a table to collect the input and then insert all the data into the database. In my head it looks like taking the first row of the table, clicking "Add another item" storing that row in a list of Objects... Repeat for N number of rows. When the user pushes the Submit button it will loop through the List of Objects and insert them. What would be the best approach to accomplishing something like this? See image for a brief example of what I'm going for. - ASP.NET Core 2.2 InvalidOperationException: The ConnectionString property has not been initialized I'm working on asp.net core web api application with entity framework core. Recently I updated application to support .net core 2.2 framework. I'm using Visual Studio 2019 as a code editor. 
After upgrading the project few weird exceptions are occurring at runtime, those are as follows. 1. Earlier I was able to add code migrations script easily but now it is throwing an exception 'Unable to create an object of type '...DbContext'. - After running project, 1st request getting served without an exception but consecutive request are failing with an error AggregateException: One or more errors occurred. (The ConnectionString property has not been initialized.) I observed error is throwing by CheckPasswordAsync method of UserManager class which belongs to Asp.net Identity provider. Please check this image to see exception details - How to properly define many to many EF Core 2.2 configuration via ids only? A project using Entity Framework Core 2.2 contains a pair of entities with a many-to-many relationship. The entities are as follows: class Feature { int Id { get; set;} } class House { int Id { get; set; } ICollection<int> FeatureIds { get; set; } } As EF Core does not support many-to-many relationship natively yet, I have created a join class. class HouseFeature { int FeatureId { get; set; } int HouseId { get; set; } } None of the three classes contains a property for the other entity, only its id. The Featureentity neither contains a list of houses. The configuration class for HouseFeaturelooks like: class HouseFeatureConfiguration : IEntityTypeConfiguration<HouseFeature> { void Configure(EntityTypeBuilder<HouseFeature> houseFeatureConfiguration) { } } How to configure the EF Core relationship? All examples I found require properties like Feature Feature { get; set; }on top of int FeatureId { get; set; }. I have no use for these Feature Feature { get; set; }so I prefer to not have them in the model. Is it possible to define the relationship including foreign keys while having id-fields only? How? - EF Core: Is it possible to have a dynamic defaultValue? E.g. depending on another value? For example I have two simple string properties inside my model: public class Model { public string BaseString {get;set;} public string MyValue {get;set;} } Inside my OnModelCreating: modelBuilder .Entity<Model>() .Property(f => f.MyValue) .ValueGeneratedOnAdd() .HasDefaultValueSQL(//insert first letter of inserted BaseString property + something else); Is something like that possible? Or can default values only be constants? What is the correct approach?
http://quabr.com/56633708/ef-core-migrateasync-after-using-ensurecreatedasync
CC-MAIN-2019-30
en
refinedweb
🐘 Non-blocking, event-driven Swift client for PostgreSQL built on SwiftNIO.

Major Releases

The table below shows a list of PostgresNIO major releases alongside their compatible NIO and Swift versions.

Version | NIO | Swift | SPM
--- | --- | --- | ---
1.0 (alpha) | 2.0+ | 5.0+ | from: "1.0.0-alpha"

Use the SPM string to easily include the dependency in your Package.swift file.

.package(url: "", from: ...)

Supported Platforms

PostgresNIO supports the following platforms:

- Ubuntu 14.04+
- macOS 10.12+

Overview

PostgresNIO is a client package for connecting to, authorizing, and querying a PostgreSQL server. At the heart of this module are NIO channel handlers for parsing and serializing messages in PostgreSQL's proprietary wire protocol. These channel handlers are combined in a request / response style connection type that provides a convenient, client-like interface for performing queries. Support for both simple (text) and parameterized (binary) querying is provided out of the box alongside a PostgresData type that handles conversion between PostgreSQL's wire format and native Swift types.

Motivation

Most Swift implementations of Postgres clients are based on the libpq C library, which handles transport internally. Building a library directly on top of Postgres' wire protocol using SwiftNIO should yield a more reliable, maintainable, and performant interface for PostgreSQL databases.

Goals

This package is meant to be a low-level, unopinionated PostgreSQL wire-protocol implementation for Swift. The hope is that higher-level packages can share PostgresNIO as a foundation for interacting with PostgreSQL servers without needing to duplicate complex logic. Because of this, PostgresNIO excludes some important concepts for the sake of simplicity, such as:

- Connection pooling
- Swift Codable integration
- Query building

If you are looking for a PostgreSQL client package to use in your project, take a look at these higher-level packages built on top of PostgresNIO:

Dependencies

This package has four dependencies:

- apple/swift-nio for IO
- apple/swift-nio-ssl for TLS
- apple/swift-log for logging
- apple/swift-metrics for metrics

This package has no additional system dependencies.

API Docs

Check out the PostgresNIO API docs for a detailed look at all of the classes, structs, protocols, and more.

Getting Started

This section will provide a quick look at using PostgresNIO.

Creating a Connection

The first step to making a query is creating a new PostgresConnection. The minimum requirements to create one are a SocketAddress and an EventLoop.

import PostgresNIO

let eventLoop: EventLoop = ...
let conn = try PostgresConnection.connect(
    to: .makeAddressResolvingHost("my.psql.server", port: 5432),
    on: eventLoop
).wait()

Note: These examples will make use of wait() for simplicity. This is appropriate if you are using PostgresNIO on the main thread, like for a CLI tool or in tests. However, you should never use wait() on an event loop.

There are a few ways to create a SocketAddress:

- init(ipAddress: String, port: Int)
- init(unixDomainSocketPath: String)
- makeAddressResolvingHost(_ host: String, port: Int)

There are also some additional arguments you can supply to connect:

- tlsConfiguration: An optional TLSConfiguration struct. If supplied, the PostgreSQL connection will be upgraded to use SSL.
- serverHostname: An optional String to use in conjunction with tlsConfiguration to specify the server's hostname.

connect will return a future PostgresConnection, or an error if it could not connect.
Client Protocol

Interaction with a server revolves around the PostgresClient protocol. This protocol includes methods like query(_:) for executing SQL queries and reading the resulting rows. PostgresConnection is the default implementation of PostgresClient provided by this package. Assume the client here is the connection from the previous example.

import PostgresNIO

let client: PostgresClient = ...
// now we can use client to do queries

Simple Query

Simple (or text) queries allow you to execute a SQL string on the connected PostgreSQL server. These queries do not support binding parameters, so any values sent must be escaped manually. These queries are most useful for schema or transactional queries, or simple selects. Note that values returned by simple queries will be transferred in the less efficient text format.

simpleQuery has two overloads: one that returns an array of rows, and one that accepts a closure for handling each row as it is returned.

let rows = try client.simpleQuery("SELECT version()").wait()
print(rows) // [["version": "11.0.0"]]

try client.simpleQuery("SELECT version()") { row in
    print(row) // ["version": "11.0.0"]
}.wait()

Parameterized Query

Parameterized (or binary) queries allow you to execute a SQL string on the connected PostgreSQL server. These queries support passing bound parameters as a separate argument. Each parameter is represented in the SQL string using incrementing placeholders, starting at $1. These queries are most useful for selecting, inserting, and updating data. Data for these queries is transferred using the highly efficient binary format.

Just like simpleQuery, query also offers two overloads: one that returns an array of rows, and one that accepts a closure for handling each row as it is returned.

let rows = try client.query("SELECT * FROM planets WHERE name = $1", ["Earth"]).wait()
print(rows) // [["id": 42, "name": "Earth"]]

try client.query("SELECT * FROM planets WHERE name = $1", ["Earth"]) { row in
    print(row) // ["id": 42, "name": "Earth"]
}.wait()

Rows and Data

Both simpleQuery and query return the same PostgresRow type. Columns can be fetched from the row using the column(_: String) method.

let row: PostgresRow = ...
let version = row.column("version")
print(version) // PostgresData?

PostgresRow columns are stored as PostgresData. This struct contains the raw bytes returned by PostgreSQL as well as some information for parsing them, such as:

- Postgres column type
- Wire format: binary or text
- Value as array of bytes

PostgresData has a variety of convenience methods for converting column data to usable Swift types.

let data: PostgresData = ...
print(data.string) // String?
print(data.int) // Int?
print(data.int8) // Int8?
print(data.int16) // Int16?
print(data.int32) // Int32?
print(data.int64) // Int64?
print(data.uint) // UInt?
print(data.uint8) // UInt8?
print(data.uint16) // UInt16?
print(data.uint32) // UInt32?
print(data.uint64) // UInt64?
print(data.bool) // Bool?
print(try data.jsonb(as: Foo.self)) // Foo?
print(data.float) // Float?
print(data.double) // Double?
print(data.date) // Date?
print(data.uuid) // UUID?
print(data.numeric) // PostgresNumeric?

PostgresData is also used for sending data to the server via parameterized values. To create PostgresData from a Swift type, use the available initializer methods.
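As a rough sketch of creating PostgresData values from Swift types and binding them as query parameters (the initializer labels below, string: and int:, are assumptions based on the convenience accessors listed above and are not confirmed against a specific release):

import PostgresNIO

// Wrap native Swift values in PostgresData before binding them to a query.
let name = PostgresData(string: "Earth")
let id = PostgresData(int: 42)

let rows = try client.query(
    "SELECT * FROM planets WHERE name = $1 AND id = $2",
    [name, id]
).wait()
print(rows) // rows whose columns can be read back via row.column(_:)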
Releases

1.0.0-alpha.1.3 - Jun 25, 2019

New:
- PostgresData(bytes:) and postgresData.bytes methods for going to / from [UInt8] (#38)
- Foundation.Data now conforms to PostgresDataConvertible (#38)
- PostgresData(jsonb:) and postgresData.jsonb() methods for going to / from JSON (#36)
- New PostgresJSONBCodable protocol for allowing codable types to automatically serialize as JSONB (#36)

1.0.0-alpha.1.2 - Jun 12, 2019

New:
- Added Bool support to PostgresData. (#32, #33)

1.0.0-alpha.1.1 - Jun 7, 2019

New:
- Conforms UUID to PostgresDataConvertible
- Conforms Optional to PostgresDataConvertible
- Conforms PostgresData to CustomDebugStringConvertible

1.0.0-alpha.1 - Jun 6, 2019

More information on Vapor 4 alpha releases:
API Docs:
https://swiftpack.co/package/vapor/postgres-nio
CC-MAIN-2019-30
en
refinedweb
Selenium: difference between POM (Page Object Model) and Page Factory

Seems like Page Object Model and Page Factory are doing similar things.

IpmObjectInitializer initialize = new IpmObjectInitializer(driver.getWebDriver());

By using page factory: initialize elements in the BatchCreationPageFactory class

batchCreationPageFactory = initialize.getBatchCreationPageFactoryObj();

Page Object Model is a design pattern to create an Object Repository for web UI elements. Page Factory, on the other hand, is a built-in class in Selenium for maintaining an object repository; to use it, we import org.openqa.selenium.support.PageFactory.

A page class written without Page Factory locates its elements explicitly:

public class LogInPage {

    private WebElement usrnm;
    private WebElement pwd;

    public LogInPage() {
    }

    public void locateElements() {
        usrnm = driver.findElement(By.id("userName"));
        pwd = driver.findElement(By.id("password"));
    }

    public void doLogIn() {
        usrnm.sendKeys("qwe");
        pwd.sendKeys("123");
    }
}

With Page Factory, the initElements() call can be used to easily look up elements in the page class, and Page Factory allows storing page elements in cache memory using the @CacheLookup annotation. So whichever elements we have defined in a different class can be initialized by using the Page Factory library:

public class LogInPage {

    @FindBy(id="userName")
    private WebElement usrnm;

    @FindBy(id="password")
    private WebElement pwd;

    public LogInPage() {
        PageFactory.initElements(driver, this); // initialize the members like driver.findElement()
    }

    public void doLogIn() {
        usrnm.sendKeys("qwe");
        pwd.sendKeys("123");
    }
}
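The @CacheLookup annotation mentioned above sits alongside @FindBy on a field; a minimal sketch, reusing the field from the example above:

import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.CacheLookup;
import org.openqa.selenium.support.FindBy;

public class LogInPageCached {

    // Located once, then served from the cache on later accesses.
    // Suitable for static elements, not for elements that get re-rendered.
    @FindBy(id = "userName")
    @CacheLookup
    private WebElement usrnm;
}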
https://www.edureka.co/community/1621/selenium-diff-between-pom-page-object-model-and-page-factory?show=41208
CC-MAIN-2019-30
en
refinedweb
Run thread next to UI Thread I am currently developing a game using Google Maps. I want to run a thread which is executed next to the UI Thread. How can I do that? See also questions close to this topic - How to compare objects in two ways? I know how to compare objects on a single basis. I want to show the User class separately by the age and date of creation, but the compare method seems to be the only one in the class. Creating two objects would make the adapter too complex. UserList<User> userlist_time UserList<User> userlist_age Collections.sort(userlist_time); ??? new UserAdapter(getContext(), userlist_time) - Android Product flavours not allowing demo and full version to install over another I am using following manifest file details. I am trying to create a demo version and full version to upload to store. If user installs DEMO and then tries to install full version, it should automatically update the Demo to full version. It was working before, however adding the dimension is now required by Android. After adding dimensions, once I have installed Demo, its unable to install Full version on top of it, giving App not Installed Error. What am I missing? -- To be noted that while making builds , I create release versions of both apps. // Manifest version information! def versionMajor = 5 def versionMinor = 20 def versionPatch = 0 def versionBuild = 2 // bump for dogfood builds, public betas, etc. def gitSha = 'git rev-parse --short HEAD'.execute([], project.rootDir).text.trim() def buildTime = new Date().format("yyyy-MM-dd'T'HH:mm'Z'", TimeZone.getTimeZone("UTC")) def isTravis = "true".equals(System.getenv("TRAVIS")) def preDexEnabled = "true".equals(System.getProperty("pre-dex", "true")) def baseAndFullAppId = "com.app.appname" android { compileSdkVersion 28 lintOptions { checkReleaseBuilds false abortOnError false } defaultConfig { dimension "data" applicationId baseAndFullAppId minSdkVersion 17 targetSdkVersion 28 versionCode versionMajor * 10000 + versionMinor * 1000 + versionPatch * 100 + versionBuild versionName "${versionMajor}.${versionMinor}" manifestPlaceholders = [HOCKEYAPP_APP_ID: "xxxxxxxx"] multiDexEnabled true buildConfigField "String", "GIT_SHA", "\"${gitSha}\"" buildConfigField "String", "BUILD_TIME", "\"${buildTime}\"" buildConfigField "String", "FULL_APP_PACKAGE", "\"${baseAndFullAppId}\"" } flavorDimensions "version" productFlavors { full { dimension "version" buildConfigField "int", "AREA_LIMIT", "-1" } demo { dimension "version" applicationId 'com.app.appnamelite' buildConfigField "int", "AREA_LIMIT", "2" } } signingConfigs { debug { storeFile rootProject.file('debug.keystore') storePassword 'android' keyAlias 'android' keyPassword 'android' } release { storeFile rootProject.file('release.keystore') storePassword 'xxxx' keyAlias 'xx' keyPassword 'xxxx' } } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' signingConfig signingConfigs.release } debug { versionNameSuffix '-dev' signingConfig signingConfigs.debug } } dexOptions { jumboMode true // Skip pre-dexing when running on Travis CI or when disabled via -Dpre-dex=false. preDexLibraries = preDexEnabled && !isTravis } } - Why the expandable list does not show up in the tabbed activity? I tried to put an expandable list into a fragment and display it through a tabbed activity. There is no syntax as far as I knew. However, the list does not show up when I run the app. 
This is the android version I am using below: android { compileSdkVersion 29 buildToolsVersion "29.0.1" defaultConfig { applicationId "com.example.chineseapmed" minSdkVersion 23' } } } I have also fixed all the deprecated class PageTwo.java package com.example.chineseapmed; import android.os.Bundle; import androidx.fragment.app.Fragment; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ExpandableListView; import java.util.ArrayList; import java.util.HashMap; import java.util.List; public class PageTwo extends Fragment { ExpandableListAdapter listAdapter; ExpandableListView expListView; List<String> listDataHeader; HashMap<String, List<String>> listDataChild; @Override @SuppressWarnings("deprecation") public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.fragment_page_two, container, false); expListView = rootView.findViewById(R.id.lvExp); prepareListData(); listAdapter = new ExpandableListAdapter(getContext(), listDataHeader, listDataChild); expListView.setAdapter(listAdapter); return rootView; } private void prepareListData() { listDataHeader = new ArrayList<String>(); listDataChild = new HashMap<String, List<String>>(); // Adding child data listDataHeader.add("Top 250"); listDataHeader.add("Now Showing"); listDataHeader.add("Coming Soon.."); // Adding child data List<String> top250 = new ArrayList<String>(); top250.add("The Shawshank Redemption"); top250.add("The Godfather"); top250.add("The Godfather: Part II"); top250.add("Pulp Fiction"); top250.add("The Good, the Bad and the Ugly"); top250.add("The Dark Knight"); top250.add("12 Angry Men"); List<String> nowShowing = new ArrayList<String>(); nowShowing.add("The Conjuring"); nowShowing.add("Despicable Me 2"); nowShowing.add("Turbo"); nowShowing.add("Grown Ups 2"); nowShowing.add("Red 212"); nowShowing.add("The Wolverine"); List<String> comingSoon = new ArrayList<String>(); comingSoon.add("2 Guns"); comingSoon.add("The Smurfs 2"); comingSoon.add("The Spectacular Now"); comingSoon.add("The Canyons"); comingSoon.add("Europa Report"); listDataChild.put(listDataHeader.get(0), top250); // Header, Child data listDataChild.put(listDataHeader.get(1), nowShowing); listDataChild.put(listDataHeader.get(2), comingSoon); } } fragment_page_two.xml <?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns: <ExpandableListView android: </FrameLayout> I expect there is a list in the tab but it is blank - Is my approach to implement Producer-Consumer problem correct? I have implemented producer problem using wait/notify combination. Could someone please let me know if my understanding on producer consumer problem is correct or not and if my implementation is correct/optimized? Now i'm thinking how to implement the same problem using ExecutorService and CountDownLatch, ReentrantLock, CyclicBarrier? Is there any way to do it? Meanwhile I will try to see if I can implement the problem solution using the latch. 
import java.util.ArrayList; import java.util.EmptyStackException; import java.util.Random; public class ProducerConsumerProblem { private Object syncher = new Object(); private volatile ArrayList<Integer> sharedBuffer = new ArrayList<Integer>(); public static void main(String[] args) { ProducerConsumerProblem object = new ProducerConsumerProblem(); Thread producerThread = new Thread(() -> { object.produceData(); },"Producer"); Thread consumerThread = new Thread(() -> { object.consumeData(); },"Consumer"); producerThread.start(); consumerThread.start(); } public void produceData() { Random randomNumber = new Random(); while(true) { synchronized (syncher) { if(sharedBuffer.size() == 1) { try { //System.out.println("Producer waiting..."); syncher.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } Integer producedElem = randomNumber.nextInt(10); System.out.println("+++ Produced: "+producedElem); sharedBuffer.add(producedElem); try { Thread.sleep(2000); } catch (InterruptedException e) { e.printStackTrace(); } syncher.notify(); } } } public void consumeData() { while(true) { synchronized (syncher) { while(sharedBuffer.size() == 0) { try { //System.out.println("Consumer waiting..."); syncher.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } Integer consumedElem = sharedBuffer.stream().findAny().orElseThrow(()-> new EmptyStackException()); System.out.println("--- Consumed: "+consumedElem); sharedBuffer.remove(consumedElem); try { Thread.sleep(2000); } catch (InterruptedException e) { e.printStackTrace(); } syncher.notify(); } } } } - How can I share String messages with multiple producers and a single receiver in Rust (crossbeam)? I have a project where multiple threads will transmit readings in String format, and I want them to be consumed by one handler thread. Unfortunately, Strings don't implement Copy/Clone so I can't pass references of my crossbeam channel so a second thread without getting the error --> src/main.rs:71:30 | 61 | let (tx_ws, rx_ws) = unbounded(); | --------- move occurs because `tx_ws` has type `crossbeam::Sender<node::WebsocketResponse>`, which does not implement the `Copy` trait ... 70 | let node0_thread = thread::spawn(move || node0::run(Some(&n0_settings), tx_ws.clone())); | ------- value moved into closure here --------- variable moved due to use in closure 71 | let node1_thread = thread::spawn(move || node1::run(Some(&n1_settings), tx_ws.clone())); | ^^^^^^^ value used here after move --------- use occurs due to use in closure What tricks do you guys have to get around this? I understand String is a non-boxed type, but not sure how to get around it. Is there another way to send String like messages over a crossbeam-channel? - Are `thenRunAsync(...)` and `CompletableFuture.runAsync(() -> { ... });` related at all? I need to perform some extra tasks but let the original thread finish up, e.g. send back an HTTP response. I think I can just do this: return mainTasksFuture.thenApply(response -> { CompletableFuture.runAsync(() -> { // extra tasks }); return response; }); But I remembered there's a thenRunAsync. Is return mainTasksFuture.thenApply(response -> { return response; }).thenRunAsync(() -> { // extra tasks }); basically another way to do the same thing? In other words, are the then*Asyncmethods terminators (completion methods) that return the previous chain's result in the original thread, then spawn a new thread to execute the rest? I'm almost certain the answer is no. 
It just seems like it might be, purely based on the method names, to someone new to CompletableFutures. I wanted a confirmation though, in case what I'm reading about ForkJoinPool.commonPool is actually saying what I'm doubting, just in a different way.
- fix the animation in the google-map-marker with Polymer I am trying to add an animation to google-map-marker within Polymer 3.0, but the animation does not work. I found the attributes on google-map-marker in webcomponents for adding an animation (BOUNCE), then I tested it with the animation set to none. <google-map ...> <google-map-marker ...></google-map-marker> <template is="dom-repeat" items="[[locations]]"> <google-map-marker ...></google-map-marker> </template> </google-map> There is no error when I test the animation in google-map-marker.
- Google Maps JavaScript API multiple times on this page error I am not trying to beat a dead horse. Honest. I have a self-contained js library that I distribute to other devs. One of the methods renders a Google map on a div supplied by the parent page. I have no control over what's going on in the page that calls my library. Occasionally I get this error: "You have included the Google Maps JavaScript API multiple times on this page. This may cause unexpected errors.", despite the fact that I check whether the Google Maps API is already loaded on the page. I check it in the following manner (pseudo code): if (typeof google === 'object' && typeof google.maps === 'object') { // google maps already loaded in the parent page - proceed } else { // google maps not in the parent page. Load my own copy var script = document.createElement('script'); script.src = '...'; document.getElementsByTagName('head')[0].appendChild(script); } So after some debugging, I found the following scenario. - The page references Google Maps and my library. - My library loads and the page calls a method. - The library checks if Google Maps is loaded. At this point it's still downloading, so my check fails and I load my own. - The Google Maps referenced on the page finally loads. - The copy of Google Maps my library requested also loads and results in the "you have included the Google Maps JavaScript API multiple times on this page" error. So my questions: - Is it possible to catch this error and revert to using the copy of GM requested by the parent page? - Is it possible to detect if the Google Maps library download is in progress? - Is there a simple solution that I am missing? P.S. I could tell devs that use the Google Maps API to wait until the map has loaded prior to calling my library. However, there are too many people to tell and I'd like to take care of this problem on my end, if possible.
- How does an animation work on the UI thread without blocking other messages and runnables in the message queue of the UI thread? I am working on some old Android code that looks something like this: public void TestMethod() { // handler posting on main thread handler.post(() -> { // Invokes method(); }); animation.addListener(new AnimatorListenerAdapter(){ @Override public void onAnimationEnd() { // Do some stuff; } }); animation.start(); } As per my understanding, the animation lifecycle callbacks always execute on the UI thread. Since method() is posted first on the message queue, it should be executed before onAnimationEnd(), but sometimes (3/10 times) onAnimationEnd executes before method(). Therefore I am now confused about Android animation (surely I am missing something). Questions: What should be the ideal flow in this code?
Between method() and onAnimationEnd(), which one will be executed first, and why? How does an Android animation execute on the UI thread without blocking other messages and runnables in the message queue of the UI thread?
- Cancelling Flutter's compute function from the UI I have a function that uses the compute method to process some work without locking the UI thread up. This works fine, but I would like to give the user the option to cancel the processing. Is there a way of cancelling the compute function? I thought an exception could do it, but as the method I am computing is static I wasn't sure how to trigger the exception. Here is the function for reference. Thanks :)
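One commonly suggested workaround, sketched below with placeholder function names (compute itself exposes no cancellation hook): spawn the work on a raw Isolate instead, which hands back a handle the UI side can kill.
import 'dart:isolate';

// Start the heavy processing on its own isolate and keep the handle around.
Future<Isolate> startHeavyWork(ReceivePort results, String input) {
  return Isolate.spawn(_doHeavyWork, [results.sendPort, input]);
}

// Called from the UI (e.g. a cancel button): killing the isolate
// abandons the computation immediately.
void cancelHeavyWork(Isolate worker, ReceivePort results) {
  worker.kill(priority: Isolate.immediate);
  results.close();
}

void _doHeavyWork(List<dynamic> args) {
  final SendPort sendPort = args[0];
  final String input = args[1];
  // ... the expensive processing that compute() used to run ...
  sendPort.send('processed: $input');
}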
http://quabr.com/56463755/run-thread-next-to-ui-thread
CC-MAIN-2019-30
en
refinedweb
customization 9079 How can I customise the worldpay help pages? 4968 Need help to change Custom tab bar (in top) icon when receive a notification from background in iPad app 6957 Display label if key exists, leave blank if it doesn't 7271 Mantis Bug tracker severity options customization 638 CKEditor add CSS styling to Preview and Editor 729 How to make a Spring Application' Business Logic User Configurable? 9679 Is there a way to change the output of validation_errors() in Codeigniter? 8339 Highlight Specific Dates in Angular UI DatePicker 9122 Graphic spot appearing on the screen when customizing UIActionSheet 9691 Progressbarstyle not working in android 1150 MS Word and RibbonX error (Wrong number of arguments ...) 9511 How to change inside background color of UISearchBar component on iOS 101 Replace default smileys menu 4124 Suppose I want to customize the look and feel of my website for my corporate clients 5004 Can you create a custom jQuery library build? 4267 Is Django admin difficult to customize? 3107 Design pattern to invoke different business rules on presented UI elements 6967 Woocommerce - Filtering products based on attribute 2828 Cannot delete SAP Change Request due to locked objects 9654 Highlighting curly brackets with a different colour in Eclipse 9138 Custom JavaScript snippets for emmet (in Sublime Text 2) 811 How do you set the default foreground color for code in Eclipse? 1296 Openresty custom json access log 8448 Trigger validation based on save button acumatica 337 Customize Eclipse editor to add new features specific to a proprietary language 5933 I want the same UITableView style across various table views - how can i do this? 8369 Android Animate Rotate 5852 Customize WPF Ribbon 6935 combining initializer with custom bootstrap and stylebootstrap 9875 WP search function to search custom table 2239 Customizing ListView Items with 3 TextView 9361 add custom tab to crm home page and not to entities in crm 2011 9697 How can I customize each row of Android ListView? 8236 How can I remove the "unlink relationship" either permanently, user based or amount of relationships? 423 Changing the color of UITableViewCellDeleteConfirmationControl(Delete Button) in UITableViewCell 865 Use of XAML ResourceDictionary outside of EXE to function like CSS file 1131 How to use multi-customized UITableCell in just one UITableView 6810 VS2010 modify intellisense popup order via an addin? 3495 JCalendar: how to change the foreground color of certain days? 7870 How can I add a login area with a custom skin in DotNetNuke? 7259 freeradius (MySQL config) adding custom attributes to the reply-item 8456 UIAlertController customization
http://www.brokencontrollers.com/tags/customization/page1.shtml
CC-MAIN-2019-30
en
refinedweb
In this Java Video Tutorial I cover how to use Java threads. A thread is just a block of code that is expected to execute while other blocks of code execute. That’s it. When you want to execute more than one block of code at a time you have to alert Java. In this video I show you how to alert the interpreter. Part 1 of this series is here Java Video Tutorial. Heavily commented code follows the video. If you like videos like this share it The country codes I mentioned are here Country Codes Code From the Video LESSONSEVENTEEN.JAVA public class LessonSeventeen{ public static void main(String[] args){ // Create a new Thread that executes the code in GetTime20 Thread getTime = new GetTime20(); // Create a new Thread created using the Runnable interface // Execute the code in run after 10 seconds Runnable getMail = new GetTheMail(10); Runnable getMailAgain = new GetTheMail(20); // Call for the code in the method run to execute getTime.start(); new Thread(getMail).start(); new Thread(getMailAgain).start(); } } GETTIME20.JAVA // By using threads you can execute multiple blocks // of code at the same time. This program will output // the current time and then at a specific time execute // other code without stopping the time output // Need this for Date and Locale classes import java.util.*; // Need this to format the dates import java.text.DateFormat; // By extending the Thread class you can run your code // concurrently with other threads public class GetTime20 extends Thread{ // All of the code that the thread executes must be // in the run method, or be in a method called for // from inside of the run method public void run(){ // Creating fields that will contain date info Date rightNow; Locale currentLocale; DateFormat timeFormatter; DateFormat dateFormatter; String timeOutput; String dateOutput; // Output the current date and time 20 times for(int i = 1; i <= 20; i++){ // A Date object contains date and time data rightNow = new Date(); // Locale defines time formats depending on location currentLocale = new Locale("en", "US"); // DateFormat allows you to define dates / times using predefined // styles DEFAULT, SHORT, MEDIUM, LONG, or FULL // getTimeInstance only outputs time information timeFormatter = DateFormat.getTimeInstance(DateFormat.DEFAULT, currentLocale); // getDateInstance only outputs time information dateFormatter = DateFormat.getDateInstance(DateFormat.DEFAULT, currentLocale); // Convert the time and date into Strings timeOutput = timeFormatter.format(rightNow); dateOutput = dateFormatter.format(rightNow); System.out.println(timeOutput); System.out.println(dateOutput); System.out.println(); // You must wrap the sleep method in error handling // code to catch the InterruptedException exception // sleep pauses thread execution for 2 seconds below try { Thread.sleep(2000); } catch(InterruptedException e) {} } } } GETTHEMAIL.JAVA // You can use the Runnable interface instead of // wasting your 1 class extension. 
public class GetTheMail implements Runnable { // Stores the number of seconds before the code // will be executed private int startTime; // Constructor that sets the wait time for each // new Thread public GetTheMail(int startTime){ this.startTime = startTime; } // All of the code that the thread executes must be // in the run method, or be in a method called for // from inside of the run method public void run(){ try { // Don't execute until 10 seconds has passed if // startTime equals 10 Thread.sleep(startTime * 1000); } catch(InterruptedException e) {} System.out.println("Checking for Mail"); } } Hey nice video, but i have a question, although localization is used to change the entire webpage into country specific language, but what i have to do, if i want to convert the entire webpage into state/city specific language (ie. hindi, marathi), how can i do this??? I’ll get into language specific translation when I start making JavaServer pages. This is basically for stand alone applications instead of web applications. I’ll do my best to explain that topic, but I’m not very well versed in other speaking languages thank you for response, i also want to know for web applications, if you know anything about this, please let me know, where should i look for this?? Thank you.. I will cover developing web applications with Java very soon. okie, thank you very much… Hey! I like your java tutorials. I think your regex tutorial is best. I want to know which IDE you use because I use notepad++ & in notepad++ console window is not at the right side. And, which screen recording software you use for making these tuts because it’s just awesome!!!. I use Eclipse because it is free and looks exactly the same on every OS. A definite plus if you are making video tutorials for the world. I record my screencasts with Quicktime Player and edit them with iMovie. I used to think that Quicktime Player was the best, but since the last update it is kind of broken. I plan on purchasing a new screencasting program but haven’t decided which one yet. Thanks for stopping by Thanks for reply!! If possible then make video tutorial about java.io(Input/Output) or about file. I want to completely understand which class used for what because there is streamreader & bufferedreader & char stream and Scanner etc.It is a bit confusing!!!! I’ll be covering that later in the current tutorial. Don’t worry I’ll get there soon hi 🙂 ur tutorials are amazing . i feel very thankful about what u do . i have a questions if u dont mind 1. i am learning java to be able to bild app for android , so if i master the 60 lessons that u have made , do u think it’s enough to start learn android programing ?? 2. on this lesson is the” runnable” interface is built already in java ?? and what ist exactly thanks in advance ^_^ Thank you very much 🙂 Yes, if you understand Java you will understand how to make Android apps. I’m going to create a 6 month long Android tutorial in which I’ll cover everything. I’ll make very specific apps. You can expect it to be about 90 videos in length (15 minutes each) based on my estimates. It will be the largest, most in depth Android tutorial ever made. Runnable is just like any other interface in which it is a blueprint detailing what methods you need to use. Runnable works by instantiating a Thread instance and then passing itself in using a reference to code that needs to run on its own separate from other threads that are currently running. It is just like how you multitask. 
You can listen to music while you type an email. The code just executes in tandem. Does that make sense? Hi… I’m confused as to how I’m going to do this. I am using notepad, since it is what’s required to us. Am i supposed to just put everything into 1 file and save it as LessonSeventeen.java? I noticed on your tut that you have 3 separate “files” Thanks Save them as 3 separate files. Then execute LessonSeventeen which contains the main function. If you save them in the same directory they’ll find each other Damn it! Got error because I typed starTime XD Have to be really careful.. THANK YOU FOR ALWAYS RESPONDING! 🙂 Keep at it and you’ll get it. Little errors are part of the learning process. I respond to every comment, but some times it takes a little time 🙂 I know.. Should’ve seen your site/YT tut 2 mos ago.. now I only have 1 week to prepare.. Good Luck to me! XD Thanks again.. 🙂 Good luck! If you just need to understand the core java language, I cover it completely. I hope I can help 🙂 Thanks again sir for this video, actually i was working on some examples after watching this video and i am stuck here: NewThread() { // Create a new, second thread t = new Thread(this, “Demo Thread”); System.out.println(“Child thread: ” + t); t.start(); // Start the thread } it is actually a part a of program and it produces an output of Child thread: Thread[Demo Thread,5,main] on its first line, i actually understood the program but i did not understand this output. what does Thread,these[],5 mean in it , plz clearify. i guess its threadname, priority,but does not get it why main is displayed , plz make it clear main is a thread just like any other. A thread is just a series of statements that execute. Since main is just a series of statements as well it is a thread. I hope that makes sense Let’s see if I have understood… Runnable(interface) have all the methods that I need to creat a Thread(class)… So if a class A implements Runnable, I have everything that I need to create a Thread, using A as parameter for the Thread constructor? Does it make sense? (I like when you say “I hope that makes sense”) Congratulations, your work is great. Sorry my english, it is not that good. Robinson Yes you have it. Thank you very much for the nice compliments. I do my best I am getting the following error after running the above code in eclipse : Exception in thread “Thread-0” java.lang.IllegalArgumentException: Cannot format given Object as a Date at java.text.DateFormat.format(Unknown Source) at java.text.Format.format(Unknown Source) at GetTime20.run(GetTime20.java:63) Can you please let me know the reason for this exception? Thanks in advance Please check the code I provide on my site. There is probably a little typo some place. I’m sure the code on my site works Hello Derek I’m trying to excute this exact same code but getting an error in LessonSeventeen Class Thread getTime = GetTime20(); is undefined for the type LessonSeventeen. I have created GetTime20.java and defined everything in there for somehow its not connecting with file LessonSeventeen.java Do you have both classes in the same folder? Are you using Eclipse like I am? Hello Derek, I have a question,how can I see Runnable interface? I’m not sure what you mean? You’re really a great teacher. Not only are the videos amazing but after coming to your website and seeing the code published with comments — it’s priceless. Thank you for sharing your knowledge and teaching us. I had you on YouTube as a subscriber for a very long time. 
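For reference, the Runnable interface lives in java.lang and declares a single method, which is all a class has to supply when it implements it:
public interface Runnable {
    void run();
}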
I am super happy to have found this website and the rest of Derek Banas’ amazing tutorials. Thank you — truly for every minute, hour, day, you spend to do this for the rest of the world. This is my new domain. Thank you for all the very nice compliments 🙂 I’m very happy that you enjoy my site
http://www.newthinktank.com/2012/02/java-video-tutorial-17/
CC-MAIN-2019-30
en
refinedweb
. Other possible titles: Over on the Claims Based Identity Blog they made an announcement that they have stopped development of CardSpace v2. CardSpace was an excellent technology, but nobody used it. Some of us saw the writing on the wall when Microsoft paused development last year, and kept quiet about why. For better or for worse, Microsoft stopped development and moved on to a different technology affectionately called U-Prove.. So what exactly does this mean? Central to U-Prove is something called an Agent:. Alright, what does that really mean? Short answer: it’s kind of like CardSpace, except you—the developer—manage the application that controls the flow of claims from IdP to RP. The goal is to enable stronger control of the release of private data to relying parties. For more information check out the FAQ section on Connect.. One of the projects I’ve been working on for the last couple months has a requirement to aggregate a set of claims from multiple data sources for an identity and return the collection. It all seems pretty straightforward as long as you know what the data sources are at development time as well as how you want to transform the data to claims. In the real world though, chances are you will need to modify how that transformation happens or modify the data sources in some way. There are lots of ways this can be accomplished, and I’m going to look at how you can do it with the Managed Extensibility Framework (MEF). Whenever I think of MEF, this is the best way I can describe how it works: MEF being the magical part. In actual fact, it is pretty straightforward how the underlying pieces work, but here is the sales bit:. The architecture of it can be explained on the Codeplex site: The composition container is designed to discover ComposablePart’s that have Export attributes, and assign these Parts to an object with an Import attribute. Think of it this way (this is just one possible way it could work). Let’s say I have a bunch of classes that are plugins for some system. I will attach an Export attribute to each of those classes. Then within the system itself I have a class that manages these plugins. That class will contain an object that is a collection of the plugin class type, and it will have an attribute of ImportMany. Within this manager class is some code that will discover the Exported classes, and generate a collection of them instantiated. You can then iterate through the collection and do something with those plugins. Some code might help. First, we need something to tie the Import/Export attributes together. For a plugin-type situation I prefer to use an interface. namespace PluginInterfaces { public interface IPlugin { public string PlugInName { get; set; } } } Then we need to create a plugin. using PluginInterfaces; namespace SomePlugin { class MyAwesomePlugin : IPlugin { public string PlugInName { get { return "Steve is Awesome!"; } set { } } }; } Then we need to actually Export the plugin. Notice the namespace addition. The namespace can be found in the System.ComponentModel.Composition assembly in .NET 4. using PluginInterfaces; using System.ComponentModel.Composition; namespace SomePlugin { [Export(typeof(IPlugin))] class MyAwesomePlugin : IPlugin { public string PlugInName { get { return "Steve is Awesome!"; } set { } } }; } The [Export(typeof(IPlugin))] is a way of tying the Export to the Import. Importing the plugin’s requires a little bit more code. 
First we need to create a collection to import into: [ImportMany(typeof(IPlugin))] List<IPlugin> plugins = new List<IPlugin>(); Notice the typeof(IPlugin). Next we need to compose the pieces: using (DirectoryCatalog catalog = new DirectoryCatalog(pathToPluginDlls)) using (CompositionContainer container = new CompositionContainer(catalog)) { container.ComposeParts(this); } The ComposeParts() method is looking at the passed object and finds anything with the Import or ImportMany attributes and then looks into the DirectoryCatalog to find any classes with the Export attribute, and then tries to tie everything together based on the typeof(IPlugin). At this point we should now have a collection of plugins that we could iterate through and do whatever we want with each plugin. So what does that have to do with Claims? If you continue down the Claims Model path, eventually you will get tired of having to modify the STS every time you wanted to change what data is returned from the RST (Request for Security Token). Imagine if you could create a plugin model that all you had to do was create a new plugin for any new data source, or all you had to do was modify the plugins instead of the STS itself. You could even build a transformation engine similar to Active Directory Federation Services and create a DSL that is executed at runtime. It would make for simpler deployment, that’s for sure. And what about Parallelization? If you have a large collection of plugins, it may be beneficial to run some things in parallel, such as a GetClaims([identity]) type call. Using the Parallel libraries within .NET 4, you could very easily do something like: Parallel.ForEach<IPlugin>(plugins, (plugin) => { plugin.GetClaims(identity); }); The basic idea for this method is to take a collection, and do an action on each item in the collection, potentially in parallel. The ForEach method is described as: ForEach<TSource>(IEnumerable<TSource> source, Action<TSource> action) When everything is all said and done, you now have a basic parallelized plugin model for your Security Token Service. Pretty cool, I think..
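Putting the pieces together, a rough sketch of what such a claims plugin contract and its MEF-composed aggregator could look like (the interface, class, and method names here are invented for illustration, not taken from an actual STS):
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Threading.Tasks;

// Hypothetical plugin contract: each data source turns an identity into claims.
public interface IClaimsPlugin
{
    IDictionary<string, string> GetClaims(string identityName);
}

[Export(typeof(IClaimsPlugin))]
public class HrDatabaseClaimsPlugin : IClaimsPlugin
{
    public IDictionary<string, string> GetClaims(string identityName)
    {
        // Look up the identity in some data source and map rows to claims.
        return new Dictionary<string, string> { { "department", "Finance" } };
    }
}

public class ClaimsAggregator
{
    [ImportMany(typeof(IClaimsPlugin))]
    private List<IClaimsPlugin> plugins = new List<IClaimsPlugin>();

    public ClaimsAggregator(string pathToPluginDlls)
    {
        // Discover every exported IClaimsPlugin in the plugin directory.
        using (var catalog = new DirectoryCatalog(pathToPluginDlls))
        using (var container = new CompositionContainer(catalog))
        {
            container.ComposeParts(this);
        }
    }

    public IDictionary<string, string> Aggregate(string identityName)
    {
        var claims = new Dictionary<string, string>();
        // Query each data source in parallel and merge the results.
        Parallel.ForEach(plugins, plugin =>
        {
            foreach (var claim in plugin.GetClaims(identityName))
            {
                lock (claims) { claims[claim.Key] = claim.Value; }
            }
        });
        return claims;
    }
}
Each new data source then becomes one more exported class dropped into the plugin directory, with no change to the STS itself.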
https://blogs.objectsharp.com/?tag=/Claims
CC-MAIN-2019-30
en
refinedweb
import java.util.Map;
import java.util.WeakHashMap;

/**
 * CacheStorage implementation which is backed by a WeakHashMap.
 *
 * @author Anthony Eden
 */
public class HashMapCacheStorage extends AbstractCacheStorage {

    private Map cache = new WeakHashMap();

    /**
     * Get the specified cached object. This method may return null if the object is not in the cache.
     *
     * @param key The key
     * @return The value or null
     */
    public Object get(Object key) {
        return cache.get(key);
    }

    /**
     * Put a value into the cache store using the specified key.
     *
     * @param key The key
     * @param value The value
     */
    public void put(Object key, Object value) {
        cache.put(key, value);
    }

    /**
     * Remove the value for the specified key.
     *
     * @param key The key
     */
    public void remove(Object key) {
        cache.remove(key);
    }

    /**
     * Remove all items from the cache storage.
     */
    public void clear() {
        cache.clear();
    }

}
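A quick usage sketch (it assumes the rest of the jpublish jar, including AbstractCacheStorage, is on the classpath; the key and value below are made up). Because the backing map is a WeakHashMap, an entry can disappear once nothing else holds a strong reference to its key.
import org.jpublish.cache.HashMapCacheStorage;

public class CacheStorageDemo {
    public static void main(String[] args) {
        HashMapCacheStorage storage = new HashMapCacheStorage();

        String key = "page:/index.html";           // hypothetical cache key
        storage.put(key, "<html>rendered</html>"); // hypothetical cached value

        // May return null later: WeakHashMap drops an entry once its key
        // is no longer strongly referenced anywhere else.
        Object cached = storage.get(key);
        System.out.println(cached);

        storage.clear();
    }
}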
http://kickjava.com/src/org/jpublish/cache/HashMapCacheStorage.java.htm
CC-MAIN-2017-17
en
refinedweb
remysharp's Recent Snippets Tagged 'namespace'
JavaScript textmate namespace saved by 1 person JavaScript namespaces posted on February 4, 2008 by remysharp
http://snipplr.com/users/remysharp/tags/namespace/
CC-MAIN-2017-17
en
refinedweb
SBT Native Packager 1.2.0
Native Packager 1.2.0 will come with a bunch of new features and some necessary changes. Some of these features are already implemented in the current milestone (1.2.0-M5). Feel free to try them and provide feedback in our gitter channel or open an issue if you find a bug. The features are in no particular order.
Systemloaders are AutoPlugins now
Previously, the Java Server Application Archetype provided a setting serverLoading where you could define your system loader like this: import com.typesafe.sbt.packager.archetypes.ServerLoader serverLoading in Rpm := ServerLoader.Upstart This adds the necessary configuration files and maintainer scripts (postinst, postun, etc.) in order to register and start your application. The biggest problem with the tight coupling between the server archetype and the systemloaders is that it's hard to add systemloader-specific settings without changing a lot in the server archetype. It's also a lot harder to reason about the code and the output. With extra systemloader plugins we open the possibility to - easily extend a single systemloader - have a place to put generic systemloader functionality (there is a SystemLoaderPlugin which takes care of common settings) - test systemloaders in isolation - better developer experience You enable a systemloader by enabling a concrete systemloader plugin: enablePlugins(SystemdPlugin) For the complete discussion you can take a look at the pull request.
Single Project — Multiple Apps
A major pain point for beginners is the start script creation. The bash and bat start scripts are only generated when there is either - Exactly one main class - An explicitly set main class with mainClass in Compile := Some("com.example.MainClass") For 1.2.x we will extend the implementation and support multiple main classes by default. Native packager will generate a start script for each main class found on the classpath. SBT provides them via the discoveredMainClasses in Compile task. If there is only one main class, SBT will assign it to the mainClass in Compile setting. This leads to three cases: - Exactly one main class. In this case native-packager will behave like previous versions and just generate a single start script, using the executableScriptName setting for the script name. - Multiple main classes and mainClass in Compile := None. This is the default behaviour defined by SBT. In this case native-packager will generate the same start script for each main class. - Multiple main classes and mainClass in Compile := Some(…). The user has set a specific main class, which will lead to a main start script being generated using the executableScriptName setting. For all other main classes native-packager generates forwarder scripts.
Stage task
SBT comes with a stage task. Its purpose is to prepare everything needed to build a package. As an example, docker:stage creates a directory with a Dockerfile and all files included in the docker image. Now you could execute the docker build tool yourself. Currently some package formats have their own tasks to perform exactly this task, e.g. debianExplodedPackage. With 1.2.x native-packager will provide a consistent behaviour throughout all package formats.
Maintainerscripts
In 1.1.0 we added a maintainerScripts setting to provide a single data structure for lifecycle / maintainer scripts supported by a package format. This includes rpm scriptlets or debian postun, etc. The 1.2.0 release series will be the last series supporting the old custom settings.
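As a rough illustration of the maintainerScripts shape (a sketch only; the exact constants and helper names may differ from what 1.2.x finally ships):
// build.sbt fragment; DebianConstants and maintainerScriptsAppend are
// assumed to come from the Debian plugin's auto-imports.
import DebianConstants._

maintainerScripts in Debian := maintainerScriptsAppend((maintainerScripts in Debian).value)(
  Postinst -> "echo 'package installed'"
)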
Check out the migration guide in the release description.
Schedule?
I would love to announce a day, week, or month when we will have finished implementing all this. Unfortunately I can't. Native-packager is developed in the free time of the core maintainers and by more than 120 other contributors (work or free time). This makes planning almost impossible. So if you want to make this happen faster, come and join the native-packager contributors :)
https://medium.com/@muuki88/sbt-native-packager-1-2-0-a93e305888c5
CC-MAIN-2017-17
en
refinedweb